Real Analysis: Comprehensive Notes

1. Introduction to Real Analysis

Real Analysis is the branch of mathematics dealing with the real numbers and the functions defined on sets of real numbers. It involves the study of properties like limits, continuity, derivatives, and integrals at a rigorous level.

Throughout these notes, we'll focus not just on understanding the concepts but also on different approaches to solving Real Analysis problems.

1.1. Sets and Set Operations

Definition: Set

A set is a well-defined collection of distinct objects. In Real Analysis, we typically work with sets of real numbers.

Example: Different Types of Sets

1. \(\mathbb{N} = \{1, 2, 3, ...\}\) (Natural numbers)

2. \(\mathbb{Z} = \{..., -2, -1, 0, 1, 2, ...\}\) (Integers)

3. \(\mathbb{Q} = \{\frac{a}{b} : a, b \in \mathbb{Z}, b \neq 0\}\) (Rational numbers)

4. \(\mathbb{R}\) (Real numbers)

5. \([a, b] = \{x \in \mathbb{R} : a \leq x \leq b\}\) (Closed interval)

6. \((a, b) = \{x \in \mathbb{R} : a < x < b\}\) (Open interval)

Example: Set Operations

Let \(A = \{1, 2, 3\}\) and \(B = \{2, 3, 4\}\).

1. Union: \(A \cup B = \{1, 2, 3, 4\}\)

2. Intersection: \(A \cap B = \{2, 3\}\)

3. Difference: \(A \setminus B = \{1\}\)

4. Complement: If the universal set is \(\mathbb{Z}\), then \(A^c = \mathbb{Z} \setminus A\)

1.2. The Real Number System

Definition: Completeness of Real Numbers

The real number system \(\mathbb{R}\) is complete, meaning every non-empty set of real numbers that is bounded above has a least upper bound (supremum).

Theorem: Archimedean Property

For any real number \(x\), there exists a natural number \(n\) such that \(n > x\).

Example: Finding Supremum and Infimum

Consider the set \(S = \{1 - \frac{1}{n} : n \in \mathbb{N}\}\).

To find the supremum (least upper bound):

As \(n\) increases, \(1 - \frac{1}{n}\) approaches 1 from below:

\(1 - \frac{1}{1} = 0\)

\(1 - \frac{1}{2} = 0.5\)

\(1 - \frac{1}{3} \approx 0.67\)

Since \(1 - \frac{1}{n} < 1\) for all \(n\), but can get arbitrarily close to 1, we have \(\sup(S) = 1\).

For the infimum (greatest lower bound), note that \(1 - \frac{1}{n}\) is smallest when \(n = 1\), giving \(\inf(S) = 0\).
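
As a quick numerical illustration (a minimal Python sketch, not part of the proof), we can tabulate a few elements of \(S\) and watch them increase toward 1 without ever reaching it:

```python
# Numerical illustration of S = {1 - 1/n : n in N}:
# the elements increase toward 1 (the supremum); the smallest element is 0 (the infimum).
elements = [1 - 1/n for n in range(1, 11)]
print(elements)          # [0.0, 0.5, 0.666..., ..., 0.9]
print(min(elements))     # 0.0 -> inf(S), attained at n = 1
print(max(elements))     # 0.9 -> grows toward 1 as n increases, but never equals 1
print(1 - 1/10**6)       # 0.999999 -> arbitrarily close to sup(S) = 1
```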

2. Sequences and Series

2.1. Convergence of Sequences

Definition: Sequence

A sequence is a function \(a: \mathbb{N} \rightarrow \mathbb{R}\), typically denoted as \(\{a_n\}_{n=1}^{\infty}\) or simply \(\{a_n\}\).

Definition: Convergence of a Sequence

A sequence \(\{a_n\}\) converges to a limit \(L\) if for every \(\epsilon > 0\), there exists an \(N \in \mathbb{N}\) such that for all \(n \geq N\), \(|a_n - L| < \epsilon\).

Example: Proving Convergence Using the Definition

Prove that the sequence \(a_n = \frac{1}{n}\) converges to 0.

We need to show that for any \(\epsilon > 0\), there exists \(N \in \mathbb{N}\) such that \(|a_n - 0| < \epsilon\) whenever \(n \geq N\).

Since \(|a_n - 0| = |\frac{1}{n}| = \frac{1}{n}\), we need to find \(N\) such that \(\frac{1}{n} < \epsilon\) for all \(n \geq N\).

This is equivalent to \(n > \frac{1}{\epsilon}\).

So, using the Archimedean Property, we can choose \(N = \lfloor \frac{1}{\epsilon} \rfloor + 1\), a natural number strictly greater than \(\frac{1}{\epsilon}\).

Then for all \(n \geq N\), we have \(n > \frac{1}{\epsilon}\), which means \(\frac{1}{n} < \epsilon\).

Therefore, \(a_n = \frac{1}{n}\) converges to 0.
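
The choice of \(N\) can be checked numerically. The following Python sketch (an illustration, not a proof; the sample values of \(\epsilon\) are arbitrary) picks \(N = \lfloor 1/\epsilon \rfloor + 1\) and verifies the inequality for a range of \(n \geq N\):

```python
import math

def check_convergence(eps, how_many=1000):
    """Verify |1/n - 0| < eps for the first `how_many` indices n >= N,
    where N = floor(1/eps) + 1 as in the proof above."""
    N = math.floor(1 / eps) + 1
    return N, all(abs(1 / n) < eps for n in range(N, N + how_many))

for eps in (0.1, 0.01, 0.001):
    N, ok = check_convergence(eps)
    print(f"eps={eps}: N={N}, inequality holds for sampled n >= N: {ok}")
```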

Theorem: Properties of Convergent Sequences

If \(\{a_n\}\) and \(\{b_n\}\) are convergent sequences with \(\lim a_n = L\) and \(\lim b_n = M\), then:

1. \(\lim (a_n + b_n) = L + M\)

2. \(\lim (a_n \cdot b_n) = L \cdot M\)

3. \(\lim (\alpha a_n) = \alpha L\) for any constant \(\alpha\)

4. If \(M \neq 0\), then \(\lim \frac{a_n}{b_n} = \frac{L}{M}\)

Example: Calculating Limits Using Algebraic Properties

Find \(\lim_{n \to \infty} \frac{3n^2 + 2n + 1}{n^2 + 5n}\)

First, divide both the numerator and denominator by \(n^2\) (the highest power):

\(\lim_{n \to \infty} \frac{3n^2 + 2n + 1}{n^2 + 5n} = \lim_{n \to \infty} \frac{3 + \frac{2}{n} + \frac{1}{n^2}}{1 + \frac{5}{n}}\)

As \(n \to \infty\), \(\frac{2}{n} \to 0\), \(\frac{1}{n^2} \to 0\), and \(\frac{5}{n} \to 0\)

Therefore, \(\lim_{n \to \infty} \frac{3n^2 + 2n + 1}{n^2 + 5n} = \frac{3 + 0 + 0}{1 + 0} = 3\)

2.2. Infinite Series

Definition: Infinite Series

An infinite series is the sum of the terms of an infinite sequence, denoted by \(\sum_{n=1}^{\infty} a_n\).

Definition: Convergence of Series

The series \(\sum_{n=1}^{\infty} a_n\) converges if the sequence of partial sums \(S_N = \sum_{n=1}^{N} a_n\) converges as \(N \to \infty\).

Example: Geometric Series

The geometric series \(\sum_{n=0}^{\infty} ar^n\) converges to \(\frac{a}{1-r}\) if \(|r| < 1\), and diverges if \(|r| \geq 1\) (provided \(a \neq 0\)).

The partial sum is \(S_N = a + ar + ar^2 + \ldots + ar^N\)

Multiplying by \(r\): \(rS_N = ar + ar^2 + \ldots + ar^N + ar^{N+1}\)

Subtracting: \(S_N - rS_N = a - ar^{N+1}\)

\(S_N(1-r) = a(1-r^{N+1})\)

\(S_N = a\frac{1-r^{N+1}}{1-r}\)

If \(|r| < 1\), then \(r^{N+1} \to 0\) as \(N \to \infty\)

Therefore, \(\lim_{N \to \infty} S_N = \frac{a}{1-r}\)
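
A short numerical check of the formula (illustrative only, with sample values \(a = 2\) and \(r = \frac{1}{2}\)): the partial sums approach \(\frac{a}{1-r} = 4\).

```python
# Partial sums S_N = a + ar + ... + ar^N approach a/(1-r) when |r| < 1.
a, r = 2.0, 0.5
closed_form = a / (1 - r)            # = 4.0
S = 0.0
for N in range(0, 51):
    S += a * r**N                    # add the term a*r^N
    if N in (5, 10, 20, 50):
        print(N, S, closed_form)     # S gets closer and closer to 4.0
```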

2.3. Convergence Tests

Theorem: The Comparison Test

If \(0 \leq a_n \leq b_n\) for all \(n \geq N\) (some fixed \(N\)):

1. If \(\sum b_n\) converges, then \(\sum a_n\) converges.

2. If \(\sum a_n\) diverges, then \(\sum b_n\) diverges.

Theorem: The Ratio Test

Let \(\sum a_n\) be a series with \(a_n \neq 0\) for all \(n\). If \(\lim_{n \to \infty} \left|\frac{a_{n+1}}{a_n}\right| = L\), then:

1. If \(L < 1\), the series converges absolutely.

2. If \(L > 1\) or \(L = \infty\), the series diverges.

3. If \(L = 1\), the test is inconclusive.

Example: Testing for Convergence Using the Ratio Test

Determine whether the series \(\sum_{n=1}^{\infty} \frac{n^2}{3^n}\) converges.

Let's apply the ratio test:

\(\lim_{n \to \infty} \left|\frac{a_{n+1}}{a_n}\right| = \lim_{n \to \infty} \left|\frac{\frac{(n+1)^2}{3^{n+1}}}{\frac{n^2}{3^n}}\right|\)

\(= \lim_{n \to \infty} \left|\frac{(n+1)^2}{3^{n+1}} \cdot \frac{3^n}{n^2}\right|\)

\(= \lim_{n \to \infty} \left|\frac{(n+1)^2}{3 \cdot n^2}\right|\)

\(= \lim_{n \to \infty} \frac{1}{3} \left|\frac{(n+1)^2}{n^2}\right|\)

\(= \lim_{n \to \infty} \frac{1}{3} \left|\frac{n^2 + 2n + 1}{n^2}\right|\)

\(= \lim_{n \to \infty} \frac{1}{3} \left|1 + \frac{2}{n} + \frac{1}{n^2}\right|\)

\(= \frac{1}{3} \cdot 1 = \frac{1}{3}\)

Since \(L = \frac{1}{3} < 1\), the series converges by the ratio test.
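
Numerically, the ratios \(\left|\frac{a_{n+1}}{a_n}\right|\) can be seen settling near \(\frac{1}{3}\), consistent with the limit computed above (a quick sketch, not a substitute for the limit argument):

```python
# Ratios a_{n+1}/a_n for a_n = n^2 / 3^n should approach 1/3.
def a(n):
    return n**2 / 3**n

for n in (1, 5, 10, 50, 100):
    print(n, a(n + 1) / a(n))   # tends to 0.333... as n grows
```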

Theorem: The Root Test

For a series \(\sum a_n\), if \(\lim_{n \to \infty} \sqrt[n]{|a_n|} = L\), then:

1. If \(L < 1\), the series converges absolutely.

2. If \(L > 1\) or \(L = \infty\), the series diverges.

3. If \(L = 1\), the test is inconclusive.

Example: Testing for Convergence Using Multiple Methods

Determine whether \(\sum_{n=1}^{\infty} \frac{n}{2^n}\) converges.

Method 1: Ratio Test

\(\lim_{n \to \infty} \left|\frac{a_{n+1}}{a_n}\right| = \lim_{n \to \infty} \left|\frac{\frac{n+1}{2^{n+1}}}{\frac{n}{2^n}}\right|\)

\(= \lim_{n \to \infty} \left|\frac{n+1}{2^{n+1}} \cdot \frac{2^n}{n}\right|\)

\(= \lim_{n \to \infty} \left|\frac{n+1}{2n}\right|\)

\(= \lim_{n \to \infty} \frac{1}{2} \left|1 + \frac{1}{n}\right|\)

\(= \frac{1}{2} \cdot 1 = \frac{1}{2}\)

Since \(L = \frac{1}{2} < 1\), the series converges by the ratio test.

Method 2: Comparison with a Geometric Series

For every \(n \geq 1\) we have \(n \leq \left(\frac{3}{2}\right)^n\) (check \(n = 1, 2\) directly; for \(n \geq 2\), if \(n \leq (\frac{3}{2})^n\) then \(n + 1 \leq \frac{3}{2}n \leq (\frac{3}{2})^{n+1}\)).

Therefore, \(\frac{n}{2^n} \leq \frac{(3/2)^n}{2^n} = \left(\frac{3}{4}\right)^n\) for all \(n \geq 1\).

Since \(\sum_{n=1}^{\infty} \left(\frac{3}{4}\right)^n\) is a convergent geometric series (\(|r| = \frac{3}{4} < 1\)), the comparison test shows that \(\sum_{n=1}^{\infty} \frac{n}{2^n}\) converges.

Method 3: Root Test

\(\lim_{n \to \infty} \sqrt[n]{\left|\frac{n}{2^n}\right|} = \lim_{n \to \infty} \sqrt[n]{n} \cdot \sqrt[n]{\frac{1}{2^n}} = \lim_{n \to \infty} \sqrt[n]{n} \cdot \frac{1}{2}\)

Since \(\lim_{n \to \infty} \sqrt[n]{n} = 1\), we have \(\lim_{n \to \infty} \sqrt[n]{\left|\frac{n}{2^n}\right|} = \frac{1}{2}\)

Since \(L = \frac{1}{2} < 1\), the series converges by the root test.
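
All three methods predict convergence; a partial-sum computation (illustrative only) shows the sums stabilizing. In fact the value of this series is known to be 2.

```python
# Partial sums of sum_{n>=1} n / 2^n; the series converges (its value is 2).
S = 0.0
for n in range(1, 61):
    S += n / 2**n
    if n in (5, 10, 20, 60):
        print(n, S)    # approaches 2.0
```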

3. Limits and Continuity

3.1. Limits of Functions

Definition: Limit of a Function

We say \(\lim_{x \to a} f(x) = L\) if for every \(\epsilon > 0\), there exists a \(\delta > 0\) such that if \(0 < |x - a| < \delta\), then \(|f(x) - L| < \epsilon\).

Example: Proving a Limit Using the \(\epsilon\)-\(\delta\) Definition

Prove that \(\lim_{x \to 2} (3x - 1) = 5\).

We need to show that for any \(\epsilon > 0\), there exists a \(\delta > 0\) such that if \(0 < |x - 2| < \delta\), then \(|(3x - 1) - 5| < \epsilon\).

Simplifying: \(|(3x - 1) - 5| = |3x - 6| = 3|x - 2|\)

So we need \(3|x - 2| < \epsilon\), which is equivalent to \(|x - 2| < \frac{\epsilon}{3}\)

Therefore, for any \(\epsilon > 0\), we can choose \(\delta = \frac{\epsilon}{3}\).

Then, if \(0 < |x - 2| < \delta = \frac{\epsilon}{3}\), we have \(|(3x - 1) - 5| = 3|x - 2| < 3 \cdot \frac{\epsilon}{3} = \epsilon\)

This proves that \(\lim_{x \to 2} (3x - 1) = 5\).

Theorem: Properties of Limits

If \(\lim_{x \to a} f(x) = L\) and \(\lim_{x \to a} g(x) = M\), then:

1. \(\lim_{x \to a} [f(x) + g(x)] = L + M\)

2. \(\lim_{x \to a} [f(x) \cdot g(x)] = L \cdot M\)

3. \(\lim_{x \to a} [\alpha f(x)] = \alpha L\) for any constant \(\alpha\)

4. If \(M \neq 0\), then \(\lim_{x \to a} \frac{f(x)}{g(x)} = \frac{L}{M}\)

Example: Calculating a Limit Using L'Hôpital's Rule

Calculate \(\lim_{x \to 0} \frac{\sin x}{x}\).

Direct substitution gives \(\frac{\sin 0}{0}\), which is an indeterminate form \(\frac{0}{0}\).

Using L'Hôpital's rule, we differentiate both numerator and denominator:

\(\lim_{x \to 0} \frac{\sin x}{x} = \lim_{x \to 0} \frac{\frac{d}{dx}(\sin x)}{\frac{d}{dx}(x)} = \lim_{x \to 0} \frac{\cos x}{1} = \cos 0 = 1\)

Therefore, \(\lim_{x \to 0} \frac{\sin x}{x} = 1\).

(Strictly speaking, this argument is circular in a rigorous development, since computing the derivative of \(\sin x\) already uses this limit; in Real Analysis the limit is usually established independently, for example via the squeeze theorem using \(\cos x \leq \frac{\sin x}{x} \leq 1\) for \(x\) near 0.)
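
A quick numerical look at the quotient near 0 is consistent with the limit (a sketch for intuition, not a proof):

```python
import math

# sin(x)/x for x shrinking toward 0 approaches 1.
for x in (0.5, 0.1, 0.01, 0.001, 1e-6):
    print(x, math.sin(x) / x)   # values approach 1.0
```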

3.2. Continuity

Definition: Continuity at a Point

A function \(f\) is continuous at a point \(a\) if:

1. \(f(a)\) is defined

2. \(\lim_{x \to a} f(x)\) exists

3. \(\lim_{x \to a} f(x) = f(a)\)

Theorem: Intermediate Value Theorem

If \(f\) is continuous on the closed interval \([a, b]\) and \(f(a) \neq f(b)\), then for any value \(y\) between \(f(a)\) and \(f(b)\), there exists a point \(c \in (a, b)\) such that \(f(c) = y\).

Example: Applying the Intermediate Value Theorem

Prove that the equation \(x^3 - 4x + 1 = 0\) has a solution in the interval \([0, 1]\).

Let \(f(x) = x^3 - 4x + 1\).

\(f(0) = 0^3 - 4 \cdot 0 + 1 = 1\)

\(f(1) = 1^3 - 4 \cdot 1 + 1 = 1 - 4 + 1 = -2\)

Since \(f\) is a polynomial, it is continuous on \([0, 1]\).

We have \(f(0) = 1 > 0\) and \(f(1) = -2 < 0\).

By the Intermediate Value Theorem, there exists \(c \in (0, 1)\) such that \(f(c) = 0\).

Therefore, the equation \(x^3 - 4x + 1 = 0\) has a solution in the interval \([0, 1]\).
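
The IVT itself is non-constructive, but it underlies the bisection method: repeatedly halve an interval on which the function changes sign. Below is a minimal bisection sketch (the tolerance and helper names are illustrative choices) locating the root of \(x^3 - 4x + 1\) in \([0, 1]\):

```python
def f(x):
    return x**3 - 4*x + 1

def bisect(f, a, b, tol=1e-10):
    """Locate a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:        # sign change lies in [a, m]
            b = m
        else:                   # sign change lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2

root = bisect(f, 0, 1)
print(root, f(root))            # root ≈ 0.2541, f(root) ≈ 0
```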

3.3. Uniform Continuity

Definition: Uniform Continuity

A function \(f\) is uniformly continuous on a set \(E\) if for every \(\epsilon > 0\), there exists a \(\delta > 0\) such that for all \(x, y \in E\) with \(|x - y| < \delta\), we have \(|f(x) - f(y)| < \epsilon\).

Theorem: Uniform Continuity on Compact Sets

If \(f\) is continuous on a compact set \(K\), then \(f\) is uniformly continuous on \(K\).

Example: Proving a Function is Not Uniformly Continuous

Show that \(f(x) = x^2\) is not uniformly continuous on \(\mathbb{R}\).

To show \(f\) is not uniformly continuous, we need to find an \(\epsilon > 0\) such that for any \(\delta > 0\), there exist \(x, y \in \mathbb{R}\) with \(|x - y| < \delta\) but \(|f(x) - f(y)| \geq \epsilon\).

Let's choose \(\epsilon = 1\).

For any \(\delta > 0\), choose \(x = n\) and \(y = n + \frac{\delta}{2}\) for some sufficiently large \(n\).

Then \(|x - y| = |n - (n + \frac{\delta}{2})| = \frac{\delta}{2} < \delta\).

But \(|f(x) - f(y)| = |n^2 - (n + \frac{\delta}{2})^2|\)

\(= |n^2 - (n^2 + n\delta + \frac{\delta^2}{4})|\)

\(= |-(n\delta + \frac{\delta^2}{4})|\)

\(= n\delta + \frac{\delta^2}{4}\)

For \(n > \frac{1}{\delta} - \frac{\delta}{4}\), we have \(n\delta + \frac{\delta^2}{4} > 1 = \epsilon\).

Therefore, \(f(x) = x^2\) is not uniformly continuous on \(\mathbb{R}\).

4. Differentiation

4.1. The Derivative

Definition: Derivative

The derivative of a function \(f\) at a point \(a\) is defined as:

\(f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}\)

if this limit exists.

Example: Finding a Derivative from the Definition

Find the derivative of \(f(x) = x^2\) at \(x = a\) using the definition.

\(f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}\)

\(= \lim_{h \to 0} \frac{(a+h)^2 - a^2}{h}\)

\(= \lim_{h \to 0} \frac{a^2 + 2ah + h^2 - a^2}{h}\)

\(= \lim_{h \to 0} \frac{2ah + h^2}{h}\)

\(= \lim_{h \to 0} (2a + h)\)

\(= 2a\)

Therefore, \(f'(a) = 2a\).
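
Difference quotients with shrinking \(h\) illustrate the limit (a numerical sketch with the sample point \(a = 3\); floating-point error eventually dominates for extremely small \(h\)):

```python
# Difference quotient (f(a+h) - f(a)) / h for f(x) = x^2 at a = 3 approaches f'(3) = 6.
def f(x):
    return x**2

a = 3.0
for h in (0.1, 0.01, 0.001, 1e-6):
    print(h, (f(a + h) - f(a)) / h)   # 6.1, 6.01, 6.001, approximately 6.000001
```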

Theorem: Differentiability Implies Continuity

If a function \(f\) is differentiable at a point \(a\), then \(f\) is continuous at \(a\).

4.2. Mean Value Theorem

Theorem: Rolle's Theorem

If \(f\) is continuous on \([a, b]\), differentiable on \((a, b)\), and \(f(a) = f(b)\), then there exists a point \(c \in (a, b)\) such that \(f'(c) = 0\).

Theorem: Mean Value Theorem

If \(f\) is continuous on \([a, b]\) and differentiable on \((a, b)\), then there exists a point \(c \in (a, b)\) such that \(f'(c) = \frac{f(b) - f(a)}{b - a}\).

Example: Applying the Mean Value Theorem

Apply the Mean Value Theorem to \(f(x) = x^3\) on the interval \([1, 3]\).

\(f\) is a polynomial, so it's continuous on \([1, 3]\) and differentiable on \((1, 3)\).

By the Mean Value Theorem, there exists \(c \in (1, 3)\) such that:

\(f'(c) = \frac{f(3) - f(1)}{3 - 1}\)

\(3c^2 = \frac{3^3 - 1^3}{2}\)

\(3c^2 = \frac{27 - 1}{2} = 13\)

\(c^2 = \frac{13}{3}\)

\(c = \sqrt{\frac{13}{3}} \approx 2.08\)

Since \(c \in (1, 3)\), we confirm that the Mean Value Theorem is satisfied.
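
The arithmetic above can be double-checked with a trivial numerical sketch:

```python
import math

# Mean Value Theorem check for f(x) = x^3 on [1, 3]:
# the average slope (f(3) - f(1)) / (3 - 1) should equal f'(c) = 3c^2 at c = sqrt(13/3).
def f(x):
    return x**3

avg_slope = (f(3) - f(1)) / (3 - 1)       # = 13.0
c = math.sqrt(13 / 3)                     # ≈ 2.0817
print(avg_slope, 3 * c**2, 1 < c < 3)     # 13.0, ≈13.0, True
```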

4.3. Taylor's Theorem

Theorem: Taylor's Theorem

If \(f\) is \(n+1\) times differentiable on an interval containing \(a\) and \(x\), then:

\(f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k + R_n(x)\)

where \(R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x-a)^{n+1}\) for some \(\xi\) between \(a\) and \(x\).

Example: Taylor Series Expansion

Find the Taylor series of \(f(x) = e^x\) around \(a = 0\).

For \(f(x) = e^x\), we have \(f^{(k)}(x) = e^x\) for all \(k\).

At \(a = 0\), \(f^{(k)}(0) = e^0 = 1\) for all \(k\).

The Taylor series is given by:

\(e^x = \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!}x^k = \sum_{k=0}^{\infty} \frac{1}{k!}x^k = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots\)
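
Partial sums of this series converge quickly; the sketch below compares them with math.exp at the sample point \(x = 1\) (illustrative only):

```python
import math

def exp_taylor(x, terms):
    """Partial sum sum_{k=0}^{terms-1} x^k / k! of the Taylor series of e^x at 0."""
    return sum(x**k / math.factorial(k) for k in range(terms))

x = 1.0
for terms in (2, 4, 8, 16):
    print(terms, exp_taylor(x, terms), math.exp(x))   # partial sums approach e ≈ 2.71828
```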

Example: Error Estimation in Taylor Approximation

Approximate \(\sin(0.1)\) using the third-degree Taylor polynomial around \(a = 0\) and estimate the error.

For \(f(x) = \sin x\), we have:

\(f^{(0)}(x) = \sin x \Rightarrow f^{(0)}(0) = 0\)

\(f^{(1)}(x) = \cos x \Rightarrow f^{(1)}(0) = 1\)

\(f^{(2)}(x) = -\sin x \Rightarrow f^{(2)}(0) = 0\)

\(f^{(3)}(x) = -\cos x \Rightarrow f^{(3)}(0) = -1\)

\(f^{(4)}(x) = \sin x \Rightarrow |f^{(4)}(x)| \leq 1\) for all \(x\)

The third-degree Taylor polynomial is:

\(P_3(x) = f(0) + f'(0)x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 = 0 + x + 0 - \frac{x^3}{6} = x - \frac{x^3}{6}\)

So, \(\sin(0.1) \approx P_3(0.1) = 0.1 - \frac{(0.1)^3}{6} = 0.1 - \frac{0.001}{6} \approx 0.0998333\)

The error term is bounded by:

\(|R_3(0.1)| \leq \frac{\max|f^{(4)}(\xi)|}{4!}|0.1|^4 \leq \frac{1}{24} \cdot (0.1)^4 = \frac{10^{-4}}{24} \approx 4.167 \times 10^{-6}\)

Therefore, \(|\sin(0.1) - 0.0998333| \leq 4.167 \times 10^{-6}\).
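
The bound can be confirmed numerically against math.sin (a sketch; the actual error is even smaller than the Lagrange bound because the degree-4 term of the sine series vanishes):

```python
import math

x = 0.1
p3 = x - x**3 / 6                           # third-degree Taylor polynomial of sin at 0
actual_error = abs(math.sin(x) - p3)
lagrange_bound = x**4 / math.factorial(4)   # |R_3(x)| <= |x|^4 / 4!, since |f^(4)| <= 1
print(p3, actual_error, lagrange_bound)     # error ≈ 8.3e-8, bound ≈ 4.17e-6
```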

5. Integration

5.1. Riemann Integration

Definition: Riemann Sum

For a partition \(P = \{x_0, x_1, \ldots, x_n\}\) of \([a, b]\) and a selection of points \(\{t_1, t_2, \ldots, t_n\}\) with \(t_i \in [x_{i-1}, x_i]\), the Riemann sum is:

\(S(f, P) = \sum_{i=1}^{n} f(t_i) \Delta x_i\), where \(\Delta x_i = x_i - x_{i-1}\).

Definition: Riemann Integral

A function \(f\) is Riemann integrable on \([a, b]\) if there exists a value \(I\) such that for every \(\epsilon > 0\), there exists a \(\delta > 0\) such that for any partition \(P\) with mesh \(||P|| = \max_i \Delta x_i < \delta\) and any selection of points \(\{t_i\}\), we have \(|S(f, P) - I| < \epsilon\).

This value \(I\) is denoted by \(\int_a^b f(x)dx\).

Theorem: Integrability Criteria

If \(f\) is continuous on \([a, b]\), then \(f\) is Riemann integrable on \([a, b]\).

Example: Calculating a Riemann Integral

Calculate \(\int_0^1 x^2 dx\) using the definition of the Riemann integral.

Let's partition \([0, 1]\) into \(n\) equal subintervals: \(x_i = \frac{i}{n}\) for \(i = 0, 1, \ldots, n\).

\(\Delta x_i = \frac{1}{n}\) for all \(i\).

Let's choose \(t_i = \frac{i}{n}\) for each subinterval.

The Riemann sum is:

\(S(f, P) = \sum_{i=1}^{n} f(t_i) \Delta x_i = \sum_{i=1}^{n} \left(\frac{i}{n}\right)^2 \cdot \frac{1}{n} = \frac{1}{n^3} \sum_{i=1}^{n} i^2\)

Using the formula for the sum of squares: \(\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}\)

\(S(f, P) = \frac{1}{n^3} \cdot \frac{n(n+1)(2n+1)}{6} = \frac{(n+1)(2n+1)}{6n^2}\)

As \(n \to \infty\), we have \(\lim_{n \to \infty} S(f, P) = \lim_{n \to \infty} \frac{(n+1)(2n+1)}{6n^2} = \frac{1 \cdot 2}{6} = \frac{1}{3}\)

Therefore, \(\int_0^1 x^2 dx = \frac{1}{3}\).
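
The same computation can be run numerically: right-endpoint Riemann sums over finer and finer uniform partitions approach \(\frac{1}{3}\) (a minimal sketch mirroring the calculation above):

```python
# Right-endpoint Riemann sums for f(x) = x^2 on [0, 1] with n equal subintervals.
def riemann_sum(n):
    return sum((i / n)**2 * (1 / n) for i in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(n))     # approaches 1/3 = 0.3333...
```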

5.2. Fundamental Theorem of Calculus

Theorem: Fundamental Theorem of Calculus (Part 1)

If \(f\) is continuous on \([a, b]\) and \(F(x) = \int_a^x f(t)dt\), then \(F\) is differentiable on \((a, b)\) and \(F'(x) = f(x)\).

Theorem: Fundamental Theorem of Calculus (Part 2)

If \(f\) is continuous on \([a, b]\) and \(F\) is any antiderivative of \(f\) on \([a, b]\), then \(\int_a^b f(x)dx = F(b) - F(a)\).

Example: Application of the Fundamental Theorem

Evaluate \(\int_1^4 \frac{1}{x}dx\).

We know that \(F(x) = \ln|x|\) is an antiderivative of \(f(x) = \frac{1}{x}\).

By the Fundamental Theorem of Calculus (Part 2):

\(\int_1^4 \frac{1}{x}dx = F(4) - F(1) = \ln|4| - \ln|1| = \ln 4 - 0 = \ln 4 \approx 1.386\)

Example: Computing a Definite Integral

Calculate \(\int_0^{\pi/2} \sin^2 x dx\).

We know the identity \(\sin^2 x = \frac{1 - \cos 2x}{2}\).

\(\int_0^{\pi/2} \sin^2 x dx = \int_0^{\pi/2} \frac{1 - \cos 2x}{2} dx = \frac{1}{2} \int_0^{\pi/2} (1 - \cos 2x) dx\)

\(= \frac{1}{2} \left[ x - \frac{\sin 2x}{2} \right]_0^{\pi/2}\)

\(= \frac{1}{2} \left[ \frac{\pi}{2} - \frac{\sin \pi}{2} - (0 - \frac{\sin 0}{2}) \right]\)

\(= \frac{1}{2} \left[ \frac{\pi}{2} - 0 - 0 \right] = \frac{\pi}{4}\)

6. Metric Spaces

6.1. Basic Topology

Definition: Metric Space

A metric space is a pair \((X,d)\) where \(X\) is a set and \(d: X \times X \rightarrow \mathbb{R}\) is a function (called a metric) satisfying:

1. \(d(x,y) \geq 0\) for all \(x,y \in X\) (non-negativity)

2. \(d(x,y) = 0\) if and only if \(x = y\) (identity of indiscernibles)

3. \(d(x,y) = d(y,x)\) for all \(x,y \in X\) (symmetry)

4. \(d(x,z) \leq d(x,y) + d(y,z)\) for all \(x,y,z \in X\) (triangle inequality)

Example: Common Metrics

1. Euclidean metric on \(\mathbb{R}^n\):

\(d(x,y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}\)

2. Manhattan metric on \(\mathbb{R}^n\):

\(d(x,y) = \sum_{i=1}^{n} |x_i - y_i|\)

3. Maximum metric on \(\mathbb{R}^n\):

\(d(x,y) = \max_{1 \leq i \leq n} |x_i - y_i|\)

4. Discrete metric on any set \(X\):

\(d(x,y) = \begin{cases} 0, & \text{if } x = y \\ 1, & \text{if } x \neq y \end{cases}\)
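
These four metrics are easy to implement directly from their definitions; the sketch below (illustrative helper functions, not a library API) evaluates each on a pair of points in \(\mathbb{R}^2\):

```python
import math

# The four metrics above, written directly from their definitions.
def euclidean(x, y):
    return math.sqrt(sum((xi - yi)**2 for xi, yi in zip(x, y)))

def manhattan(x, y):
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

def maximum(x, y):
    return max(abs(xi - yi) for xi, yi in zip(x, y))

def discrete(x, y):
    return 0 if x == y else 1

p, q = (0.0, 0.0), (3.0, 4.0)
print(euclidean(p, q), manhattan(p, q), maximum(p, q), discrete(p, q))  # 5.0 7.0 4.0 1
```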

Definition: Open Ball

In a metric space \((X,d)\), the open ball centered at \(x \in X\) with radius \(r > 0\) is the set:

\(B(x,r) = \{y \in X : d(x,y) < r\}\)

Definition: Open Set

A subset \(U\) of a metric space \((X,d)\) is open if for each \(x \in U\), there exists \(r > 0\) such that \(B(x,r) \subseteq U\).

Definition: Closed Set

A subset \(F\) of a metric space \((X,d)\) is closed if its complement \(X \setminus F\) is open.

Example: Open and Closed Sets

In \(\mathbb{R}\) with the usual metric:

1. The interval \((a,b)\) is open.

2. The interval \([a,b]\) is closed.

3. The interval \([a,b)\) is neither open nor closed.

4. The sets \(\mathbb{R}\) and \(\emptyset\) are both open and closed (clopen).

6.2. Compactness

Definition: Open Cover

An open cover of a set \(A\) in a metric space \((X,d)\) is a collection of open sets \(\{U_\alpha\}_{\alpha \in I}\) such that \(A \subseteq \bigcup_{\alpha \in I} U_\alpha\).

Definition: Compact Set

A set \(K\) in a metric space \((X,d)\) is compact if every open cover of \(K\) has a finite subcover.

Theorem: Heine-Borel Theorem

A subset of \(\mathbb{R}^n\) is compact if and only if it is closed and bounded.

Example: Identifying Compact Sets

Which of the following sets are compact in \(\mathbb{R}\) with the usual metric?

1. \([0,1]\)

2. \((0,1)\)

3. \([0,\infty)\)

4. \(\{1/n : n \in \mathbb{N}\} \cup \{0\}\)

1. \([0,1]\) is closed and bounded, so it is compact by the Heine-Borel Theorem.

2. \((0,1)\) is not closed, so it is not compact.

3. \([0,\infty)\) is not bounded, so it is not compact.

4. \(\{1/n : n \in \mathbb{N}\} \cup \{0\}\) is bounded. It's also closed because any sequence in this set that converges must converge to a point in the set. Thus, it is compact.

6.3. Completeness

Definition: Cauchy Sequence

A sequence \(\{x_n\}\) in a metric space \((X,d)\) is Cauchy if for every \(\epsilon > 0\), there exists \(N \in \mathbb{N}\) such that for all \(m, n \geq N\), we have \(d(x_m, x_n) < \epsilon\).

Definition: Complete Metric Space

A metric space \((X,d)\) is complete if every Cauchy sequence in \(X\) converges to a point in \(X\).

Theorem: Completeness of \(\mathbb{R}\)

The real number system \(\mathbb{R}\) with the usual metric is complete.

Example: Incomplete Metric Space

Show that \(\mathbb{Q}\) with the usual metric is not complete.

Consider the sequence \(x_n = (1 + \frac{1}{n})^n\) in \(\mathbb{Q}\).

This sequence converges (in \(\mathbb{R}\)) to \(e\), which is irrational, while every term \(x_n\) is rational.

In particular, \(\{x_n\}\) is a Cauchy sequence, since every convergent sequence is Cauchy, and the Cauchy condition only involves distances between terms, so it holds in \(\mathbb{Q}\) as well.

Since this Cauchy sequence in \(\mathbb{Q}\) converges to a point not in \(\mathbb{Q}\), we conclude that \(\mathbb{Q}\) is not complete.
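
Numerically, the rational terms \((1 + \frac{1}{n})^n\) cluster ever more tightly (Cauchy behavior) around the irrational value \(e\) (a sketch for intuition only):

```python
import math

# Terms of the rational sequence (1 + 1/n)^n approach e, which is irrational.
for n in (10, 100, 1000, 10**6):
    x_n = (1 + 1/n)**n
    print(n, x_n, abs(x_n - math.e))   # the gap to e shrinks toward 0
```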

Theorem: Banach Fixed Point Theorem

Let \((X,d)\) be a complete metric space and \(T: X \rightarrow X\) be a contraction mapping (i.e., there exists \(0 \leq c < 1\) such that \(d(T(x), T(y)) \leq c \cdot d(x,y)\) for all \(x,y \in X\)). Then \(T\) has a unique fixed point in \(X\).

Example: Application of Banach Fixed Point Theorem

Show that the equation \(x = \cos(x)\) has a unique solution.

Define \(T(x) = \cos(x)\) on \(\mathbb{R}\).

For any \(x, y \in \mathbb{R}\), by the Mean Value Theorem:

\(|T(x) - T(y)| = |\cos(x) - \cos(y)| = |{-\sin(\xi)}| \cdot |x - y| \leq |x - y|\)

where \(\xi\) is between \(x\) and \(y\).

Since \(|\sin(\xi)| \leq 1\), we have \(|T(x) - T(y)| \leq |x - y|\).

This only shows that \(T\) is Lipschitz with constant 1, not that it is a contraction: \(T'(x) = -\sin(x)\), and \(|\sin(x)|\) can be arbitrarily close to 1, so no constant \(c < 1\) works for \(T\) on all of \(\mathbb{R}\).

Instead, apply \(T\) twice and consider \(T^2(x) = \cos(\cos(x))\).

By the chain rule, \((T^2)'(x) = -\sin(\cos(x)) \cdot (-\sin(x)) = \sin(\cos(x)) \cdot \sin(x)\).

Since \(|\sin(\cos(x))| \leq \sin(1) < 1\) and \(|\sin(x)| \leq 1\), we have \(|(T^2)'(x)| \leq \sin(1) < 1\).

Therefore, \(T^2\) is a contraction on \(\mathbb{R}\), and by the Banach Fixed Point Theorem, \(T^2\) has a unique fixed point \(p\).

Now suppose, for contradiction, that \(T(p) \neq p\). Then \(q = T(p)\) satisfies \(T^2(q) = T(T^2(p)) = T(p) = q\), so \(q\) is a fixed point of \(T^2\) with \(q \neq p\), contradicting uniqueness.

Therefore, \(T(p) = p\), meaning \(p = \cos(p)\), and this is the unique solution to the equation.
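
Fixed-point iteration makes the theorem concrete: starting from any real \(x_0\) and repeatedly applying \(\cos\) converges to the unique solution (approximately 0.739). A minimal sketch:

```python
import math

# Fixed-point iteration x_{k+1} = cos(x_k) for the unique solution of x = cos(x).
x = 0.0                       # any real starting point works
for _ in range(100):
    x = math.cos(x)
print(x, math.cos(x))         # both ≈ 0.7390851332, so x ≈ cos(x)
```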

7. Quiz Section

Question 1: What is the limit of \(\frac{\sin(3x)}{x}\) as \(x \to 0\)?
a) 0
b) 1
c) 3
d) Undefined

Answer: c) 3

We can rewrite the expression as:

\(\frac{\sin(3x)}{x} = 3 \cdot \frac{\sin(3x)}{3x}\)

Using the well-known limit \(\lim_{y \to 0} \frac{\sin y}{y} = 1\), with \(y = 3x\):

\(\lim_{x \to 0} \frac{\sin(3x)}{x} = 3 \cdot \lim_{x \to 0} \frac{\sin(3x)}{3x} = 3 \cdot 1 = 3\)

Question 2: Which of the following series converges?
a) \(\sum_{n=1}^{\infty} \frac{1}{n}\)
b) \(\sum_{n=1}^{\infty} \frac{1}{\sqrt{n}}\)
c) \(\sum_{n=1}^{\infty} \frac{1}{n^2}\)
d) \(\sum_{n=1}^{\infty} \frac{n}{n+1}\)

Answer: c) \(\sum_{n=1}^{\infty} \frac{1}{n^2}\)

a) This is the harmonic series, which diverges.

b) This is the \(p\)-series with \(p = 1/2 < 1\), which diverges.

c) This is the \(p\)-series with \(p = 2 > 1\), which converges. It's actually \(\frac{\pi^2}{6}\).

d) \(\frac{n}{n+1} = 1 - \frac{1}{n+1}\), so the series is \(\sum_{n=1}^{\infty} (1 - \frac{1}{n+1})\), which diverges.

Question 3: Which of the following sets is compact in \(\mathbb{R}^2\) with the Euclidean metric?
a) \(\{(x,y) : x^2 + y^2 < 1\}\)
b) \(\{(x,y) : x^2 + y^2 \leq 1\}\)
c) \(\{(x,y) : x^2 + y^2 > 1\}\)
d) \(\{(x,y) : xy = 1\}\)

Answer: b) \(\{(x,y) : x^2 + y^2 \leq 1\}\)

By the Heine-Borel theorem, a set in \(\mathbb{R}^2\) is compact if and only if it is closed and bounded.

a) This is the open unit disk, which is bounded but not closed.

b) This is the closed unit disk, which is both closed and bounded, hence compact.

c) This is the exterior of the closed unit disk, which is closed but not bounded.

d) This is the hyperbola, which is closed but not bounded.

Question 4: If \(f\) is differentiable on \([0,1]\) with \(f(0) = 2\) and \(f'(x) \leq 3\) for all \(x \in [0,1]\), what is the maximum possible value of \(f(1)\)?
a) 2
b) 3
c) 5
d) 6

Answer: c) 5

Using the Mean Value Theorem, we know that:

\(f(1) - f(0) = f'(\xi) \cdot (1 - 0)\) for some \(\xi \in (0,1)\)

Since \(f'(x) \leq 3\) for all \(x\), we have \(f(1) - f(0) \leq 3\)

This gives \(f(1) - 2 \leq 3\), so \(f(1) \leq 5\)

The maximum value of \(f(1)\) is achieved when \(f'(x) = 3\) for all \(x \in [0,1]\), giving \(f(1) = 5\).

Question 5: What is the sum of the series \(\sum_{n=0}^{\infty} \frac{2^n}{3^{n+1}}\)?
a) \(\frac{1}{3}\)
b) \(\frac{1}{2}\)
c) \(\frac{2}{3}\)
d) \(1\)

Answer: d) \(1\)

This is a geometric series with first term \(a = \frac{1}{3}\) and common ratio \(r = \frac{2}{3}\):

\(\sum_{n=0}^{\infty} \frac{2^n}{3^{n+1}} = \frac{1}{3} \sum_{n=0}^{\infty} \left(\frac{2}{3}\right)^n\)

Since \(|r| = \frac{2}{3} < 1\), the series converges, and its sum is:

\(\frac{1}{3} \cdot \frac{1}{1-\frac{2}{3}} = \frac{1}{3} \cdot 3 = 1\)
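
A quick partial-sum check (illustrative) agrees with the value 1:

```python
# Partial sums of sum_{n>=0} 2^n / 3^(n+1) approach 1.
S = 0.0
for n in range(0, 200):
    S += 2**n / 3**(n + 1)
print(S)    # ≈ 1.0
```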
