Convergence Types

Let $X_n$ be a sequence of random variables on a given probability space.

(a) Definition: (almost-sure convergence)

As $n\to \infty$: $$X_n \to X \ \ \ \ \text{ (almost surely)}$$ if $X$ is a random variable on the same probability space and $\mathbf{P}[X_n \to_{n\to \infty} X] = 1$.

(b) Convergence in Probability

Informally: for large $n$, $X_n$ is arbitrarily close to $X$ with probability close to $1$.

In other words, $\forall \epsilon>0$, $\displaystyle \mathbf{P}[|X_n-X|\le \epsilon] \to_{n\to \infty} 1$

(c) Convergence in Distribution

Here, the $X_n$'s and $X$ do not need to share any probability space. We are only concerned with their CDFs: $F_n$ and $F$.

We say $X_n \to X$ in distribution, if $\forall x ~:~ F_n(x) \to_{n\to \infty} F(x)$

  • Small caveat: if $F$ is not a continuous function, then at any point $x_0$ where $F$ is not continuous, $F_n(x_0)$ does not need to converge to $F(x_0)$
Example:

$X_n \to \text{constant } C$ in distribution if and only if $\displaystyle \left\{\begin{array}{lcl}F_n(x) \to 0 & \text{for} & x<C\\F_n(x) \to 1 & \text{for} & x > C\end{array}\right.$

(d) Convergence in $L^p(\Omega)$

  • Note: $L^p(\Omega)$ is the set of all random variables $X$ on $\Omega$ such that $\mathbf{E}[|X|^p] < \infty$

We say $\displaystyle X_n \to^{\text{in }L^p}_{n\to \infty} X$ if $\mathbf{E}[|X_n - X|^p] \to 0$

  • What is convergence in mean square? (answer): it is the case $p=2$, i.e., $L^2(\Omega)$-convergence

Interesting Properties

  • Which convergences are weaker, which ones are stronger?

    • Using Chebyshev's inequality, we can prove that $(d) \Rightarrow (b)$ (a short sketch follows this list)

    • It is also not hard to see that $(a) \Rightarrow (b) \Rightarrow (c)$

    • In general, $(a)$ and $(d)$ are not comparable.

    • However, under additional assumptions, we can compare them. For example:

      • (Dominated Convergence Theorem)
        Let $X_n$ be such that there exists a random variable $Y$ with $|X_n|\le Y$ for all $n$ and $\mathbf{E}[Y]<\infty$.
        If $X_n \to X$ almost surely, then $X_n \to^{L^1} X$, i.e., under the assumption above: $(a) \Rightarrow (d)$ (with $p=1$)

      There are also classes of sequences where almost surely .............

    • Note: $(c) \Rightarrow (b)$ when $X$ is a constant.
      Therefore, when the limit is a constant, $(b)$ and $(c)$ are equivalent.
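
A one-line sketch of the $(d) \Rightarrow (b)$ implication mentioned above (a Chebyshev/Markov-type bound): for any $\epsilon > 0$,

$$\mathbf{P}[|X_n - X| > \epsilon] \le \frac{\mathbf{E}[|X_n - X|^p]}{\epsilon^p} \to_{n\to \infty} 0.$$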

Theorem:

Recall the notation $M_X$ for the moment generating function (MGF) of $X$.

If $M_{X_n}(t) \to M_X(t)$ for all $t$ in a neighborhood of $0$, then $X_n\to_{n\to \infty} X$ in distribution.

Example:

Let $X_n=\text{# successes in }n\text{ Bernoulli trials}$ with success probability $\displaystyle p=\frac{\lambda}{n}$.

Therefore, $\displaystyle X_n\sim Binom(n, \frac{\lambda}{n})$

Therefore, $\displaystyle M_{X_n}(t) = \left(1 - p +pe^t\right)^n = \left(1 + \frac{\lambda}{n} (e^t-1)\right)^n$

Then, use the following fact: $\displaystyle \lim_{n\to \infty} \left(1 + \frac{x}{n}\right)^n = e^x$

Therefore, with $x = \lambda (e^t-1)$ we get $\displaystyle M_{X_n}(t) \to_{n\to \infty} e^{\lambda(e^t - 1)}$

We recognize that this is the MGF of the $Poisson(\lambda)$ distribution, so $X_n \to Poisson(\lambda)$ in distribution by the theorem above.
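
A quick numerical check of this limit (a sketch; the choice $\lambda = 2$ and the range of $k$ are arbitrary, not from the notes): as $n$ grows, the $Binom(n, \frac{\lambda}{n})$ PMF should get close to the $Poisson(\lambda)$ PMF.

```python
import numpy as np
from scipy.stats import binom, poisson

lam = 2.0              # hypothetical rate parameter
ks = np.arange(0, 10)  # compare the PMFs at k = 0, ..., 9

for n in [10, 100, 1000]:
    pmf_binom = binom.pmf(ks, n, lam / n)  # Binomial(n, lambda/n) PMF
    pmf_pois = poisson.pmf(ks, lam)        # Poisson(lambda) PMF
    max_diff = np.max(np.abs(pmf_binom - pmf_pois))
    print(f"n = {n:5d}: max |Binomial pmf - Poisson pmf| = {max_diff:.5f}")
```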

Example

Let $X_n\sim Unif(0,\frac{1}{n})$. Then $F_{X_n} \to F_{\text{the constant }0}$, i.e., $\displaystyle \left\{\begin{array}{lcl}F_{X_n}(x) \to 0 & \text{for} & x<0\\F_{X_n}(x) \to 1 & \text{for} & x > 0\end{array}\right.$ Indeed, for fixed $x>0$ we have $F_{X_n}(x)=1$ as soon as $\frac{1}{n}\le x$. At $x=0$ the limit CDF is discontinuous, so convergence is not required there (and indeed $F_{X_n}(0)=0$ for all $n$).
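
A small numerical illustration (a sketch; the evaluation points $x$ are arbitrary). The CDF of $Unif(0,\frac{1}{n})$ is $F_{X_n}(x) = \min(\max(nx, 0), 1)$.

```python
# CDF of Unif(0, 1/n): F_n(x) = min(max(n*x, 0), 1)
def F_unif(x, n):
    return min(max(n * x, 0.0), 1.0)

for x in [-0.1, 0.0, 0.001, 0.1]:  # hypothetical evaluation points
    print(f"x = {x:6.3f}: F_n(x) for n = 10, 100, 10000 ->",
          [F_unif(x, n) for n in (10, 100, 10_000)])
```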

Example:

Let $X_i$ for $i=1,\ldots,n$ be iid $\sim U(0,1)$.
Let $M_n = \max(X_1,\ldots,X_n)$
Let $m_n = \min(X_1,\ldots,X_n)$

Then $M_n \to 1$ in distribution, and $m_n \to 0$ in distribution.

  • Note: since they converge to a constant in distribution, $(c)$ and $(b)$ are equivalent. So they also converge in probability (a small simulation sketch follows below).
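
A minimal simulation sketch (the tolerance $\epsilon$ and the number of trials are arbitrary choices): estimate $\mathbf{P}[|M_n - 1|\le \epsilon]$ and $\mathbf{P}[|m_n - 0|\le \epsilon]$ for growing $n$.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.01          # hypothetical tolerance
n_trials = 10_000   # number of simulated samples per n

for n in [10, 100, 1000]:
    U = rng.uniform(0.0, 1.0, size=(n_trials, n))
    M = U.max(axis=1)  # M_n for each trial
    m = U.min(axis=1)  # m_n for each trial
    print(f"n = {n:4d}: "
          f"P[|M_n - 1| <= eps] ~ {np.mean(np.abs(M - 1.0) <= eps):.3f}, "
          f"P[|m_n - 0| <= eps] ~ {np.mean(np.abs(m) <= eps):.3f}")
```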

Theorem (fun little trick)

If $\mathbf{E}[X_n] \to_{n\to \infty} \text{a constant }C$ and $\mathbf{Var}[X_n]\to_{n\to \infty} 0$ then $X_n \to_{n\to \infty} C$ in probability.
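
A short sketch of why this works, via Chebyshev's inequality: for any $\epsilon > 0$,

$$\mathbf{P}[|X_n - C| > \epsilon] \le \frac{\mathbf{E}[(X_n - C)^2]}{\epsilon^2} = \frac{\mathbf{Var}[X_n] + (\mathbf{E}[X_n] - C)^2}{\epsilon^2} \to_{n\to \infty} 0.$$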

Theorem: Chebyshev's Weak Law of Large Numbers

Let $X_1, X_2, \ldots, X_n$ be random variables with common mean $\mu$, common variance $\sigma^2$, and zero covariance (pairwise uncorrelated).

Let $\bar{X}_n = \frac{X_1 + X_2 + \cdots + X_n}{n}$

Then, $\bar{X}_n \to_{n\to \infty} \mu$ in probability.

Proof: we proved this before using Chebyshev's inequality.
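
A minimal simulation sketch of this law (iid $Unif(0,1)$ samples, so $\mu = 0.5$; the tolerance and trial counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, eps = 0.5, 0.05  # mean of Uniform(0,1); hypothetical tolerance
n_trials = 2_000     # number of simulated sample means per n

for n in [10, 100, 1000, 5000]:
    X = rng.uniform(0.0, 1.0, size=(n_trials, n))
    Xbar = X.mean(axis=1)                     # sample mean of n observations
    prob = np.mean(np.abs(Xbar - mu) <= eps)  # estimate of P[|Xbar_n - mu| <= eps]
    print(f"n = {n:5d}: P[|Xbar_n - mu| <= eps] ~ {prob:.3f}")
```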

Theorem: Khinchine's Weak Law of Large Numbers

We don't need $\sigma^2$ to exist, but we assume the $X_i$'s are iid (with finite mean $\mu$). Then, the same conclusion as in the previous theorem holds.

Proof: see proof in Feller's 1950 probability textbook.


Example:

Let $X_n\sim Unif \{1,2,3,...,n\}$

Note that it is easy to see that $F_n(x) = \frac{\lfloor x \rfloor}{n} \to_{n\to \infty} 0$ for every fixed $x$.

This is in contrast to the definition of a CDF: the pointwise limit is identically $0$, which is not a valid CDF (it does not tend to $1$ as $x\to\infty$).

So, this $X_n$ does not converge in distribution.
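
A tiny numerical illustration (the evaluation points $x$ are arbitrary): the CDF of $Unif\{1,\ldots,n\}$ at any fixed $x$ eventually decreases toward $0$ as $n$ grows.

```python
import math

# CDF of Unif{1, ..., n}: F_n(x) = min(max(floor(x), 0), n) / n
def F_n(x, n):
    return min(max(math.floor(x), 0), n) / n

for x in [3.5, 50.0, 1000.0]:  # hypothetical evaluation points
    vals = [round(F_n(x, n), 5) for n in (10, 1_000, 100_000)]
    print(f"x = {x:7.1f}: F_n(x) for n = 10, 1e3, 1e5 -> {vals}")
```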

Example:

Let $U_i\sim Unif(0,1)$ and iid.

Let $M_n = \max(U_i :~ i=1,2,..,n)$

We know that $M_n\to 1$ in probability.

So that means $1 - M_n \to 0$ in probability.

Let $Y_n = n(1-M_n)$

We compute, for $0 \le y \le n$:

$$F_{Y_n}(y) = \mathbf{P}[n(1-M_n)\le y] = 1 - \mathbf{P}\left[M_n < 1 - \frac{y}{n}\right] = 1 - \left(1-\frac{y}{n}\right)^n \to_{n\to \infty} 1 - e^{-y},$$

which is the CDF of the $Exp(\lambda=1)$ distribution. So $Y_n \to Exp(1)$ in distribution.
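
A minimal simulation sketch to check this limit (the choices $n = 500$ and the number of trials are arbitrary): compare the empirical CDF of $Y_n = n(1 - M_n)$ with $1 - e^{-y}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_trials = 500, 10_000  # hypothetical sample size and number of trials

U = rng.uniform(0.0, 1.0, size=(n_trials, n))
Y = n * (1.0 - U.max(axis=1))  # Y_n = n(1 - M_n), one value per trial

for y in [0.5, 1.0, 2.0]:
    empirical = np.mean(Y <= y)  # empirical CDF of Y_n at y
    exact = 1.0 - np.exp(-y)     # Exp(1) CDF at y
    print(f"y = {y:.1f}: empirical F(y) ~ {empirical:.4f}, 1 - e^(-y) = {exact:.4f}")
```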
