Continuous Random Variables

  • Recall the PMF of a discrete random variable $X$: for any $x\in \mathbb{Z}$, $p_X(x) = \mathbf{P}[X=x]$.
    Then, we can write $$\mathbf{P}[n \le X \le m] = \sum_{i=n}^m p_X(i)$$

Definition:

We say that a random variable $X$ is continuous with density $f_X$ if, for all $a \le b$, $\displaystyle \mathbf{P}[a \le X \le b] = \int_a^b f_X(x)\, dx$

(This is the continuous analogue of the sum above: the integral is a limit of Riemann sums.)

In fact, symbolically: $\displaystyle \mathbf{P}[x\le X\le x+dx] = f_X(x) dx$

This shows that $f_X(x)$ by itself is not a probability, but rather some sort of probability intensity.

Definition:

If $X$ is a random variable, (discrete, continuous, or otherwise), we let $F_X(x) = \mathbf{P}[X\le x]$ for any $x\in \mathbb{R}$.
This $F_X(x)$ is called the cumulative distribution function of $X$.

Note: if $X$ has density $f_X$, then $\displaystyle F_X(x) = \int_{-\infty}^x f_X(y)\, dy$

This follows from the definition of $f_X$ with $a=-\infty$ and $b=x$.

Note: In this case, we also have $\displaystyle \frac{dF_X}{dx}(x) = f_X(x)$

This follows directly from the so-called "fundamental theorem of Calculus".

Some facts:

  • When $X$ has a density, $F_X$ is continuous.

  • $\displaystyle \int_{-\infty}^\infty f_X(x)\, dx = 1$

  • $f_X(x) \ge 0$

Properties:

  • $F_X$ is non-decreasing and increases from $0$ (at $-\infty$) to $1$ (at $+\infty$)

  • The limits $0$ and $1$ at $-\infty$ and $+\infty$ exist because of the monotone convergence of probabilities.

  • For $X$ with density $f_X$, changing the value of $f_X$ at a single point, or even at a countable collection of points, does not change the distribution of $X$.

Examples
  • Example 1: (Fundamental): Uniform density on $[0,1]$. $f_X(x) = \left\{\begin{array}{lr}1 & \text{if }x\in[0,1] \\ 0 & \text{otherwise}\end{array}\right.$

  • Example 2: $X$ with density $f_X(x) = \left\{\begin{array}{lr}constant & \text{if }x\in [0,\frac{1}{2}] \\ 0 & \text{otherwise}\end{array}\right.$
    We conclude that $\int_{-\infty}^\infty f_X(x) dx = 1 \ \ \Longrightarrow constant = 2$

  • Example 3: $X$ with density $f_X(x) = \left\{\begin{array}{lr}constant & \text{if }x\in [a,b] \\ 0 & \text{otherwise}\end{array}\right.$
    Then the constant is $\displaystyle constant = \frac{1}{b-a}$

  • Example 4: $X$ with density $\displaystyle f_X(x) = \left\{\begin{array}{lr}\displaystyle \frac{1}{2}x & \text{if }x\in [0,2] \\ 0 & \text{otherwise}\end{array}\right.$

  • Example 5: (fundamental) $X$ with density $\displaystyle f_X(x) = \left\{\begin{array}{lr}\displaystyle \lambda e^{-\lambda x} & \text{for }x > 0 \\ 0 & \text{otherwise}\end{array}\right.$
    where $\lambda>0$ is a fixed parameter; this is the exponential distribution, $X\sim Exp(\lambda)$.

    Note: the only difference between the above density and the Laplace density is that the Laplace density uses $|x|$, which makes it symmetric in $x$; as a result, it also needs a coefficient of $\frac{1}{2}$.

    Let's compute the CDF of $X$. For $x \ge 0$, using that $f_X(y) = 0$ for $y<0$: $$F_X(x) = \int_{-\infty}^x f_X(y)\,dy = \int_0^x \lambda e^{-\lambda y}\, dy = \left. -e^{-\lambda y}\right]_0^x = -e^{-\lambda x} + e^0 = 1 - e^{-\lambda x}$$

So we have proved that $F_X(x) = 1 - e^{-\lambda x}$ for $x\ge 0$

Waiting time interpretation

Suppose we have already waited time $a$ for an event to happen. What is the probability that we have to wait at least an extra time $h$ for the event to happen?

$$\mathbf{P}[X > a + h \mid X > a] = \frac{\mathbf{P}[X > a+h \text{ and } X>a]}{\mathbf{P}[X>a]} = \frac{\mathbf{P}[X > a+h]}{\mathbf{P}[X>a]} = \frac{e^{-\lambda (a+h)}}{e^{-\lambda a}} = e^{-\lambda h}$$

The result does not depend on $a$. This means the exponential distribution has the "memoryless" property.
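A quick Monte Carlo sanity check of this computation (a minimal sketch using numpy; the values of $\lambda$, $a$ and $h$ are arbitrary choices):

In [ ]:
import numpy as np

rng = np.random.default_rng(0)
lam, a, h = 1.5, 1.0, 0.7                         # arbitrary rate and times for this check
X = rng.exponential(scale=1/lam, size=1_000_000)  # samples of X ~ Exp(lam)

# estimate P[X > a+h | X > a] and compare with P[X > h] = e^{-lam*h}
cond = np.mean(X > a + h) / np.mean(X > a)
print(cond, np.exp(-lam * h))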

Comparison with Geometric Distribution

Recall: the geometric distribution describes how many trials we must wait to get the first success.

Let $Y\sim Geom(p)$ and consider the "survival function" $\mathbf{P}[Y > k]$ for integer $k$. Then $\displaystyle \mathbf{P}[Y>k]= \mathbf{P}[\text{first }k\text{ trials are failures}] = (1-p)^k=\displaystyle e^{\displaystyle -k\ln \left(\frac{1}{1-p}\right)}$

Compare with $\mathbf{P}[X > a] = e^{-\lambda a}$.

We see that this is actually the same formula: the value $\displaystyle \ln(\frac{1}{1-p})$ plays the role of $\lambda$ and therefore, $Y$ also has the memoryless property.

Remark: Essentially, if a continuous random variable $X$ has the memoryless property, then $X$ is exponential. Similarly, a discrete memoryless $Y$ must be geometric.

Other properties

  • Symmetry: we say $X$ is symmetric when $X$ and $-X$ have the same CDF (the same CDF, not a "symmetric" CDF)
    • Example: let $Y$ and $Y'$ be independent, both $\sim Expn(\lambda)$. Define $X=Y-Y'$ and $\tilde{X} = Y' - Y = -X$. Clearly, $X$ and $\tilde{X}$ have the same distribution, so $X$ is symmetric. Note: when a symmetric $X$ has a density, then $f_X(x) = f_X(-x)$, i.e. $f_X$ is even.

Quantiles

Definition:

Let $\alpha \in [0,1]$ and let $X$ be a random variable. If there exists a value $x_\alpha$ such that $\mathbf{P}[X \le x_\alpha] = \alpha$ (which is the case, for instance, when $F_X$ is continuous and strictly increasing), we say $x_\alpha$ is the $\alpha$-th quantile of $X$.
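For instance, for $X\sim Exp(\lambda)$ the defining equation $F_X(x_\alpha)=\alpha$ can be solved explicitly: $x_\alpha = -\frac{1}{\lambda}\ln(1-\alpha)$. A minimal numeric check (the values of $\lambda$ and $\alpha$ are arbitrary):

In [ ]:
import numpy as np

lam, alpha = 2.0, 0.9                        # arbitrary rate and quantile level
x_alpha = -np.log(1 - alpha) / lam           # solves 1 - exp(-lam * x) = alpha
print(x_alpha, 1 - np.exp(-lam * x_alpha))   # second value should equal alpha = 0.9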

Linear Transformation

Let $X$ have density $f_X$. Let $a>0$ and $b$ be constants, and let $Y=aX+b$. Then $Y$ has density $\displaystyle f_Y(y) = \frac{1}{a}\, f_X\!\left(\frac{y-b}{a}\right)$ (for general $a\ne 0$, replace $\frac{1}{a}$ by $\frac{1}{|a|}$).

It can be proven as follows (for $a>0$): $F_Y(y) = \mathbf{P}[aX+b \le y] = \mathbf{P}\left[\displaystyle X\le \frac{y-b}{a}\right] = F_X\!\left(\displaystyle \frac{y-b}{a}\right)$, by the definition of the CDF.

Therefore, $f_Y(y) = \frac{dF_Y}{dy} = \frac{1}{a} F'_X(\frac{y-b}{a}) = \frac{1}{a} f_X(\frac{y-b}{a})$
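To see the formula in action, here is a small sketch (using numpy and matplotlib; the constants $a>0$ and $b$ are arbitrary choices): a histogram of $Y=aX+b$ for $X\sim Unif[0,1]$ should match the flat density $\frac{1}{a}$ on $[b, a+b]$.

In [ ]:
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
a, b = 2.0, 1.0                              # arbitrary constants with a > 0
X = rng.uniform(0, 1, size=200_000)          # X ~ Unif[0,1], so f_X = 1 on [0,1]
Y = a * X + b                                # Y should have density 1/a on [b, a+b]

y = np.linspace(b, a + b, 200)
plt.hist(Y, bins=60, density=True, alpha=0.5)
plt.plot(y, np.full_like(y, 1/a), 'r-')      # predicted f_Y(y) = (1/a) f_X((y-b)/a)
plt.show()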

Expectations

Let $X$ have density $f_X$. Then, $\displaystyle \mathbf{E}[X] = \displaystyle \int_{-\infty}^{+\infty} x f_X(x) dx$

Same idea as discrete case.

Example:

$X \sim Unif[0,1]$. Then $\mathbf{E}[X] = \int_0^1 x\cdot 1\, dx = \left.\frac{1}{2}x^2\right]_0^1 = \frac{1}{2}(1 - 0) = \frac{1}{2}$

Property: Linearity works.

Example:

Let $Y\sim Unif[a,b]$. We claim that $Y$ has the same distribution as $(b-a)X + a$ where $X\sim Unif[0,1]$.

Therefore, $\mathbf{E}[Y] = (b-a)\times \mathbf{E}[X] + a = (b-a)\times\frac{1}{2} + a = \frac{a+b}{2}$

We have used the linearity of expectations.

Definition:

Let $X$ have density $f_X$, and let $g$ be a function. Then $\mathbf{E}[g(X)] =\displaystyle \int_{-\infty}^{\infty} g(x) f_X(x) dx$

Definition: variance

$$\mathbf{Var}[X] = \mathbf{E}[\displaystyle (X-\mathbf{E}[X])^2]$$

Property

$$\mathbf{Var}[aX + b] = a^2 \mathbf{Var}[X]$$
Example:

Let $Y\sim Unif[a,b]$. What is the variance of $Y$?

Recall that $\mathbf{E}[Y] = \frac{a+b}{2}$ and $\mathbf{Var}[Y] = \mathbf{Var}[(b-a)X + a] = (b-a)^2 \mathbf{Var}[X]$.

Now, $$\mathbf{Var}[X] = \mathbf{E}[X^2] - (\mathbf{E}[X])^2 $$

$\mathbf{E}[X^2] = \int_0^1 x^2 dx = \frac{1}{3}$

So, $\mathbf{Var}[X] = \frac{1}{3} - (\frac{1}{2})^2 = \frac{1}{12}$

Finally, $\mathbf{Var}[Y] = (b-a)^2 \mathbf{Var}[X] = \frac{(b-a)^2}{12}$

Compare this with the formula in the discrete uniform case on $\{1,2,\dots,N\}$: $\mathbf{Var} = \frac{N^2-1}{12}$.
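A Monte Carlo check of the continuous formula (a sketch; the endpoints $a$, $b$ are arbitrary choices):

In [ ]:
import numpy as np

rng = np.random.default_rng(0)
a, b = -1.0, 3.0                             # arbitrary interval endpoints
Y = rng.uniform(a, b, size=1_000_000)        # Y ~ Unif[a,b]
print(Y.mean(), (a + b) / 2)                 # sample mean vs (a+b)/2
print(Y.var(), (b - a)**2 / 12)              # sample variance vs (b-a)^2/12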

Review: integration by parts

$$\int_a^b u\ dv = \left.uv\right]_a^b - \int_a^b v du $$

where $du$ is the differential of $u$ and $dv$ is the differential of $v$.

Example:

Let $X\sim Expn(\lambda)$. What is $\mathbf{E}[X]$?

$\mathbf{E}[X] = \int_0^{+\infty}\lambda e^{-\lambda x} \ x\ dx $

Solve the integral by "integration by parts":

  • $u = x \Rightarrow du = dx$ and
  • $dv = \lambda e^{-\lambda x} dx \Rightarrow v = - e^{-\lambda x}$
$$\int_0^{\infty} x\, \lambda e^{-\lambda x}\, dx = \left.-x e^{-\lambda x}\right]_0^\infty - \int_0^\infty \left(-e^{-\lambda x}\right) dx = 0 + \int_0^\infty e^{-\lambda x}\, dx = \left.-\frac{1}{\lambda} e^{-\lambda x}\right]_0^\infty = \frac{1}{\lambda}$$

We have proved that for $X\sim Expon(\lambda)$, $\mathbf{E}[X] =\displaystyle \frac{1}{\lambda}$

This matches the intuition from Poisson processes: since the average number of arrivals of a Poisson process with rate $\lambda$ in a time interval of length $1$ is $\lambda$, it is natural that the average wait between events is $\frac{1}{\lambda}$.

Moments of $Expon(\lambda)$

$$X\sim Expon(\lambda) \ \ \ \ \Longrightarrow \ \ \ \ \ \mathbf{E}[X^p] = \frac{p!}{\lambda ^p} \quad \text{for integer } p \ge 0$$

Using the moments relation for exponential random variable, we can easily find the variance:

  • $p=2: \ \ \ \mathbf{E}[X^2] = \frac{2}{\lambda^2}$
  • $\mathbf{Var}[X] = \mathbf{E}[X^2] - (\mathbf{E}[X])^2 = \displaystyle \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}$
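A numerical check of the moment formula and of the variance (a sketch; the rate $\lambda$ is an arbitrary choice):

In [ ]:
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
lam = 2.0                                           # arbitrary rate
X = rng.exponential(scale=1/lam, size=2_000_000)    # samples of Exp(lam)

for p in (1, 2, 3):
    print(p, np.mean(X**p), factorial(p) / lam**p)  # sample p-th moment vs p!/lam^p
print(X.var(), 1 / lam**2)                          # sample variance vs 1/lam^2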

Relationship between Poisson and Exponential

Let $X\sim Exp(\lambda)$; then the survival function is $\mathbf{P}[X\ge x] = \displaystyle e^{\displaystyle -\lambda x}$.

$X$ is interpreted as a waiting time. $X$ is memoryless; this is the key property of exponential random variables.

The rest of the story is that the inter-arrival times of a Poisson process are independent of each other, and are exponentially distributed.

In fact, there is an equivalence between saying that events arrive according to such i.i.d. exponentially distributed inter-arrival times and saying that the arrival times are the jump times of a Poisson process.

Jelly-Donut Problem: a mixed distribution

Some distributions are mixed, with a density and a discrete part. For example, we can define

$$X = \left\{\begin{array}{lcr} Y & if & Y\le B\\B & & otherwise\end{array}\right.$$

where $Y$ is a random variable with a density and $B$ is a (non-random) constant. Then $X$ has a density below $B$, while $\mathbf{P}[X=B] > 0$, so the distribution of $X$ has a point mass at $B$.
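A small simulation illustrating such a mixed distribution (a sketch; taking $Y\sim Exp(1)$ and the cutoff $B=2$ are arbitrary choices for illustration):

In [ ]:
import numpy as np

rng = np.random.default_rng(0)
B = 2.0                                          # arbitrary cutoff
Y = rng.exponential(scale=1.0, size=1_000_000)   # Y with a density (here Exp(1), an arbitrary choice)
X = np.minimum(Y, B)                             # X = Y if Y <= B, and X = B otherwise

# X has a density below B, plus a point mass at B:
print(np.mean(X == B), np.exp(-B))               # P[X = B] = P[Y >= B] = e^{-B} for the Exp(1) choice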

Survival function

Let $X$ be a random variable with density $f_X(x)$ and CDF $F_X(x)$. Then the survival function is $\mathbf{P}[X > x] = 1 - F_X(x)$.

For an exponential random variable $X\sim Exp(\lambda)$: $\mathbf{P}[X >x] = 1-F_X(x) = 1 - (1-e^{-\lambda x}) = e^{-\lambda x}$


In [16]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

x = np.arange(0, 6, 0.1)

def CDF(lam, x):
    # CDF of Exp(lam): F(x) = 1 - exp(-lam * x)
    return 1 - np.exp(-lam * x)

def survival(lam, x):
    # survival function of Exp(lam): P[X > x] = exp(-lam * x)
    return np.exp(-lam * x)

fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 4), sharey=True)
ax[0].plot(x, CDF(1, x), 'b-', lw=4)       # CDF of Exp(1)
plt.ylim((0, 1.3))
ax[1].plot(x, survival(1, x), 'r-', lw=4)  # survival function of Exp(1)
plt.show()


Transformation of random variables

Let $X\sim Expon(1)$. Let $\lambda>0$ be fixed. Let $Y=\frac{1}{\lambda} X$. Let's find the density of $Y$.

By our linear transformation for density: $$f_Y(y) =\displaystyle \frac{1}{1/\lambda} f_X \left(\frac{y - 0}{1/\lambda}\right) = \lambda e^{-\lambda y}$$

Here, we recognize that $Y \sim Expon(\lambda)$.

Example:

Let $U\sim Unif(0,1)$, then $\displaystyle f_U(u) = \left\{\begin{array}{lrr} 1 & if & u\in [0,1]\\0 & & otherwise\end{array}\right.$

Let $Y=-\ln U$. We want to find the density of $Y$.

First of all, $Y = -\ln U > 0$ since $U\in(0,1)$.

Let's try to compute the CDF of $Y$: $$F_Y(y) = \mathbf{P}[Y\le y] = \left\{\begin{array}{lcr} 0 & if & y \le 0 \\ \mathbf{P}[\ln U \ge -y] && otherwise\end{array}\right.$$

$$\mathbf{P}[\ln U\ge -y] = \mathbf{P}[U \ge e^{\displaystyle -y}]$$

Note that for the above to be meaningful, $e^{-y}$ has to lie in $[0,1]$, and in fact it does since $y>0$.

Next, since $U\sim Unif(0,1)$, then for any $ 0 \le a \le 1$: $\mathbf{P}[U \ge a] = 1 - \mathbf{P}[U \le a] = 1 - a$

Therefore, $$\mathbf{P}[U \ge e^{-y}] = 1 - \mathbf{P}[U \le e^{-y}] = 1 - e^{-y}$$ and we recognize that this is in fact the CDF of an exponential random variable, so $Y \sim Exp(\lambda = 1)$.

Note: simulating random variables $\sim Exp(1)$

We have proved that to simulate a draw of a random variable which is $Exp(\lambda=1)$, we just need to draw a uniform random variable in $(0,1)$ and take the negative of its $\ln$.

So, if we want to simulate an exponential random variable, we can generate uniform random numbers and transform them as above; a sketch follows.
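A minimal sketch of this recipe (a histogram of $-\ln U$ compared against the $Exp(1)$ density; the sample size and plotting range are arbitrary):

In [ ]:
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
U = rng.uniform(0, 1, size=200_000)          # uniform draws in (0,1)
Y = -np.log(U)                               # Y = -ln(U) should be Exp(1)

t = np.linspace(0, 6, 200)
plt.hist(Y, bins=80, range=(0, 6), density=True, alpha=0.5)
plt.plot(t, np.exp(-t), 'r-')                # the Exp(1) density e^{-y}
plt.show()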

Simulating random variables $\sim Exp(\lambda)$

We want $X\sim Exp(\lambda)$, i.e. a random variable whose survival function is $\mathbf{P}[X > x] = e^{\displaystyle -\lambda x}$.

Let $Y\sim Exp(1)$ (for instance $Y=-\ln U$ as above). Then $$\mathbf{P}[Y > \lambda x] = e^{-\lambda x} \qquad\text{i.e.}\qquad \mathbf{P}\left[\frac{1}{\lambda} Y > x\right] = e^{-\lambda x}$$

Let's call $Z=\frac{1}{\lambda}Y$. Then $\mathbf{P}[Z > x] = e^{-\lambda x}$; therefore, we recognize that $Z\sim Exp(\lambda)$.

Also, this shows that $\frac{1}{\lambda}$ is a so-called scale parameter for the exponential distribution. In particular, by multiplying an exponential random variable by a positive number, we get another exponential random variable.

More generally, for a family of random variables with CDF $F(x;c)$, we say that $c$ is a scale parameter if $F(x;c) = \tilde{F}\left(\frac{x}{c}\right)$ for some fixed CDF $\tilde{F}$.

Note: for densities this translates as $f(x;c) = \frac{1}{c} ~ \tilde{f}(\frac{x}{c})$

Example:

$Exp(\lambda)$ has $\frac{1}{\lambda}$ as a scale parameter because

$$F_X(x) = 1 - e^{\displaystyle -\lambda x} = 1 - e^{\displaystyle -\frac{x}{\frac{1}{\lambda}}} $$

Theorem

Let $X$ have density $f_X$

Let $h$ be a strictly monotone function (either always increasing or always decreasing)

Let $Y=h(X)$. Then $Y$ is also a random variable, and it has a density $f_Y$ given by

$$f_Y(y) = f_X\!\left(h^{-1}(y)\right) \frac{1}{\left|\,h'\!\left(h^{-1}(y)\right)\right|}$$
Proof:

Recall that $Y$ has a density $f_Y$ if we can write, symbolically, $\mathbf{P}[Y \in dy] = f_Y(y)\, dy$

$$\mathbf{P}[Y \in dy] = \mathbf{P}[h(X) \in dy] = \mathbf{P}[X \in h^{-1}(dy)] \\ = \mathbf{P}[X \in~ (\text{interval around the value }h^{-1}(y)\text{ of width }h^{-1}(dy) )] \\ =f_X\left(h^{-1}(y)\right) \times \text{width of that interval} $$

The width of that interval is $\displaystyle \left|\frac{d\, h^{-1}}{dy}(y)\right| dy = \frac{1}{\left|\,h'\!\left(h^{-1}(y)\right)\right|}\, dy$, which gives the stated formula.

Example of application of the general theorem

Let $U\sim Unif(0,1)$. Let $F$ be the CDF of a random variable. Assume that $F$ is strictly increasing.

Therefore, $F^{-1}(u)$ is defined for every $u\in(0,1)$. Moreover, $X = F^{-1}(U)$ has $F$ as its CDF: indeed, $\mathbf{P}[F^{-1}(U)\le x] = \mathbf{P}[U \le F(x)] = F(x)$.
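A minimal sketch of this inverse-CDF recipe in the exponential case, where $F^{-1}(u) = -\frac{1}{\lambda}\ln(1-u)$ (the rate $\lambda$ and sample size are arbitrary choices):

In [ ]:
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
lam = 3.0                                    # arbitrary rate; F(x) = 1 - e^{-lam*x}
U = rng.uniform(0, 1, size=200_000)
X = -np.log(1 - U) / lam                     # F^{-1}(u) = -ln(1-u)/lam

t = np.linspace(0, 2, 200)
plt.hist(X, bins=80, range=(0, 2), density=True, alpha=0.5)
plt.plot(t, lam * np.exp(-lam * t), 'r-')    # Exp(lam) density for comparison
plt.show()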

Joint Densities

A pair of random variables $(X,Y)$ has a density $f_{X,Y}$ if for any intervals $[a,b]$ and $[c,d]$ the probability $$\mathbf{P}\left[X\in [a,b] \& Y \in [c,d]\right] = \int_a^b \left( \displaystyle \int_c^d f_{X,Y} (x,y)~dy \right)dx$$

or sometimes written as $$\iint_{[a,b]\times[c,d]} f_{X,Y}(x,y) ~dx ~dy$$

Note: $[a,b]\times [c,d]$ is a rectangle in the plane, with side $[a,b]$ along the $x$-axis and side $[c,d]$ along the $y$-axis.

Definition

Let $g$ be a function on $\mathbb{R}\times \mathbb{R}$. Then, $$\mathbf{E}[g(X,Y)] = \displaystyle \int_{-\infty}^\infty \int_{-\infty}^\infty g(x,y) f_{X,Y}(x,y) ~dy~dx$$

Independence

If $X$ and $Y$ are independent, then $\mathbf{E}[XY] = \mathbf{E}[X] \mathbf{E}[Y]$

Equally importantly, if $f_X$ and $f_Y$ are densities for $X$ and $Y$, then the pair $(X,Y)$ has density $$f_{X,Y}(x,y) = f_X(x) f_Y(y)$$

Proof:
$$\mathbf{P}[(X,Y)\in dx~dy] = \mathbf{P}[X\in dx \ \&\ Y\in dy] \\ = \mathbf{P}[X\in dx] \mathbf{P}[Y\in dy]= f_X(x)~dx\ \times\ f_Y(y)~dy = f_X(x)f_Y(y)~dx~dy $$
Example

Let $(X,Y)$ have density $f_{X,Y}(x,y) = \left\{\begin{array}{lrr}15 e^{-5x-3y} & if & x>0 \text{ and } y>0\\ 0 && otherwise\end{array}\right.$

What is the distribution of $(X,Y)$?

$$15 e^{-5x-3y} = 5e^{-5x} \times 3e^{-3y}$$ where $5e^{-5x}$ is the $Exp(5)$ density and $3e^{-3y}$ is the $Exp(3)$ density.

Since the density is zero when either coordinate is negative, $f_{X,Y}$ is exactly the product of these two densities. Therefore, $X$ and $Y$ are independent, with $X\sim Exp(5)$ and $Y\sim Exp(3)$.

Example

Let $(X,Y)$ have density $f_{X,Y} = \left\{\begin{array}{lrr} 8xy & for & 0\le x \le y \le 1 \\ 0 &&otherwise\end{array}\right.$

Note: from the condition $0\le x \le y \le 1$ we get that $X \le Y$ always. This constraint already suggests that $X$ and $Y$ are NOT independent.

Note, we should check that $\displaystyle \int_{-\infty}^\infty \int_{-\infty}^\infty f_{X,Y}(x,y) ~dx ~dy = 1 $

$\displaystyle \int_{-\infty}^\infty \int_{-\infty}^\infty f_{X,Y}(x,y) ~dx ~dy = \displaystyle \int_0^1 \left(\int_0^y 8xy ~dx \right)~dy = \int_0^1 4y^3 ~dy = 1 $
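A crude numerical confirmation of this normalization (a sketch using a midpoint Riemann sum on a grid; the grid step is an arbitrary choice):

In [ ]:
import numpy as np

# midpoint Riemann sum of 8xy over the triangle 0 <= x <= y <= 1
h = 1 / 1000
x, y = np.meshgrid(np.arange(h/2, 1, h), np.arange(h/2, 1, h))
f = np.where(x <= y, 8 * x * y, 0.0)
print(f.sum() * h * h)                       # should be close to 1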


Example

With the same $X$ and $Y$ as above, let $T=X+Y$. Find the probability $\mathbf{P}[T>1]$.

Since we want $T>1$ and we know that $X\le Y$, we have $2Y \ge X+Y > 1$, so $Y > \frac{1}{2}$.

$$\mathbf{P}[X+Y > 1] = \int_\frac{1}{2}^1 \left(\int_{\text{over all admissible values of }x\text{ when }y\text{ is fixed}} 8xy ~dx\right) dy$$

For fixed $y$, the constraints are $x \le y$ and $x > 1-y$, so $x$ ranges over $(1-y,\, y]$. (The inner interval is nonempty exactly when $1-y < y$, i.e. $y>\frac{1}{2}$, which is already guaranteed by the outer integral.)

So, therefore, $$\mathbf{P}[T>1] = \int_\frac{1}{2}^1 \int_{1-y}^{y} 8xy ~dx ~dy = \int_\frac{1}{2}^1 8y \int_{1-y}^{y} x ~dx ~dy = \int_\frac{1}{2}^1 8y\left(\frac{1}{2}x^2\right)_{1-y}^{y} dy $$

$$= \int_\frac{1}{2}^1 4y\left(y^2 - (1-y)^2\right) dy = \int_\frac{1}{2}^1 4y\left(2y - 1\right) dy = \int_\frac{1}{2}^1 \left(8y^2 - 4y\right) dy = \left(\frac{8}{3}y^3 - 2y^2\right)_\frac{1}{2}^1 $$$$= \left(\frac{8}{3} - 2\right) - \left(\frac{1}{3} - \frac{1}{2}\right) = \frac{2}{3} + \frac{1}{6} = \frac{5}{6}$$
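A Monte Carlo sanity check of this value (a sketch using rejection sampling from the density $8xy$ on the triangle; the uniform proposal on the unit square and the sample size are arbitrary choices):

In [ ]:
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
# rejection sampling from the density 8xy on {0 <= x <= y <= 1}:
# propose (x, y) uniform on the unit square and accept with probability f(x,y)/8
x = rng.uniform(size=n)
y = rng.uniform(size=n)
f = np.where(x <= y, 8 * x * y, 0.0)
keep = rng.uniform(size=n) < f / 8
X, Y = x[keep], y[keep]

print(np.mean(X + Y > 1), 5/6)               # Monte Carlo estimate vs the computed 5/6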

Definition

If $(X,Y)$ has a density $f_{X,Y}$, then the densities of $X$ and $Y$ separately are called the marginals (marginal densities) of $(X,Y)$.

Theorem

In this case, $f_X(x) = \displaystyle \int_{-\infty}^\infty f_{X,Y}(x,y) dy$
Similarly for $Y$.

Example:

Let $f_{X,Y} = \left\{\begin{array}{lrr}\displaystyle 2e^{\displaystyle -x-y}&for& 0\le y <x <\infty \\ 0 &&otherwise\end{array}\right.$

Let's find $f_X(x)=?$

$$f_X(x) = \int_{-\infty}^\infty f_{X,Y}(x,y) dy = \int_0^x 2e^{-x-y} dy = 2e^{-x} \int_0^x e^{-y}dy = 2e^{-x} \left(-e^{-y}\right)_0^x = 2e^{-x}\left(1 - e^{-x}\right) \ \ \text{for }x\ge 0$$

Let's find $f_Y(y)=?$

$$f_Y(y) = \int_y^\infty 2e^{-x-y} dx = 2e^{-y} \int_y^\infty e^{-x}dx = 2e^{-y} \left(-e^{-x}\right)_y^\infty = 2e^{-y}\left(0 + e^{-y}\right) = 2e^{-2y} \ \ \text{for }y\ge 0$$ which is the $Exp(2)$ density, so $Y\sim Exp(2)$.

With this example, let $T=X+Y$. Does $T$ have a density?

Let's find the CDF $F_T$ and look for $\frac{d~F_T}{dt}$. We compute for $t>0$.

For $t>0$: $$F_T(t) = \mathbf{P}[X+Y \le t] = \int_?^? \left(\int_?^? dx\right) dy$$

We have to find the limits on the integrals. We start from $T=X+Y \le t$; together with $Y \le X$ this gives $Y \le X \le t - Y$, so $Y \le t - Y$, i.e. $Y \le \frac{t}{2}$.

$$\Longrightarrow \ F_T(t) = \int_0^\frac{t}{2} \left(\int_y^{t-y} 2e^{-x-y} ~dx \right) dy = \int_0^\frac{t}{2} 2e^{-y}\left( \int_y^{t-y} e^{-x} ~dx \right) dy = \int_0^\frac{t}{2} 2e^{-y} \left(-e^{-x}\right)_y^{t-y} dy \\ = \int_0^\frac{t}{2} 2e^{-y}\left(e^{-y} - e^{y-t} \right) dy = \int_0^\frac{t}{2} \left(2 e^{-2y} - 2e^{-t}\right) dy = \left(-e^{-2y} - 2e^{-t}y\right)_0^\frac{t}{2} \\ = \left(-e^{-t} - t\,e^{-t}\right) - (-1) = 1 - e^{-t} - te^{-t}$$

We can see that the result is differentiable in $t$, so the density of $T$ exists and $$f_T(t) = \frac{d~F_T(t)}{dt} = \left\{\begin{array}{lrr}t~e^{-t} & for & t\ge 0\\ 0 && otherwise\end{array}\right.$$
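A quick numerical check that this $f_T$ is consistent with the CDF computed above (a sketch; the grid on $[0,20]$ is an arbitrary choice):

In [ ]:
import numpy as np

t = np.linspace(0, 20, 200_001)
F = 1 - np.exp(-t) - t * np.exp(-t)          # the CDF computed above
f = t * np.exp(-t)                           # the claimed density

print(np.max(np.abs(np.gradient(F, t) - f))) # numerical dF/dt vs f: should be small
print(f.sum() * (t[1] - t[0]))               # the density should integrate to about 1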

Definition

The CDF of $(X,Y)$ is $$F_{X,Y}(x,y) = \mathbf{P}[X \le x \text{ AND } Y\le y]$$

Theorem from the previous definition

If $\displaystyle \frac{\partial^2 ~F_{X,Y}}{\partial x ~ \partial y}$ exists, then, it is the density of $(X,Y)$: $f_{X,Y}(x,y) = \displaystyle \frac{\partial^2 ~F_{X,Y}}{\partial x ~ \partial y}$

Order Statistics

Let $X_1, X_2, ..., X_n$ be random variables. If we order them in increasing order, then $X_{n,1} \le X_{n,2} \le X_{n,3} \le ... \le X_{n,n}$. These $X_{n,1},X_{n,2},... , X_{n,n}$ are the order-statistics of $X_1, X_2, ..., X_n$.

Note:

  • $X_{n,1}$ is the minimum of those random variables: $X_{n,1} = \min (X_1,\dots,X_n)$
  • $X_{n,n}$ is the maximum of those random variables: $X_{n,n} = \max (X_1,\dots,X_n)$
Example:

It's very easy to compute the CDF of $X_{n,n}$.

$$F_{X_{n,n}}(x) = \mathbf{P}[X_{n,n} \le x] = \mathbf{P}\left[X_{1}\le x~\&~X_{2}\le x~\&...~\&~X_{n}\le x\right]$$ since the maximum of some numbers is $\le x$ exactly when all of those numbers are $\le x$.

Now, assume that $X_i$'s are independent. As a result, we can write

$$\mathbf{P}\left[X_{1}\le x~\&~X_{2}\le x~\&...~\&~X_{n}\le x\right] = \mathbf{P}[X_{1}\le x]~\mathbf{P}[X_{2}\le x]~...~\mathbf{P}[X_{n}\le x] = F_{X_1}(x)~F_{X_2}(x)...F_{X_n}(x)$$
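In particular, if the $X_i$ are i.i.d. with common CDF $F$, then $F_{X_{n,n}}(x) = F(x)^n$. A quick Monte Carlo check for $Unif(0,1)$ variables, where $F(x)=x$ on $[0,1]$ (a sketch; $n$ and the test point are arbitrary choices):

In [ ]:
import numpy as np

rng = np.random.default_rng(0)
n = 5                                              # arbitrary number of variables
M = rng.uniform(size=(1_000_000, n)).max(axis=1)   # max of n i.i.d. Unif(0,1) per row

x0 = 0.7                                           # arbitrary test point
print(np.mean(M <= x0), x0**n)                     # empirical CDF of the max at x0 vs x0^n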
Example:

What about the min?

The CDF does not factor for the min the way it did for the max; instead, the survival function does:

$$\mathbf{P}[\min(X_1,X_2,...,X_n) \ge x] = \mathbf{P}[X_1\ge x~\&~X_2\ge x~\&...\&~X_n\ge x] = \text{(by independence)} \\ = \mathbf{P}[X_1\ge x]~\mathbf{P}[X_2\ge x]~...~\mathbf{P}[X_n\ge x]$$
Example:

Let $X_1,X_2, ...X_n$ be independent exponential random variables with parameters $\lambda_1,\lambda_2,...,\lambda_n$.

Then, for each $X_i$: $\mathbf{P}[X_i \ge x] = e^{-\lambda_i x}$

The survival function for the min of $X_1,X_2, ...X_n$, will be $$\mathbf{P}[\min(X_1,X_2, ...X_n)\ge x] = \prod_i \mathbf{P}[X_i \ge x] = e^{-\lambda_1 x}~e^{-\lambda_2 x} ... ~e^{-\lambda_n x} = e^{\displaystyle -(\lambda_1 + \lambda_2 + ... +\lambda_n)x}$$

Therefore, $\min(X_1,X_2, ...X_n)\sim \displaystyle Exp(\lambda_1 + \lambda_2 + ... +\lambda_n)$
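A Monte Carlo check of this fact (a sketch; the rates, the test point and the sample size are arbitrary choices):

In [ ]:
import numpy as np

rng = np.random.default_rng(0)
lams = np.array([0.5, 1.0, 2.5])                        # arbitrary rates
X = rng.exponential(scale=1/lams, size=(1_000_000, 3))  # column i holds samples of Exp(lams[i])
M = X.min(axis=1)                                       # the minimum of the three

x0 = 0.3                                                # arbitrary test point
print(np.mean(M > x0), np.exp(-lams.sum() * x0))        # survival of the min vs e^{-(sum of rates) x}
print(M.mean(), 1 / lams.sum())                         # mean of Exp(sum of the rates)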

