Problem 1

Given: $\Lambda$ is a poisson process on $\mathbb{R}$. Each point $x_i$ in $\Lambda$ underoges some markging process' resulting in $N=\{(x_1,U_1), (x_2,U_2) \dots (x_n, U_n)\}$ where $U_i$ are iid generated from some process witg density $\mu$

This essentially is a generalisation of the Colouring theorem. Let the joint 'marked' distribution $\Lambda'$ be denoted by: $f(x,y)$ where $x\ \in \ \Lambda$ and $y\ \in U$ and $(x,y) \in \Lambda'$ which is in $\mathbb{R}^2$ Consider $F = \sum_{x_i\ \in\ \Lambda}f(x_i,y_i)$

Fact 1: $\Lambda' = \{(X,U)|X\in \Lambda\}$ is an independent process. (This is only true conditional on $X$)

Thus, $E[e^{-sF}|X \in \Lambda] = \Pi_{X_i \in \Lambda} f(X_i, U_i) = \Pi_{X_i \in \Lambda}\int_{U}e^{-sf(X_i,U_i)}p(X_i,U_i)dU_i$

Fact 2: In any region $\Lambda_i$, the number of points $N(\Lambda_i) \sim Poisson(\mu_i)$ where $\mu_i = \int_{\Lambda_i} \mu(x)dx$ and hence, consider:

$F = \sum_{i=1}^k f(X_i) = \sum_{i=1}f_iN_i^k$ being $k$ disjoint sets where $f$ is any measurable function. then

$$ \begin{align} E[e^{sF}] &= \Pi_{i=1}^k E[e^{sf_iN_i}]\\ &= \Pi(e^{\lambda_i(e^{fs}-1)})\ \text{where} \lambda_i = \int_{A_i}\lambda(x)dx\\ &= e^{\sum_{i=1}^k \int(e^{fs}-1)\lambda(x)dx}\\ &= e^{\sum_{i=1}^k \int(e^{fs}-1)\lambda(x)dx}\\ &= e^{ \int_{\Lambda}(e^{fs}-1)\lambda(x)dx}\\ \end{align} $$

or $E[e^{-sF}] = exp(-\int_{\Lambda}(1-e^{fs})\lambda(x)dx)$

Now, consider $E[e^{-sF'}]$ where $F'$ is the sum of independent random variable $f(X_i, U_i)$ Conditional on $\Lambda$:

$E[e^{-sF'}|\Lambda] = \Pi_{X_i\in \Lambda}E[e^{-sf(X_i,U_i)}]$

Consider the measurable function $f'(x) = -log(\int_U e^{-f(X,U)}p(X,U)du)$, then:

[Couldn't take it further from here, but the idea should be to define $F^* = \sum f(X,U)$. I was not able to come up with a generating function for this. The idea was to show that it has generating function that for a poisson]


Aliter

[Source Grimmett and Stirzker, 3rd Edition 6.13.5]

Consider $f: R \longrightarrow R^2$ denoting the distribution of points in $\Lambda'$ then for any set $B\subset R^2$, the number of points of f(\Lambda') in B is given by $N_B = |\Lambda' \cup f^{-1}B| \sim Poisson(B)$ Given project disjoint sets $B_i$ in $R^2$, there pre-images in $R$ are also disjoint and

$\Lambda(A) \sim Poisson(\int_A \lambda(x)dx)$

$\Lambda'(B) \sim \Lambda(f^{-1}B) = Poisson(\int_{f^{-1}B}\lambda(x)dx) = Poisson(\int_B \lambda(x)dx\mu(y)dy$

Problem 2

Given Raindrops fall as a $PPP(\lambda drops/cm^2)$ on $R^2 \times [0, \infty)$ and each drop scatters as in a radius of circle $r \sim exp(1/cm)$. To find: Probability density of first drop touching the origin.

$\lambda=1$

Define $U_i$ to be iid $Bernoulli(p(x_k,y_k)$ given by: $$ U_k = \begin{cases} 1 & if \sqrt{x_k^2+y_k^2} \leq r_k\\ 0 & otherwise \end{cases} $$

By coloring theorem $\implies$ $\Lambda'=\{(r_k,x_k,y_k\}\sim PPP(\lambda p(x_k,y_k))$ For the drop to splash the origin with radius '$r$':

Consider $\Lambda^1 = \{(t_i,x_i,y_i,r_i) \in \Lambda: U_k=1\}$ $\tau = min\{t_k: (t_k,x_k,y_k,r_k) \in \Lambda'\}$

Consider $P(t < \tau, R<r)$ = P[no points in $[0,r] \times [0, t] \times R^2$] = $P[\Lambda^1([0,r] \times [0, t] \times R^2)=0]$

$P[\Lambda^1([0,r] \times [0, 1] \times R^2)=0]=exp(-\int_{0}^r \int_{0}^{1} \int_{R^2}\lambda p(x,y)dxdydtdr) =exp(-\lambda 2\pi tr)$

Now, $P(R<r)=\int P(t,R)dt = exp(-2\pi\lambda r)$

Thus, $R \sim exponential({2\pi\lambda})$

$ER=\int_0^{\infty} re^{-2\pi\lambda r}dr=\frac{1}{2\pi} = 0.15$


In [1]:
### Simulation
%matplotlib inline
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)
import math
N=1000
s=0
def R(x,y):
    return math.sqrt(x*x+y*y)
for i in range(N):
    r=-100
    y=0
    x=0
    while R(x,y)>r:
        S=np.random.uniform(size=2)
        x=S[0]
        y=S[1]
        r=np.random.exponential(1)
        s+=r
print 'Average radius: {}'.format(s/N)


Average radius: 2.02946448281

The simulation results do not seem to be close to the expected results of 0.15

Problem 3

Part (a)

In order to simulate the continuous time MC, we make use of the instantaneous rate matrix $Q$ given by: $Q=\lambda(P-I)$ where $\lambda=10^6$ The coninuous time MC is approximated to occur in discreted time steps of $10^{-6}$ seconds

$Q_{aa}$ = Total jump rate out of state a

$Q_{ab}$ = Jum rate from $a \longrightarrow b$

Due to the original transition matrix $P$ having extremely small entries, most of the time is spent in the same state. By approximating the holding time to be poisson, we arrive at the $(e^{tQ})_{ab}$ approximation for $P(Y_t=b|Y_0=a)$

Part (b)

$\tau_\dagger = inf\{t \geq 0: Y_t=\dagger\}$ i.e $\tag_\dagger$ is the hitting time and $u(a) = E[\tau_\dagger|Y_0=a]$ defines the mean hitting time.

$u(\dagger)=0$

For $a \neq \dagger$

$u(a) = \text{Hold time in state a} + \sum_b \text{(fractional jump rate from $a$ to $b$)} \times u(b)$

Alternatively:

$u(a) = \frac{1}{-Q_{aa}} + \sum_{b \neq a}(\frac{Q_{ab}}{-Q_{aa}})u(b)$ $\implies$ $-Q_{aa}u(a) = 1 + \sum_{b \neq a} Q_{ab}u(b)$

We thus solve for:

$Q\vec{u}=-\vec{1}$


In [2]:
k_a=2e-6
k_b=2e-6
k_p=5e-6
k_d=1e-5
ll = 1e6
P = np.matrix([[1-k_a-k_b, k_a ,k_b, 0, 0, 0],
               [k_a, 1-k_a-k_b, 0, k_b, 0, 0],
               [k_b, 0, 1-k_a-k_b, k_a, 0, 0],
               [0, k_b, k_a, 1-k_a-k_b-k_p, k_p, 0],
               [0, 0, 0, 0, 1-k_d, k_d],
               [0, 0, 0, k_d, 0, 1-k_d]], dtype=np.float64)
Q = ll*(P-np.eye(6))
print(Q)


[[ -4.   2.   2.   0.   0.   0.]
 [  2.  -4.   0.   2.   0.   0.]
 [  2.   0.  -4.   2.   0.   0.]
 [  0.   2.   2.  -9.   5.   0.]
 [  0.   0.   0.   0. -10.  10.]
 [  0.   0.   0.  10.   0. -10.]]

In [3]:
Qd= Q[:-1,:-1]
Qi = np.linalg.pinv(Qd)
u=(np.sum(Qi, axis=1)*-1) 
u=u.tolist()

In [4]:
def h(x):
    s=0
    ht=0
    cc=0
    for i in range(1,10000):
        new_state=x
        while new_state!=5:
            old_state=new_state
            probs = Q[old_state,:]/-Q[old_state,old_state]
            probs=probs.tolist()[0]
            probs[old_state]=0
            qaa = np.random.exponential(-1/Q[old_state,old_state])
            z=np.random.choice(6, 1, p=probs)
            new_state = z[0] #states[z[0]]
            s+=qaa
    return s/10000

Starting state $\phi$


In [5]:
print('From calculation: {}\t From Simulation: {}'.format(u[0][0],h(0)))


From calculation: 1.90000000032	 From Simulation: 1.88676392061

Starting state $\alpha$


In [6]:
print('From calculation: {}\t From Simulation: {}'.format(u[1][0],h(1)))


From calculation: 1.65000000026	 From Simulation: 1.64257724783

Starting state $\beta$


In [7]:
print('From calculation: {}\t From Simulation: {}'.format(u[2][0],h(2)))


From calculation: 1.65000000026	 From Simulation: 1.65435248441

Starting state $\alpha+\beta$


In [8]:
print('From calculation: {}\t From Simulation: {}'.format(u[3][0],h(3)))


From calculation: 0.900000000125	 From Simulation: 0.898660503545

Starting state $pol$


In [9]:
print('From calculation: {}\t From Simulation: {}'.format(u[4][0],h(4)))


From calculation: 0.1	 From Simulation: 0.0986274216