This notebook is for playing with different MCMC algorithms, given a few difficult posterior distributions, below.
Choose one of the speed-ups from the Efficient Monte Carlo Sampling chunk (something that sounds implementable in a reasonable time) and modify/replace the sampler below to use it on one or more of the given PDFs. Compare performance (burn-in length, acceptance rate, etc.) with simple Metropolis. (Apart from the Gaussian PDF, chances are that nothing you do will work brilliantly, but you should see a difference!)
For Gibbs sampling, interpret the Gaussian as a likelihood function and put some thought into how you define your priors. Otherwise, just take the given PDF to be the posterior distribution.
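As a reminder of why conjugate Gibbs works here: with a Normal prior on the mean and an Inverse-Gamma prior on the variance (one standard conjugate choice; your own priors may differ), both full conditionals have closed forms. For data $x_1,\dots,x_n \sim \mathrm{N}(m, v)$, with priors $m \sim \mathrm{N}(\mu_0, \tau_0^2)$ and $v \sim \mathrm{InvGamma}(a_0, b_0)$:

```latex
m \mid v, x \sim \mathrm{N}\!\left(\frac{\mu_0/\tau_0^2 + n\bar{x}/v}{1/\tau_0^2 + n/v},\;
\frac{1}{1/\tau_0^2 + n/v}\right),
\qquad
v \mid m, x \sim \mathrm{InvGamma}\!\left(a_0 + \tfrac{n}{2},\;
b_0 + \tfrac{1}{2}\sum_i (x_i - m)^2\right).
```

Gibbs sampling then just alternates draws from these two conditional distributions.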
Import things
In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
%matplotlib inline
In [ ]:
def Rosenbrock_lnP(x, y, a=1.0, b=100.0):
    if y < 0.0:
        return -np.inf
    return -((a - x)**2 + b*(y - x**2)**2)
In [ ]:
def eggbox_lnP(x, y):
    return (2.0 + np.cos(0.5*x)*np.cos(0.5*y))**3
In [ ]:
def sphshell_lnP(x, y, s=0.1):
    return -(np.sqrt(x**2 + y**2) - 1)**2 / (2.0*s**2)
This one is not hard to sample using simple methods, but you can use it as a test to make sure things are working. The real idea is to do conjugate Gibbs sampling for a Gaussian likelihood - for which you would not actually make direct use of the function below.
The model parameters are the mean and variance of the Gaussian.
In [ ]:
gaussian_data = np.random.normal(size=(20)) # arbitrary data set

def gaussian_lnP(m, v):
    if v <= 0.0:
        return -np.inf
    # sum the log-likelihood over the data points (note: ** is exponentiation; ^ is XOR)
    return np.sum(-0.5*(m - gaussian_data)**2/v - 0.5*np.log(v))
In [ ]:
def propose(params, width):
    return params + width * np.random.randn(params.shape[0])
In [ ]:
def step(current_params, current_lnP, width=1.0):
    trial_params = propose(current_params, width=width)
    # lnPost should be set to the chosen log-posterior function, e.g. lnPost = sphshell_lnP
    trial_lnP = lnPost(*trial_params)
    # Metropolis rule: accept with probability min(1, P_trial/P_current)
    if np.log(np.random.rand()) <= trial_lnP - current_lnP:
        return (trial_params, trial_lnP)
    else:
        return (current_params, current_lnP)
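One concrete speed-up from the menu, sketched here rather than prescribed: adapt the proposal width during burn-in so the acceptance rate lands in a reasonable band (roughly 25%), then freeze it for the production run so the chain remains Markovian. The target `lnpost`, the tuning interval of 50 steps, and the 1.5×/0.7× width adjustments are all illustrative assumptions; this uses the spherical-shell PDF from above as its example target.

```python
import numpy as np

rng = np.random.default_rng(1)

def lnpost(p, s=0.1):
    # spherical-shell log-posterior from above, taking a 2-vector
    x, y = p
    return -(np.sqrt(x**2 + y**2) - 1.0)**2 / (2.0*s**2)

params = rng.uniform(-5.0, 5.0, size=2)
lnP = lnpost(params)
width = 1.0
accepts = 0
Nadapt, Nsamples = 500, 1000
samples = np.zeros((Nsamples, 2))

# Burn-in with adaptation: every 50 steps, grow/shrink the proposal width
# depending on whether the recent acceptance rate was too high/too low.
for i in range(Nadapt):
    trial = params + width * rng.standard_normal(2)
    tlnP = lnpost(trial)
    if np.log(rng.random()) <= tlnP - lnP:
        params, lnP = trial, tlnP
        accepts += 1
    if (i + 1) % 50 == 0:
        rate = accepts / 50.0
        width *= 1.5 if rate > 0.35 else (0.7 if rate < 0.15 else 1.0)
        accepts = 0

# Production run with the tuned, now-fixed width
for i in range(Nsamples):
    trial = params + width * rng.standard_normal(2)
    tlnP = lnpost(trial)
    if np.log(rng.random()) <= tlnP - lnP:
        params, lnP = trial, tlnP
    samples[i] = params
```

Comparing the tuned width and acceptance rate against the fixed `width=1.0` Metropolis below is one easy way to quantify the difference.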
In [ ]:
lnPost = sphshell_lnP  # choose the target log-posterior here
params = -5.0 + np.random.rand(2) * 10.0
lnP = lnPost(*params)
Nsamples = 1000
samples = np.zeros((Nsamples, 2))
for i in range(Nsamples):
    params, lnP = step(params, lnP)
    samples[i, :] = params
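For the Gibbs option, a minimal conjugate-Gibbs sketch for the Gaussian problem, assuming a Normal prior on the mean `m` and an Inverse-Gamma prior on the variance `v`. The hyperparameters `mu0`, `tau0`, `a0`, `b0` are placeholder assumptions, not values from this notebook; note that no Metropolis accept/reject is needed because each full conditional is sampled exactly.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=20)   # stands in for gaussian_data above

# Assumed prior hyperparameters (your choices may differ):
mu0, tau0 = 0.0, 10.0        # m ~ Normal(mu0, tau0^2), a weak prior
a0, b0 = 1.0, 1.0            # v ~ InvGamma(a0, b0)

n, xbar = len(data), data.mean()
m, v = 0.0, 1.0              # arbitrary starting point
Nsamples = 2000
chain = np.zeros((Nsamples, 2))

for i in range(Nsamples):
    # Draw m | v, data: Normal full conditional
    prec = 1.0/tau0**2 + n/v
    mean = (mu0/tau0**2 + n*xbar/v) / prec
    m = rng.normal(mean, np.sqrt(1.0/prec))
    # Draw v | m, data: Inverse-Gamma full conditional,
    # sampled via 1/Gamma(shape=a, scale=1/b)
    a = a0 + 0.5*n
    b = b0 + 0.5*np.sum((data - m)**2)
    v = 1.0 / rng.gamma(a, 1.0/b)
    chain[i] = (m, v)
```

Because every draw is accepted, the "acceptance rate" is trivially 100%; the interesting comparison with Metropolis is burn-in length and autocorrelation of the chain.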