Cone distribution

In this notebook we create a cone distribution, which has the following density,

$$p(x) \propto \exp(-|x|^\beta),$$

where $\beta > 0$ and $|x|$ is the Euclidean norm of the $d$-dimensional vector $x$. This distribution was created for its long tails, which make it useful for testing MCMC algorithms.
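
Since the density is defined through $-|x|^\beta$ alone, the un-normalised log-density returned by the toy class should simply equal $-|x|^\beta$. The snippet below is a minimal sanity check of this; it assumes ConeLogPDF returns the un-normalised log-density (the constructor arguments are the same ones used later in this notebook).

import numpy as np
import pints.toy

# Evaluate -|x|^beta directly and compare with the toy class
# (assumed to return the un-normalised log-density)
beta = 1
log_pdf = pints.toy.ConeLogPDF(dimensions=2, beta=beta)
x = [1.0, 2.0]
print(-np.linalg.norm(x) ** beta, log_pdf(x))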

Plotting a two-dimensional version of this density with $\beta=1$.


In [1]:
import pints
import pints.toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt

# Create log pdf (default is 2-dimensional with beta=1)
log_pdf = pints.toy.ConeLogPDF()

# Contour plot of pdf
num_points = 100
x = np.linspace(-5, 5, num_points)
y = np.linspace(-5, 5, num_points)
X, Y = np.meshgrid(x, y)
Z = np.zeros(X.shape)
Z = np.exp([[log_pdf([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.xlabel('x')
plt.ylabel('y')
plt.show()


Generate independent samples from this distribution and plot them.


In [2]:
samples = log_pdf.sample(100)

num_points = 100
x = np.linspace(-5, 5, num_points)
y = np.linspace(-5, 5, num_points)
X, Y = np.meshgrid(x, y)
Z = np.zeros(X.shape)
Z = np.exp([[log_pdf([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.scatter(samples[:, 0], samples[:, 1])
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# Compute the normed distances of the samples from the origin
d = list(map(lambda x: np.linalg.norm(x), samples))


Use adaptive covariance MCMC to sample from this (un-normalised) pdf.


In [3]:
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform([2, 2], [8, 8], size=(4, 2))
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HaarioBardenetACMC)

# Set maximum number of iterations
mcmc.set_max_iterations(4000)

# Disable logging
mcmc.set_log_to_screen(False)

# Number of chains
num_chains = 4

# Run!
print('Running...')
chains = mcmc.run()
print('Done!')

print('R-hat:')
print(pints.rhat_all_params(chains))

# Discard warm-up
chains = [chain[1000:] for chain in chains]


Running...
Done!
R-hat:
[1.000469410924868, 1.0038943555121151]

Scatter plot of the samples. Adaptive covariance MCMC seems to do reasonably well at sampling from this distribution.


In [4]:
stacked = np.vstack(chains)
plt.contour(X, Y, Z, colors='k', alpha=0.5)
plt.scatter(stacked[:,0], stacked[:,1], marker='.', alpha=0.2)
plt.xlim(-5, 5)
plt.ylim(-5, 5)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
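
To quantify this, we can compare the mean normed distance of the MCMC samples with the true value. This is a short check reusing only functions already used in this notebook (mean_normed() is the same helper used further below):

# Mean normed distance of the MCMC samples vs. the true value
d_mcmc = [np.linalg.norm(x) for x in stacked]
print("True normed mean = " + str(log_pdf.mean_normed()))
print("Sample normed mean = " + str(np.mean(d_mcmc)))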


4-dimensional cone

Now we create a 4-dimensional cone with $\beta=1$, then use adaptive covariance MCMC to sample from it.


In [5]:
log_pdf = pints.toy.ConeLogPDF(dimensions=4)

# Create an adaptive covariance MCMC routine
x0 = np.random.normal(0, 1, size=(4, log_pdf.n_parameters()))
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HaarioBardenetACMC)

# Set maximum number of iterations
mcmc.set_max_iterations(8000)

# Disable logging
mcmc.set_log_to_screen(False)

# Number of chains
num_chains = 4

# Run!
print('Running...')
chains = mcmc.run()
print('Done!')

print('R-hat:')
print(pints.rhat_all_params(chains))

# Discard warm-up
chains = [chain[1000:] for chain in chains]


Running...
Done!
R-hat:
[1.000464868736082, 1.0018577353691955, 1.0015143943457883, 1.0024464699295452]

Compare the theoretical mean and variance of the normed distance from the origin with the sample-based estimates. MCMC tends to understate these, since it struggles to reach the tails.


In [6]:
chain = np.vstack(chains)
d = list(map(lambda x: np.linalg.norm(x), chain))
a_mean = np.mean(d)
a_var = np.var(d)

print("True normed mean = " + str(log_pdf.mean_normed()))
print("Sample normed mean = " + str(a_mean))

print("True normed var = " + str(log_pdf.var_normed()))
print("Sample normed var = " + str(a_var))


True normed mean = 4.0
Sample normed mean = 3.82313849099
True normed var = 4.0
Sample normed var = 3.42825835582
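
These true values can also be derived by hand: for $p(x) \propto \exp(-|x|^\beta)$ in $d$ dimensions, the normed distance $r = |x|$ has density proportional to $r^{d-1}\exp(-r^\beta)$, so $E[r] = \Gamma((d+1)/\beta)/\Gamma(d/\beta)$ and $E[r^2] = \Gamma((d+2)/\beta)/\Gamma(d/\beta)$. The sketch below checks this derivation (ours, not taken from the PINTS documentation) against mean_normed() and var_normed():

from scipy.special import gamma

# Analytic moments of r = |x| for d = 4, beta = 1 (own derivation)
d_dim, beta = 4, 1
mean_r = gamma((d_dim + 1) / beta) / gamma(d_dim / beta)
var_r = gamma((d_dim + 2) / beta) / gamma(d_dim / beta) - mean_r ** 2
print(mean_r, log_pdf.mean_normed())   # both should be 4.0
print(var_r, log_pdf.var_normed())     # both should be 4.0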

Longer-tailed cone

Create a cone with $\beta=0.4$.


In [7]:
# Create log pdf
log_pdf = pints.toy.ConeLogPDF(dimensions=2, beta=0.4)

# Contour plot of pdf
num_points = 100
x = np.linspace(-15, 15, num_points)
y = np.linspace(-15, 15, num_points)
X, Y = np.meshgrid(x, y)
Z = np.zeros(X.shape)
Z = np.exp([[log_pdf([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.xlabel('x')
plt.ylabel('y')
plt.show()


Adaptive covariance MCMC finds it harder to efficiently sample from this distribution.


In [8]:
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform([2, 2], [8, 8], size=(4, 2))
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HaarioBardenetACMC)

# Set maximum number of iterations
mcmc.set_max_iterations(4000)

# Disable logging
mcmc.set_log_to_screen(False)

# Number of chains
num_chains = 4

# Run!
print('Running...')
chains = mcmc.run()
print('Done!')

print('R-hat:')
print(pints.rhat_all_params(chains))

# Discard warm-up
chains = [chain[1000:] for chain in chains]
stacked = np.vstack(chains)

plt.contour(X, Y, Z, colors='k', alpha=0.5)
plt.scatter(stacked[:,0], stacked[:,1], marker='.', alpha=0.2)
plt.xlim(-15, 15)
plt.ylim(-15, 15)
plt.xlabel('x')
plt.ylabel('y')
plt.show()

d = list(map(lambda x: np.linalg.norm(x), stacked))
a_mean = np.mean(d)
a_var = np.var(d)

print("True normed mean = " + str(log_pdf.mean_normed()))
print("Sample normed mean = " + str(a_mean))

print("True normed var = " + str(log_pdf.var_normed()))
print("Sample normed var = " + str(a_var))

# Show traces and histograms
pints.plot.trace(chains)
plt.show()


Running...
Done!
R-hat:
[1.0002227696281398, 1.0029440160064986]
True normed mean = 77.9689294082
Sample normed mean = 70.5758221679
True normed var = 9040.84604693
Sample normed var = 5163.50623452
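
One way to quantify how hard the sampler finds this distribution is the effective sample size of the chains. This is a hedged sketch: it assumes pints.effective_sample_size is available, accepts a (samples x parameters) array, and returns one value per parameter.

# Effective sample size of the stacked adaptive covariance samples
# (assumes pints.effective_sample_size exists with this signature)
print(pints.effective_sample_size(stacked))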

Now try Hamiltonian Monte Carlo on this problem. The chains mix much better than with adaptive covariance MCMC, although HMC also struggles to explore the tails efficiently.


In [10]:
# Create a Hamiltonian Monte Carlo routine
x0 = np.random.uniform([2, 2], [8, 8], size=(4, 2))
sigma0 = np.repeat(10, 4)
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HamiltonianMCMC, sigma0=sigma0)

# Set maximum number of iterations
mcmc.set_max_iterations(4000)

# Disable logging
mcmc.set_log_to_screen(False)

# Number of chains
num_chains = 4

# Run!
print('Running...')
chains = mcmc.run()
print('Done!')

print('R-hat:')
print(pints.rhat_all_params(chains))

# Discard warm-up
chains = [chain[1000:] for chain in chains]
stacked = np.vstack(chains)

plt.contour(X, Y, Z, colors='k', alpha=0.5)
plt.scatter(stacked[:,0], stacked[:,1], marker='.', alpha=0.2)
plt.xlim(-15, 15)
plt.ylim(-15, 15)
plt.xlabel('x')
plt.ylabel('y')
plt.show()

d = list(map(lambda x: np.linalg.norm(x), stacked))
a_mean = np.mean(d)
a_var = np.var(d)

print("True normed mean = " + str(log_pdf.mean_normed()))
print("Sample normed mean = " + str(a_mean))

print("True normed var = " + str(log_pdf.var_normed()))
print("Sample normed var = " + str(a_var))

# Show traces and histograms
pints.plot.trace(chains)
plt.show()


Running...
Done!
R-hat:
[1.0065124809364008, 1.0009626859394338]
True normed mean = 77.9689294082
Sample normed mean = 72.0013725909
True normed var = 9040.84604693
Sample normed var = 5443.66792626
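
Finally, a hedged check of how well the HMC samples cover the tails: by the same radial-density argument as above (again our own derivation), the tail probability $P(|x| > t)$ equals the regularised upper incomplete gamma function $Q(d/\beta, t^\beta)$, which can be compared with the empirical fraction of samples beyond radius $t$.

from scipy.special import gammaincc

# Theoretical vs empirical tail mass beyond radius t (own derivation)
d_dim, beta, t = 2, 0.4, 200
true_tail = gammaincc(d_dim / beta, t ** beta)
sample_tail = np.mean([np.linalg.norm(x) > t for x in stacked])
print("True P(|x| > " + str(t) + ") = " + str(true_tail))
print("Sample P(|x| > " + str(t) + ") = " + str(sample_tail))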