Inference plots: Autocorrelation
This example builds on adaptive covariance MCMC, and shows you how to plot the autocorrelation of the MCMC samples.
In [1]:
from __future__ import print_function
import pints
import pints.toy as toy
import numpy as np
import matplotlib.pyplot as plt
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 100)
org_values = model.simulate(real_parameters, times)
# Add noise
noise = 50
values = org_values + np.random.normal(0, noise, org_values.shape)
real_parameters = np.array(real_parameters + [noise])
# Get properties of the noise sample
noise_sample_mean = np.mean(values - org_values)
noise_sample_std = np.std(values - org_values)
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform log-prior over the two model parameters and the new noise parameter
log_prior = pints.UniformLogPrior(
[0.01, 400, noise*0.1],
[0.02, 600, noise*100]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Perform sampling using MCMC, with a single chain
x0 = real_parameters * 1.1
mcmc = pints.MCMCController(log_posterior, 1, [x0])
mcmc.set_max_iterations(6000)
mcmc.set_log_to_screen(False)
The autocorrelation of a Markov chain measures how strongly each sample is correlated with the samples that came before it, as a function of the lag between them. A rapidly decaying autocorrelation means the chain is producing nearly independent samples, so checking for (lack of) autocorrelation is an easy way to assess how well your MCMC routine is mixing. It can easily be plotted using pints.plot's autocorrelation method.
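To make the idea concrete, here is a minimal NumPy sketch of the quantity such a plot shows, applied to a synthetic, strongly correlated AR(1) series rather than a real MCMC chain. The autocorr helper below is illustrative only, not part of pints:

import numpy as np

def autocorr(x, max_lags=10):
    # Normalised autocorrelation of x for lags 0..max_lags
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Full autocovariance via correlation, keep lags >= 0, normalise by lag 0
    acov = np.correlate(x, x, mode='full')[len(x) - 1:]
    return acov[:max_lags + 1] / acov[0]

# Strongly correlated AR(1) series: each value depends on the previous one
rng = np.random.default_rng(1)
x = np.zeros(1000)
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()

print(autocorr(x))  # Decays slowly from 1: successive samples are similar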
In [2]:
print('Running...')
chains = mcmc.run()
print('Done!')
In [3]:
# Select chain 0 and discard the first 3000 samples as warm-up
chain = chains[0]
chain = chain[3000:]
In [4]:
import pints.plot
pints.plot.autocorrelation(chain, max_lags=10)
plt.show()
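As a quick numeric companion to the plot, the lag-1 autocorrelation of each parameter can be computed directly with plain NumPy (this assumes the chain array from above is still in scope; it is not a pints API call). Values near zero suggest nearly independent samples, values near one suggest slow mixing:
In [5]:
# Lag-1 autocorrelation for each of the three parameters in the chain
for i in range(chain.shape[1]):
    r1 = np.corrcoef(chain[:-1, i], chain[1:, i])[0, 1]
    print('Parameter ' + str(i) + ': lag-1 autocorrelation = ' + str(round(r1, 3)))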