This example shows how the FitzHugh-Nagumo simplified action potential (AP) model can be used.
The model is a simplification and state-reduction of the original squid axon model by Hodgkin and Huxley. It has two state variables: a voltage-like variable and a recovery variable.
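Before using the PINTS wrapper, it can help to see the underlying equations. A minimal sketch of the textbook FitzHugh-Nagumo system, integrated with a hand-rolled RK4 stepper (the constants `a`, `b`, `tau`, `I_ext` here are illustrative textbook values, not the parameterisation used by `pints.toy.FitzhughNagumoModel`):

```python
import numpy as np

def fitzhugh_nagumo_rhs(state, a=0.7, b=0.8, tau=12.5, I_ext=0.5):
    # Textbook FitzHugh-Nagumo equations (illustrative constants):
    #   dv/dt = v - v^3/3 - w + I_ext   (fast, voltage-like variable)
    #   dw/dt = (v + a - b*w) / tau     (slow, recovery variable)
    v, w = state
    dv = v - v ** 3 / 3 - w + I_ext
    dw = (v + a - b * w) / tau
    return np.array([dv, dw])

def integrate(rhs, state0, t_end=100.0, dt=0.01):
    # Simple fixed-step fourth-order Runge-Kutta integrator
    n = int(t_end / dt)
    out = np.empty((n + 1, 2))
    out[0] = state0
    s = np.array(state0, dtype=float)
    for i in range(n):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        out[i + 1] = s
    return out

trajectory = integrate(fitzhugh_nagumo_rhs, [-1.0, 1.0])
```

With a constant external current in this range the system settles onto a limit cycle, producing the repeated spiking that the PINTS toy model reproduces.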
In [1]:
import pints
import pints.toy
import numpy as np
import matplotlib.pyplot as plt
# Create a model
model = pints.toy.FitzhughNagumoModel()
# Run a simulation
parameters = [0.1, 0.5, 3]
times = np.linspace(0, 20, 200)
values = model.simulate(parameters, times)
# Plot the results
plt.figure()
plt.xlabel('Time')
plt.ylabel('Value')
plt.plot(times, values)
plt.legend(['Voltage', 'Recovery'])
plt.show()
With these parameters, the model produces wide AP waveforms that are more reminiscent of muscle cells than of neurons.
We now set up a simple optimisation problem with the model.
In [2]:
# First add some noise
sigma = 0.5
noisy = values + np.random.normal(0, sigma, values.shape)
# Plot the results
plt.figure()
plt.xlabel('Time')
plt.ylabel('Noisy values')
plt.plot(times, noisy)
plt.show()
Next, we set up a problem. Because this model has multiple outputs (2), we use a MultiOutputProblem.
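For a multi-output problem, the sum-of-squares score simply sums the squared residuals over all time points and all outputs. A minimal numpy sketch of that calculation, using stand-in arrays rather than the model above:

```python
import numpy as np

rng = np.random.default_rng(1)
times = np.linspace(0, 20, 200)

# Stand-in for a 2-output model simulation (columns = outputs)
simulated = np.column_stack([np.sin(times), np.cos(times)])
observed = simulated + rng.normal(0, 0.5, simulated.shape)  # noisy "data"

# Sum of squares over every time point and every output, which is
# what a sum-of-squares error on a 2-output problem evaluates
score = np.sum((simulated - observed) ** 2)
```

Equivalently, this is the sum of the per-output sum-of-squares scores, so both outputs contribute to the fit.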
In [3]:
problem = pints.MultiOutputProblem(model, times, noisy)
score = pints.SumOfSquaresError(problem)
Finally, we choose a wide set of boundaries and run!
In [4]:
# Select boundaries
boundaries = pints.RectangularBoundaries([0., 0., 0.], [10., 10., 10.])
# Select a starting point
x0 = [1, 1, 1]
# Perform an optimisation
found_parameters, found_value = pints.optimise(score, x0, boundaries=boundaries)
print('Score at true solution:')
print(score(parameters))
print('Found solution:          True parameters:')
for k, x in enumerate(found_parameters):
    print(pints.strfloat(x) + '    ' + pints.strfloat(parameters[k]))
In [5]:
# Plot the results
plt.figure()
plt.xlabel('Time')
plt.ylabel('Values')
plt.plot(times, noisy, '-', alpha=0.25, label='noisy signal')
plt.plot(times, values, alpha=0.4, lw=5, label='original signal')
plt.plot(times, problem.evaluate(found_parameters), 'k--', label='recovered signal')
plt.legend()
plt.show()
This shows that the parameters are not recovered exactly, but the traces still overlap strongly.
The FitzHugh-Nagumo model provides sensitivities, calculated with the forward sensitivity approach, so we can use samplers that require gradients, such as Monomial-gamma HMC. These are slower per iteration, although not necessarily in terms of effective sample size (ESS) per second!
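To see why gradients matter, here is a minimal sketch of the leapfrog mechanics common to HMC-family samplers, run on a toy 2-D Gaussian log-density standing in for a log-posterior (this is an illustration of the general technique, not PINTS' implementation of Monomial-gamma HMC):

```python
import numpy as np

def log_pdf(x):
    # Toy target: standard 2-D Gaussian (stand-in for a log-posterior)
    return -0.5 * np.sum(x ** 2)

def grad_log_pdf(x):
    # Gradient of the toy target; in the PINTS case this is where the
    # model's forward sensitivities enter
    return -x

def leapfrog(x, p, step_size, n_steps):
    # One leapfrog trajectory: half-step momentum, full position steps,
    # closing half-step momentum
    p = p + 0.5 * step_size * grad_log_pdf(x)
    for _ in range(n_steps - 1):
        x = x + step_size * p
        p = p + step_size * grad_log_pdf(x)
    x = x + step_size * p
    p = p + 0.5 * step_size * grad_log_pdf(x)
    return x, p

rng = np.random.default_rng(0)
x = np.array([3.0, -2.0])
samples = []
for _ in range(500):
    p0 = rng.standard_normal(2)
    x_new, p_new = leapfrog(x, p0, 0.15, 10)
    # Metropolis accept/reject on the Hamiltonian (negative log density
    # plus kinetic energy)
    h0 = -log_pdf(x) + 0.5 * np.sum(p0 ** 2)
    h1 = -log_pdf(x_new) + 0.5 * np.sum(p_new ** 2)
    if np.log(rng.uniform()) < h0 - h1:
        x = x_new
    samples.append(x)
samples = np.array(samples)
```

Each gradient call inside `leapfrog` corresponds to a sensitivity evaluation of the model, which is why gradient-based samplers cost more per iteration.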
In [16]:
import time
problem = pints.MultiOutputProblem(model, times, noisy)
# Create a log-likelihood function (adds a noise parameter per output!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
    [0, 0, 0, 0, 0],
    [10, 10, 10, 20, 20]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 4 MCMC chains
real_parameters1 = np.array(parameters + [sigma, sigma])
xs = [
    real_parameters1 * 1.1,
    real_parameters1 * 0.9,
    real_parameters1 * 1.15,
    real_parameters1 * 1.5,
]
# Create MCMC routine
mcmc = pints.MCMCController(log_posterior, 4, xs, method=pints.MonomialGammaHamiltonianMCMC)
# Add stopping criterion
mcmc.set_max_iterations(200)
mcmc.set_log_interval(1)
# Run in parallel
mcmc.set_parallel(True)
for sampler in mcmc.samplers():
    sampler.set_leapfrog_step_size([0.05, 0.2, 0.2, 0.1, 0.1])
    sampler.set_leapfrog_steps(10)
# time start
start = time.time()
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# end time
end = time.time()
time = end - start
In [18]:
import pints.plot
pints.plot.trace(chains)
plt.show()
Print results.
In [17]:
results = pints.MCMCSummary(chains=chains, time=time, parameter_names=["a", "b", "c", "sigma_V", "sigma_R"])
print(results)
Plot a few posterior predictive simulations against the data.
In [19]:
import pints.plot
pints.plot.series(np.vstack(chains), problem)
plt.show()
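Under the hood, this kind of posterior predictive plot draws parameter vectors from the stacked chains and overlays the corresponding simulations on the data. A minimal sketch of that idea with a stand-in model and fake posterior draws (all names here are hypothetical, not PINTS API):

```python
import numpy as np

rng = np.random.default_rng(2)
times = np.linspace(0, 20, 200)

def toy_model(theta, times):
    # Stand-in for problem.evaluate: a 2-parameter sinusoid
    amplitude, frequency = theta
    return amplitude * np.sin(frequency * times)

# Fake "stacked chains": 1000 posterior draws of 2 parameters
stacked = rng.normal([1.0, 2.0], [0.05, 0.02], size=(1000, 2))

# Posterior predictive: simulate at a random subset of the draws
n_draws = 50
idx = rng.choice(len(stacked), n_draws, replace=False)
predictive = np.array([toy_model(stacked[i], times) for i in idx])
```

The spread of the resulting curves around the data visualises the parameter uncertainty carried by the chains.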