Beeler-Reuter AP Model

In this example we use the 1977 Beeler-Reuter model of the mammalian ventricular action potential action potential, and try to estimate its maximum conductance parameters from its output.

The model describes several ion currents, each with a maximum conductance parameter, that together give rise to the cardiac AP and calcium transient. In this (non-trivial) "toy" model, we use the maximum conductances as parameters, and the AP and calcium transient as observable outputs. To make the problem tractable, all other model parameters are assumed to be known.

The parameters are scaled: instead of passing in the conductances directly, users provide the natural log of the maximum conductances. This makes the parameters easier to find for our optimisation algorithm.

For the science behind the model, see the original 1977 paper.


In [1]:
import pints
import pints.toy
import matplotlib.pyplot as plt
import numpy as np

model = pints.toy.ActionPotentialModel()

We can get an example set of parameters using the suggested_parameters() method:


In [2]:
x_true = model.suggested_parameters()

And it can also provide a suggested sequence of sampling times:


In [3]:
times = model.suggested_times()

Using the suggested parameters and times, we can run a simulation:


In [4]:
values = model.simulate(x_true, times)

This gives us all we need to create a plot of current versus time:


In [5]:
_, axes = plt.subplots(2, 1)
axes[0].plot(times, values[:, 0])
axes[1].plot(times, values[:, 1])
axes[0].set_ylabel('Voltage [mV]')
axes[1].set_ylabel('Calcium [mM]')
axes[1].set_xlabel('Time [ms]')
plt.show()


Now we will add some noise to generate some fake "experimental" data and try to recover the original parameters.


In [6]:
# Add noise
values[:, 0] += np.random.normal(0, 1, values[:, 0].shape)
values[:, 1] += np.random.normal(0, 5e-7, values[:, 1].shape)

# Create an object with links to the model and time series
problem = pints.MultiOutputProblem(model, times, values)

# Add weighting to the outputs
# (we believe both outputs should contribute roughly equally to the score function
#  so we scale it with roughly the values that it spans)
weights = [1. / 70., 1. / 0.000006]

# Select a score function
score = pints.SumOfSquaresError(problem, weights=weights)

# Select some tight boundaries
lower = x_true - 1
upper = x_true + 1
boundaries = pints.RectangularBoundaries(lower, upper)

# Perform an optimization
# For this example, we will start quite close to the true solution to limit the number of iterations
x0 = x_true + 0.25
optimiser = pints.OptimisationController(score, x0, boundaries=boundaries, method=pints.CMAES)

print('Running...')
found_parameters, found_score = optimiser.run()

# Compare parameters with original
print('Found solution:          True parameters:' )
for k, x in enumerate(found_parameters):
    # We take an exponential here to show the actual conductance values
    print(pints.strfloat(np.exp(x)) + '    ' + pints.strfloat(np.exp(x0[k])))


Running...
Minimising error measure
using Covariance Matrix Adaptation Evolution Strategy (CMA-ES)
Running in sequential mode.
Population size: 8
Iter. Eval. Best      Time m:s
0     8      12.49416   0:00.9
1     16     11.57587   0:01.9
2     24     11.57587   0:02.9
3     32     11.45721   0:03.8
10    80     11.45721   0:08.8
Halting: Maximum number of iterations (10) reached.
Found solution:          True parameters:
 4.17219904709831546e+00     4.20508438550409647e+00
 3.14367650342651161e-03     3.15381328912807055e-03
 9.39651794702475590e-02     9.46143986738421233e-02
 3.64168052428662792e-01     3.67944883731608385e-01
 8.17133456668796998e-01     8.41016877100819293e-01

We can create a figure to show how the obtained parameters differ from the original ones:


In [7]:
plt.figure()
logGRatio = (found_parameters - x0) / np.log(10)
x = range(len(logGRatio))
plt.bar(x, logGRatio)
plt.xticks(x, (r'$G_{Na}$', r'$G_{NaC}$', r'$G_{Ca}$', r'$G_{K1}$', r'$G_{x1}$'))
plt.ylabel(r'$\log(G_{found} / G_{true})$')
plt.show()


Note that the differences are very small.

We can now compare the true and fitted model output:


In [8]:
# Evaluate model at found parameters
found_values = problem.evaluate(found_parameters)

# Show quality of fit
_, axes = plt.subplots(2, 1)
axes[0].plot(times, values[:, 0], label='Noisy data')
axes[0].plot(times, found_values[:, 0], label='Fit')
axes[1].plot(times, values[:, 1], label='Noisy data')
axes[1].plot(times, found_values[:, 1], label='Fit')
axes[0].set_ylabel('Voltage [ms]')
axes[1].set_ylabel('Calcium [mM]')
axes[1].set_xlabel('Time [ms]')
axes[0].legend()
axes[1].legend()
plt.show()