How to analyse your favourite dynamically reversible Markov model

This notebook contains a simple example of how to analyse fluctuating currents in dynamically reversible Markov models.

NOTE: This script runs only on a Python 2 kernel.

References

[1] Fluctuating Currents in Stochastic Thermodynamics I. Gauge Invariance of Asymptotic Statistics. Wachtel, Altaner and Vollmer (2015)

[2] Fluctuating Currents in Stochastic Thermodynamics II. Energy Conversion and Nonequilibrium Response in Kinesin Models. Altaner, Wachtel and Vollmer (2015)

Set up Jupyter (IPython notebook) and SymPy libraries


In [ ]:
# inline plotting/interaction
%pylab inline
# replace the line above with the line below for command line scripts:
# from pylab import *

from sympy import * # symbolic python
init_printing() # pretty printing

import numpy as np # numeric python

import time # timing, for performance monitoring

# activate latex text rendering
from matplotlib import rc
rc('text', usetex=True)

Defining models and using the algorithm presented in Ref [2]


In [ ]:
import cumulants # cumulants.py implements the algorithm presented in Ref [2]

A simple model

In this script, a model is a dictionary with

  • keys == edges (tuples of integers)
  • values == transition rates (symbolic expressions)

We start with the definition of a simple model on four states with the following topology:

0 – 1
| \ |
3 – 2

It features two independent cycles (cf. Ref [1]).


In [ ]:
# We define the transition rates as symbolic expressions...
w01, w10, w02, w20, w03, w30, w12, w21, w23, w32 = \
    symbols("w_{01}, w_{10}, w_{02}, w_{20}, w_{03}, w_{30},"+\
            "w_{12}, w_{21}, w_{23}, w_{32}",\
            real=True, positive=True)

# ... and specify the model topology as a dictionary

model4State = {(0,1): w01, (1,0): w10,\
               (0,2): w02, (2,0): w20,\
               (0,3): w03, (3,0): w30,\
               (1,2): w12, (2,1): w21,\
               (2,3): w23, (3,2): w32 \
               }

Parametrizing the transition rates

In our toy model, transitions along the outer edges occur with rates $r$ and $l$ in the clockwise and counter-clockwise directions, respectively.

For convenience, we will use the substitutions

$r = k\exp\left(+\frac{f}{2}\right)$ and $l = k\exp\left(-\frac{f}{2}\right)$,

which separate the anti-symmetric edge motance $f = \log{\frac{r}{l}}$ from the symmetric part $k=\sqrt{rl}$.

Transitions along the edge connecting states $0$ and $2$ have the symmetric part $b\,k$ and a potentially different driving field $g$. The positive parameter $b$ can be used to tune, and in the limit $b \to 0$ to turn off, the center transition:

$w_{02} = k\,b\exp\left(+\frac{g}{2}\right)$ and $w_{20} = k\,b\exp\left(-\frac{g}{2}\right)$

As a global factor of the transition matrix, $k$ can be absorbed into the time-scale of the model and we set $k\equiv 1$.


In [ ]:
# Define the symbols for parameters, place-holders, etc.

b = symbols("b", positive=True)
f, g = symbols("f, g", real=True)

x, y      = symbols("x, y", real=True)
u, v, w   = symbols("u, v, w", positive=True)

# symmetric-antisymmetric substitutions
onehalf = Rational(1,2)

# Generate a substitution list to parametrize the symbolic transition rates

rate_cw =  exp(+onehalf*f)
rate_ccw = exp(-onehalf*f)

# Outer circle, clockwise
w01p = rate_cw
w12p = rate_cw
w23p = rate_cw
w30p = rate_cw

# Outer circle, counter-clockwise
w03p = rate_ccw
w32p = rate_ccw
w21p = rate_ccw
w10p = rate_ccw

# Center rates
w02p = b*exp(+onehalf*g)  
w20p = b*exp(-onehalf*g)

# Create the substitution list that gives the parametric dependence of the rates:
rates_parametrized = [(w01,w01p),(w10,w10p),(w02,w02p),(w20,w20p),\
           (w12,w12p),(w21,w21p),(w23,w23p),(w32,w32p),(w30,w30p),(w03,w03p)]
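
As an optional sanity check (a sketch, not part of the original workflow), we can verify symbolically that this parametrization indeed reproduces the edge motance $f = \log{\frac{r}{l}}$ and a unit symmetric part $\sqrt{rl}=1$:


In [ ]:
# Optional consistency check of the parametrization (sketch):
# the clockwise/counter-clockwise rates should give motance f and symmetric part 1
display(simplify(log(rate_cw/rate_ccw)))   # expected: f
display(simplify(sqrt(rate_cw*rate_ccw)))  # expected: 1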

Calculating the first and second fundamental cumulants

The function getCumulants(model, chords, rates_params) from the cumulants library takes a dictionary model, a list of fundamental chords chords, and a corresponding substitution list rates_params, and calculates the fundamental cumulants for these chords.

While the function performs some consistency checks, you have to make sure yourself that removing the edges contained in chords from the graph of the model yields a spanning tree; see Ref. [1]. A small sanity check is sketched after the diagrams below.

Here, we choose the edges $(0,1)$ and $(0,2)$ as fundamental chords. The corresponding fundamental cycles are then the outer circuit and the lower half circuit (both traversed in the clockwise direction):

0 –> 1
|    |   (0,1)  
3 –– 2

0
| \
|  \     (0,2)  
3 – 2
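
Before computing the cumulants, one can optionally verify the spanning-tree condition stated above. The helper below is only a sketch: it assumes the networkx package is available and is not part of the cumulants library.


In [ ]:
# Optional sanity check (sketch, requires networkx): removing the chosen chords
# from the undirected model graph must leave a spanning tree.
import networkx as nx

def chords_leave_spanning_tree(model, chords):
    G = nx.Graph()
    G.add_edges_from(model.keys())  # reciprocal rate pairs collapse to one undirected edge
    chordset = {tuple(sorted(e)) for e in chords}
    T = nx.Graph([e for e in G.edges() if tuple(sorted(e)) not in chordset])
    T.add_nodes_from(G.nodes())     # keep any node that becomes isolated
    return nx.is_tree(T)            # a tree containing all nodes == spanning tree

chords_leave_spanning_tree(model4State, [(0,1),(0,2)])  # expected: True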

In [ ]:
# calculate the cumulants (this takes only a few seconds)

c, C = cumulants.getCumulants( model4State, [(0,1),(0,2)], rates_parametrized)

In [ ]:
## WARNING: READ BEFORE EXECUTING THIS CELL
##
## Additional simplifications may help if numerical problems are encountered (e.g. for very stiff rate matrices),
## but they can be very time-consuming.
## Note: the run time of the simplification steps increases strongly with the number of free symbolic parameters.
## 
## For the present example, they are NOT needed!
## 


t0 = time.time() # Time the simplification steps

c = simplify(factor(c))
C = factor(simplify(C))

display(time.time() - t0)

Observables

The fundamental cumulants describe the counting statistics along the fundamental chords. As shown in Ref. [1], this is sufficient to determine the counting statistics of arbitrary observables.

In the following we consider three observables:

  1. The current flowing in the top edge (fundamental chord $(0,1)$, i.e. c[0]).
  2. The current through the center edge (fundamental chord $(0,2)$, i.e. c[1]).
  3. The entropy production of the whole network.

The entropy production is an observable that takes the value $\log{\frac{w_{\rightarrow}}{w_{\leftarrow}}}$ on each edge, where $w_{\rightarrow}$ and $w_{\leftarrow}$ are the corresponding forward and backward rates. The generalized Schnakenberg decomposition (cf. Ref [1]) ensures that for the calculation of the entropy statistics, we only need the affinities of the fundamental cycles.
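
As an aside (not evaluated in the cells below), the same bookkeeping determines the fluctuations of the entropy production: writing it in chord coordinates as $\dot{\sigma} = A_0 J_{(0,1)} + A_1 J_{(0,2)}$ with the fundamental cycle affinities $A_0 = 4f$ and $A_1 = 2f+g$ computed below, its asymptotic mean and diffusion constant follow from the first and second fundamental cumulants as

$\langle \dot{\sigma} \rangle = A_0\,c_0 + A_1\,c_1$ and $D_{\dot{\sigma}} = \frac{1}{2}\left(A_0^2\,C_{00} + 2A_0 A_1\,C_{01} + A_1^2\,C_{11}\right)$,

which is just the standard rule for means and (co-)variances of linear combinations.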


In [ ]:
## Calculate cycle affinities:

# First fundamental cycle (for chord $(0,1)$): (0,1,2,3):
aff0 = simplify((log((w01*w12*w23*w30)/(w10*w21*w32*w03))).subs(rates_parametrized))
# Second fundamental cycle (for chord $(0,2)$): (0,2,3):
aff1 = simplify((log((w02*w23*w30)/(w03*w32*w20))).subs(rates_parametrized))

# affinities should be $4f$ and $2f+g$, respectively:
display((aff0,aff1))

## Define expressions for quantities of interest

topc = c[0] # average current through top edge
cenc = c[1] # average current through center edge

ep = simplify(c[0]*aff0 + c[1]*aff1) # entropy production

topd = onehalf*C[0,0] # half the scaled variance yields the diffusion constant of the top current
cend = onehalf*C[1,1] # ... and of the center current

cov = C[0,1] # covariance of the two chord currents

res = 2*c[0].diff(f)/C[0,0] # response of top current to the driving affinity $f$ divided by top diffusion constant
#res = simplify(res)

Exploring the parameter dependence

Our model depends on three symbolic parameters: $(f,g,b)$


In [ ]:
# Show some analytical expressions for...

# ...top and center average currents
display('Average currents')
display(topc)
display(cenc)

# ...entropy production
display('Entropy production')
display(ep)

## ...diffusion constants and response (WARNING: these are longish expressions...)
#display('Diffusion constants')
#display(topd)
#display(cend)

#display('Normalized response')
#display(res)

Plotting

Efficient (and numerically stable) plotting requires us to transform the symbolic expressions into lambda functions (using SymPy's lambdify()). These lambda functions are then used to evaluate the expressions on a numerical grid.

NOTE: lambdify() has problems with symbolic expressions whose symbol names are $\LaTeX$ commands. If you use something like

eps = symbols('\\varepsilon')

lambdify() may not work, and you have to substitute the $\LaTeX$-named symbols with other (dummy) variables.
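
A minimal workaround sketch (eps_plain is a dummy name introduced only for this illustration):

eps = symbols('\\varepsilon')
expr = exp(-eps**2/2)
eps_plain = symbols('eps_plain')
expr_num = lambdify(eps_plain, expr.subs(eps, eps_plain), "numpy")  # evaluate with the plain-named symbol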


In [ ]:
## Lambdify all SymPy expressions into NumPy-evaluable functions
topcL, cencL, epL, topdL, cendL, covL, resL\
    = [ lambdify( (f,g,b), N(thing), "numpy" )\
        for thing in ( topc, cenc, ep, topd, cend, cov, res\
        ) ]

In [ ]:
## Prepare 2D plotting range

from pylab import meshgrid,cm,imshow,contour,clabel,colorbar,axis,title,show

# prepare the plotting grid
[xmin, xmax, ymin, ymax] = [-10,10,.01,20] # boundaries of the grid
resolution = 400 # plot resolution

plotarea = [xmin, xmax, ymin, ymax] # full plotarea

# sample points of the plotting grid
xpts = linspace(xmin, xmax, resolution)
ypts = linspace(ymin, ymax, resolution)
X, Y  = meshgrid(xpts, ypts)

In [ ]:
## General setup for figures

fig_size = (6,5) # in inch
fs = 22 # font size
colormap1 = cm.gist_earth # linear color gradient (blue) for densityplots
colormap2 = cm.coolwarm # color gradient (red-white-blue) for densityplots with highlighted center

# font setup
font = {'family' : 'serif',
        'color'  : 'black',
        'weight' : 'normal',
        'size'   : fs,
        }

ts = 16 # tick+contour label size

figdir = "toymodel/"

## This function takes a lambda function and creates a (logarithmic) 2D density plot
##  g: lambda function with *exactly two* arguments
##  t: title string
##  x,y: x and y axis label strings
##  logplot: flag whether to plot on a log scale or not
##  highlight: function value whose contour line is highlighted in white
##  crop: [min, max] values for the colour scale
def pplot(g, t, x='', y='', logplot=True, highlight=0, crop=[]):
    fig = figure(figsize=fig_size)
    
    if(logplot):
        G = np.log(np.abs(g(X,Y)))/np.log(10)
        GG = g(X,Y)
        ccmm = colormap1
    else:
        G = g(X,Y)
        GG = G
        ccmm = colormap2

    # the slicing parameter [::-1] reverses the y-axis before plotting
    
    im = imshow( G[::-1], cmap=ccmm, extent=plotarea ) # drawing the function
    if(len(crop)==2):
        im.set_clim(vmin=crop[0], vmax=crop[1])
        
    # adding the contour lines with labels
    cset1    = contour( X,Y, G, arange(-20,20,1),linewidths=1,linestyles="-",colors='black')
    stalling = contour( X,Y, GG,    [highlight], linewidths=3,linestyles="-",colors='white')
    
    # adding the colorbar on the right
    cb = colorbar(im)
    
    # latex fashion title
    title(t, fontdict=font)
    xlabel(x, fontdict=font)
    ylabel(y, fontdict=font)
    
    # Set tick label size
    tick_params(axis='both', which='major', labelsize=ts )

    #savefig(figdir+t+".png")
    return(fig)

In [ ]:
## NOTE: Our lambda functions take three parameters (f,g,b).
## For plotting, we wrap them in new anonymous lambda functions that take only two parameters.

# Currents and EP depending on two forces f,g with same symmetric contribution on the center edge
pplot(lambda f,g: topcL(f,g,1),"Average current through top edge (log scale)",'$f$','$g$')
pplot(lambda f,g: cencL(f,g,1),"Average current through center edge (log scale)",'$f$','$g$')
pplot(lambda f,g: epL(f,g,1),"Steady state entropy production (log scale)",'$f$','$g$')

# Currents and EP depending on one force f=g while increasing strength of center edge
pplot(lambda f,b: topcL(f,f,b),"Average current through top edge (log scale)",'$f$','$b$')
pplot(lambda f,b: cencL(f,f,b),"Average current through center edge (log scale)",'$f$','$b$')
pplot(lambda f,b: epL(f,f,b),"Steady state entropy production (log scale)",'$f$','$b$')

print("Done")

In [ ]:
# Diffusion constant and normalized response depending on two forces f,g with same symmetric contribution on the center edge
pplot(lambda f,g: topdL(f,g,1),"Diffusion top edge (log scale)",'$f$','$g$')
pplot(lambda f,g: cendL(f,g,1),"Diffusion center edge (log scale)",'$f$','$g$')
pplot(lambda f,g: resL(f,g,1),"Normalized $f$-response in top edge",'$f$','$g$', False)

# Diffusion constant and normalized response  depending on one force f=g while increasing strength of center edge
pplot(lambda f,b: topdL(f,f,b),"Diffusion top edge (log scale)",'$f$','$b$')
pplot(lambda f,b: cendL(f,f,b),"Diffusion center edge (log scale)",'$f$','$b$')
pplot(lambda f,b: resL(f,f,b),"Normalized $f$-response in top edge",'$f$','$b$', False)

print("Done")
