This notebook serves as a quick refresher on ordinary differential equations. If you are already familiar with the topic, feel free to skim it.
We will first consider the decay of tritium as an example:
$$ \mathrm{^3H \overset{\lambda}{\rightarrow}\ ^3He + e^- + \bar{\nu}_e} $$
We will not concern ourselves with the products; instead we will only take interest in the number density of $\mathrm{^3H}$ as a function of time, let's call it $y(t)$. The rate of change of $y(t)$ is proportional to itself and the decay constant ($\lambda$):
$$ \frac{dy(t)}{dt} = -\lambda y(t) $$
You probably know the solution to this class of differential equations (either from experience or by guessing an appropriate ansatz). SymPy can of course also solve this equation:
In [ ]:
import sympy as sym
sym.init_printing()

t, l = sym.symbols('t lambda')  # independent variable and decay constant
y = sym.Function('y')(t)        # number density as an unknown function of time
dydt = y.diff(t)
expr = sym.Eq(dydt, -l*y)       # dy/dt = -lambda*y
expr
In [ ]:
sym.dsolve(expr)
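The general solution contains an integration constant ($C_1$). If we want the particular solution for a given initial value, recent SymPy versions accept an `ics` keyword argument; for instance, for $y(0) = 3$:
In [ ]:
sym.dsolve(expr, ics={y.subs(t, 0): 3})  # particular solution for y(0) = 3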
Now, pretend for a while that this equation lacked an analytic solution. We could then integrate it numerically from an initial state for a predetermined amount of time by discretizing time into a series of small steps.
For each step taken we update $y$ by multiplying the derivative by the step size (assuming that the derivative is approximately constant on the scale of the step size). Formally this method is known as "forward Euler":
$$ y_{n+1} = y_n + y'(t_n)\cdot \Delta h $$
This is known as an explicit method, i.e. the derivative at the current time step is used to calculate the next step forward.
For demonstration purposes only, we implement this in Python:
In [ ]:
import numpy as np

def euler_fw(rhs, y0, tout, params):
    """Integrate dy/dt = rhs(y, t, *params) with the forward Euler method."""
    y0 = np.atleast_1d(np.asarray(y0, dtype=np.float64))
    dydt = np.empty_like(y0)
    yout = np.zeros((len(tout), len(y0)))
    yout[0] = y0
    t_old = tout[0]
    for i, t in enumerate(tout[1:], 1):
        dydt[:] = rhs(yout[i-1], t_old, *params)  # derivative at the current point (y_n, t_n)
        h = t - t_old                             # step size
        yout[i] = yout[i-1] + dydt*h
        t_old = t
    return yout
Applying this function to our model problem:
In [ ]:
def rhs(y, t, decay_constant):
    return -decay_constant*y  # the rate does not depend on time ("t")
In [ ]:
tout = np.linspace(0, 2e9, 100)
y0 = 3
params = (1.78e-9,) # 1 parameter, decay constant of tritium
yout = euler_fw(rhs, y0, tout, params)
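As a quick sanity check of the parameter value: tritium has a half-life of roughly 12.3 years, and assuming the time unit above is seconds we recover the decay constant from $\lambda = \ln 2 / t_{1/2}$:
In [ ]:
half_life = 12.32 * 365.25 * 24 * 3600  # ~12.32 years expressed in seconds
print(np.log(2)/half_life)              # ~1.78e-9, the value used for params above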
Plotting the solution and the numerical error using matplotlib:
In [ ]:
import matplotlib.pyplot as plt
%matplotlib inline

def my_plot(tout, yout, params, xlbl='time / a.u.', ylabel=None, analytic=None):
    # one panel for the solution, and optionally a second panel for the error
    fig, axes = plt.subplots(1, 2 if analytic else 1, figsize=(14, 4))
    axes = np.atleast_1d(axes)
    for i in range(yout.shape[1]):
        axes[0].plot(tout, yout[:, i], label='y%d' % i)
    if ylabel:
        axes[0].set_ylabel(ylabel)
    for ax in axes:
        ax.set_xlabel(xlbl)
    if analytic:
        axes[0].plot(tout, analytic(tout, yout, params), '--')
        axes[1].plot(tout, yout[:, 0] - yout[0, 0]*np.exp(-params[0]*(tout - tout[0])))
        if ylabel:
            axes[1].set_ylabel('Error in ' + ylabel)
In [ ]:
def analytic(tout, yout, params):
    return yout[0, 0]*np.exp(-params[0]*tout)

my_plot(tout, yout, params, analytic=analytic, ylabel='number density / a.u.')
We see that 100 points gave us accuracy that is sufficient for plotting purposes.
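Since forward Euler is a first order method, we expect the error to shrink roughly in inverse proportion to the number of steps (doubling the number of points should roughly halve the error). A minimal check against the known analytic solution, reusing `rhs`, `y0` and `params` from above:
In [ ]:
for n in (100, 200, 400):
    t_n = np.linspace(0, 2e9, n)
    y_n = euler_fw(rhs, y0, t_n, params)
    err = np.max(np.abs(y_n[:, 0] - y0*np.exp(-params[0]*t_n)))
    print("n = %4d, max abs error = %.3e" % (n, err))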
Unfortunately, Euler forward is not practical for most real-world problems. Usually we want a higher-order formula (the error in Euler forward scales only as $n^{-1}$), and we want to use an adaptive step size (larger steps where the function is smooth). So we use the well-tested LSODA algorithm (provided in SciPy as `odeint`):
In [ ]:
from scipy.integrate import odeint
yout, info = odeint(rhs, y0, tout, params, full_output=True)
my_plot(tout, yout, params, analytic=analytic)
print("Number of function evaluations: %d" % info['nfe'][-1])
We can see that `odeint` was able to achieve a much higher precision using fewer function evaluations.
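The precision of `odeint` can be tuned through its `rtol` and `atol` keyword arguments (relative and absolute tolerances); tightening them increases the number of function evaluations. For example:
In [ ]:
yout_tight, info_tight = odeint(rhs, y0, tout, params, full_output=True, rtol=1e-10, atol=1e-12)
print("Function evaluations with tighter tolerances: %d" % info_tight['nfe'][-1])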
For a large class of problems we need to base the step not on the derivative at the current time point, but rather on the derivative at the next one (giving rise to an implicit expression). The simplest implicit stepper is "backward Euler":
$$ y_{n+1} = y_n + y'(t_{n+1})\cdot \Delta h $$
Problems requiring this type of stepper are known as "stiff". We will not go into the details here (LSODA actually uses something more refined, and switches between explicit and implicit steppers automatically).
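For the linear decay problem the implicit equation can actually be solved in closed form, $y_{n+1} = y_n/(1 + \lambda \Delta h)$, so backward Euler is just as easy to implement here. A minimal sketch specialized to the decay problem (the helper `euler_bw_decay` is for illustration only, it is not what LSODA does):
In [ ]:
def euler_bw_decay(lam, y0, tout):
    # backward Euler for dy/dt = -lam*y: solve y_new = y_old - lam*y_new*h for y_new
    yout = np.empty(len(tout))
    yout[0] = y0
    for i in range(1, len(tout)):
        h = tout[i] - tout[i-1]
        yout[i] = yout[i-1]/(1 + lam*h)
    return yout

euler_bw_decay(params[0], 3, tout)[:5]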
In the upcoming notebooks we will use `odeint` to solve systems of ODEs (and not only linear equations as in this notebook). The emphasis is not on the numerical methods, but rather on how we can generate fast functions for the solver from symbolic expressions.
In order to show how we would formulate a system of differential equations we will briefly look at the van der Pol oscillator. It is a second order differential equation:
$$ \frac{d^2y_0}{dt^2} - \mu(1-y_0^2)\frac{dy_0}{dt} + y_0 = 0 $$
One way to reduce the order of this second order differential equation is to formulate it as a system of first order ODEs, using:
$$ y_1 = \dot y_0 $$
which gives us:
$$ \begin{cases} \dot y_0 = y_1 \\ \dot y_1 = \mu(1-y_0^2) y_1-y_0 \end{cases} $$
Let's call the function for this system of ordinary differential equations `vdp`:
In [ ]:
def vdp(y, t, mu):
    return [
        y[1],                          # dy0/dt = y1
        mu*(1 - y[0]**2)*y[1] - y[0]   # dy1/dt = mu*(1 - y0**2)*y1 - y0
    ]
using "Euler forward":
In [ ]:
tout = np.linspace(0, 200, 1024)
y_init, params = [1, 0], (17,)
y_euler = euler_fw(vdp, y_init, tout, params) # never mind the warnings emitted here...
In [ ]:
my_plot(tout, y_euler, params)
That does not look like an oscillator (we see that Euler forward has diverged to values of enormous magnitude). Here the more advanced treatment by the `odeint` solver is far superior:
In [ ]:
y_odeint, info = odeint(vdp, y_init, tout, params, full_output=True)
print("Number of function evaluations: %d, number of Jacobian evaluations: %d" % (info['nfe'][-1], info['nje'][-1]))
my_plot(tout, y_odeint, params)
We see that LSODA has evaluated the Jacobian. But we never gave it an explicit representation of the Jacobian, so how could it?
It estimated the Jacobian matrix using finite differences. Let's see if we can do better by providing a function that calculates the (analytic) Jacobian.
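The Jacobian here is the matrix of partial derivatives $J_{ij} = \partial f_i/\partial y_j$ of the right-hand side with respect to the state variables. A rough illustration of a forward-difference estimate (the helper `fd_jac` below is only a sketch of the idea, not SciPy's internal routine):
In [ ]:
def fd_jac(f, y, t, params, eps=1e-8):
    # forward-difference approximation of J[i, j] = d f_i / d y_j
    y = np.asarray(y, dtype=np.float64)
    f0 = np.asarray(f(y, t, *params))
    jac = np.empty((f0.size, y.size))
    for j in range(y.size):
        y_pert = y.copy()
        y_pert[j] += eps
        jac[:, j] = (np.asarray(f(y_pert, t, *params)) - f0)/eps
    return jac

fd_jac(vdp, y_init, tout[0], params)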
In [ ]:
help(odeint) # just skip to "Dfun"
So the signature needs to be: (state-vector, time, parameters) -> matrix
In [ ]:
%load_ext scipy2017codegen.exercise
Use either the `%exercise` or `%load` magic to get the exercise or the solution respectively (i.e. delete the whole contents of the cell except for the uncommented magic command). Replace ??? with the correct expression.
Remember that our system is defined as: $$ \begin{cases} \dot y_0 = y_1 \\ \dot y_1 = \mu(1-y_0^2) y_1-y_0 \end{cases} $$
In [ ]:
%exercise exercise_jac_func.py
In [ ]:
J_func(y_init, tout[0], params[0])
In [ ]:
y_odeint, info = odeint(vdp, y_init, tout, params, full_output=True, Dfun=J_func)
In [ ]:
my_plot(tout, y_odeint, params)
print("Number of function evaluations: %d, number of Jacobian evaluations: %d" % (info['nfe'][-1], info['nje'][-1]))
So this time the integration needed fewer evaluations of both the ODE system function and its Jacobian than when using finite-difference approximations. The reason is that the more accurate the Jacobian, the better the convergence of the iterative (Newton) method that solves the implicit system of equations.
For larger systems of ODEs the importance of providing a (correct) analytic Jacobian can be much bigger.
In [ ]:
y = y0, y1 = sym.symbols('y0 y1')   # SymPy symbols for the two state variables
mu = sym.symbols('mu')
J = sym.Matrix(vdp(y, None, mu)).jacobian(y)  # symbolic Jacobian of the vdp system
J_func = sym.lambdify((y, t, mu), J)          # numerical callback usable as odeint's Dfun
J