Jens Hahn - 27/05/2016
Every numerical procedure for solving an ODE is based on discretising the system and on the difference quotient. It's easy to understand: just read the $\frac{\text{d}\vec{x}}{\text{d}t}$ as a $\frac{\Delta \vec{x}}{\Delta t}$. Then you can multiply both sides of the equation by $\Delta t$ and you get an equation describing the change of your variables during a certain time interval $\Delta t$:
$$ \Delta \vec{x} = \vec{f}(\vec{x}, t)\times \Delta t$$
The next step is the discretisation:
$$\vec{x}_{i+1} - \vec{x}_i = \vec{f}(\vec{x}_i, t)\times \Delta t$$
Finally, we move $\vec{x}_i$ to the other side and rename $\Delta t$ to $h$:
$$\vec{x}_{i+1} = \vec{x}_i + \vec{f}(\vec{x}_i, t)\times h$$
Of course, the smaller you choose the time interval $h$, the more accurate your result will be in comparison to the analytical solution.
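To make this concrete, here is a quick worked step (a hypothetical scalar example, not the model used later): for $\frac{\text{d}x}{\text{d}t} = x$ with $x_0 = 1$ and $h = 0.1$, a single step gives
$$x_1 = x_0 + f(x_0)\times h = 1 + 1\times 0.1 = 1.1$$
The analytical solution gives $e^{0.1} \approx 1.105$, so even a single step introduces a small error.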
So it's clear, we choose a tiny one, right? Well, not exactly: the smaller the time interval, the longer the simulation takes. We therefore need a compromise, and here the provided software helps us by constantly monitoring the numerical solution and adapting the "step size" $h$ automatically.
Euler's method is the simplest way to solve ODEs numerically, and it can be written as a short formula.
$h_i$ is again the width of the time step: $h_i = t_{i+1} - t_i$
Then, the solution looks like this:
$$\vec{x}_{i+1} = \vec{x}_i + \vec{f}(\vec{x}_i, t_i)\times h_i$$
Unfortunately, this method is highly dependent on the size of $h_i$: the smaller the step, the more accurate the solution.
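A tiny experiment of my own illustrates this, using the test equation $\frac{\text{d}x}{\text{d}t} = x$ with the known solution $e^t$; the printed error should shrink roughly in proportion to $h$:
In [ ]:
import numpy as np

def euler_error(h, t_end=1.0):
    # integrate dx/dt = x, x(0) = 1, with a fixed step h and return
    # the deviation from the analytical solution exp(t_end)
    x = 1.0
    for _ in range(int(round(t_end / h))):
        x += x * h
    return abs(x - np.exp(t_end))

for h in [0.1, 0.01, 0.001]:
    print(h, euler_error(h))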
Another way to understand this is to take a look at the Riemann sum. You probably know it already: calculate the value of $f(x)$ and multiply it by the step size. So the idea is not new to you.
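As a quick illustration of that connection (again my own example, with $f = \sin$), the left Riemann sum applies exactly the same "value times step size" update at every grid point:
In [ ]:
import numpy as np

# left Riemann sum of sin over [0, 1]
h = 0.001
grid = np.arange(0.0, 1.0, h)
approx = np.sum(np.sin(grid) * h)
exact = 1.0 - np.cos(1.0)   # analytical integral of sin on [0, 1]
print(approx, exact)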
Let's test the method with our well-known predator-prey model (Lotka-Volterra):
In [2]:
import numpy as np

# Lotka-Volterra model
# initialise parameters
k1 = 1.5
k2 = 1.
k3 = 3.
k4 = 1.

def my_dxdt(s, t):
    """
    Return the derivatives of the Lotka-Volterra model for
    the state s = [prey, predator] at time t
    """
    return [k1*s[0] - k2*s[0]*s[1], -k3*s[1] + k4*s[0]*s[1]]
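Before plugging the function into a solver, a quick sanity check (my own addition) confirms the derivatives for the initial state used below:
In [ ]:
# expected: [1.5*5 - 1*5*10, -3*10 + 1*5*10] = [-42.5, 20.0]
print(my_dxdt([5, 10], 0))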
In [3]:
def my_euler_solver(dxdt, s0, timegrid):
    """
    Implementation of a simple Euler method (constant step size)
    """
    # copy the initial values so the caller's s0 is not modified
    s = list(s0)
    # first time step: just save the initial values
    result = [[value] for value in s]
    # do the remaining time steps
    for j in range(1, len(timegrid)):
        hi = timegrid[j] - timegrid[j-1]
        # evaluate the derivatives at the previous state and time
        derivatives = dxdt(s, timegrid[j-1])
        # calculate the new values and save them
        for i in range(len(s)):
            result[i].append(s[i] + derivatives[i] * hi)
        # update species with the new values
        s = [result[i][-1] for i in range(len(s))]
    return result
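As a side note, the same scheme can be written more compactly with numpy arrays. This is a minimal sketch of my own (my_euler_numpy is not used in the cells below):
In [ ]:
def my_euler_numpy(dxdt, s0, timegrid):
    """
    Vectorised Euler method (constant step size), one numpy array per step
    """
    result = np.empty((len(timegrid), len(s0)))
    result[0] = s0
    for j in range(1, len(timegrid)):
        hi = timegrid[j] - timegrid[j-1]
        result[j] = result[j-1] + hi * np.asarray(dxdt(result[j-1], timegrid[j-1]))
    # transpose so that, like my_euler_solver, each row holds one species
    return result.T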
To test the accuracy, we run the simulation on two different time grids, one with a step size of roughly 0.01 and one with a step size of roughly 0.001.
In [4]:
import matplotlib.pyplot as plt
%matplotlib inline

# time grids with step sizes of roughly 0.01 and 0.001
timegrid_e3 = np.linspace(0, 20, 2000)
timegrid_e4 = np.linspace(0, 20, 20000)

# get solutions (the solver copies s0, so it can be reused)
s0 = [5, 10]
my_euler_result_e3 = my_euler_solver(my_dxdt, s0, timegrid_e3)
my_euler_result_e4 = my_euler_solver(my_dxdt, s0, timegrid_e4)
If you want to increase the accuracy of your method, you can borrow the trapezoidal rule from the approximation of integrals. The second evaluation point is of course not known yet, but here Euler's method can provide it! This combination is known as Heun's method.
As we will see, this method is a huge improvement over Euler's method!
$$\Phi (t,x,h) = x + \frac{h}{2}\Bigl(f(t,x)+f\bigl(t+h,\underbrace{x+h\cdot f(t,x)}_{\text{Euler's method}}\bigr)\Bigr)$$
The idea of Runge and Kutta was quite straightforward: why not use Heun's method recursively? To get the second point, you do not use Euler's method but again the trapezoidal rule... and again... and again. The resulting methods are still in use and work very well for most ODE systems!
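For illustration, here is a minimal sketch of one step of the classical fourth-order Runge-Kutta scheme (RK4) that grew out of this idea; rk4_step is my own name, and this is the standard textbook variant rather than a literal recursive Heun's method:
In [ ]:
def rk4_step(dxdt, s, t, h):
    """
    One step of the classical 4th-order Runge-Kutta scheme
    """
    k1 = dxdt(s, t)
    k2 = dxdt([s[i] + 0.5*h*k1[i] for i in range(len(s))], t + 0.5*h)
    k3 = dxdt([s[i] + 0.5*h*k2[i] for i in range(len(s))], t + 0.5*h)
    k4 = dxdt([s[i] + h*k3[i] for i in range(len(s))], t + h)
    # weighted average of the four slopes
    return [s[i] + (h/6.0)*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(len(s))]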
In [14]:
def my_heun_solver(dxdt, s0, timegrid):
    """
    Implementation of Heun's method (constant step size)
    """
    # copy the initial values so the caller's s0 is not modified
    s = list(s0)
    # first time step: just save the initial values
    result = [[value] for value in s]
    # do the remaining time steps
    for j in range(1, len(timegrid)):
        hi = timegrid[j] - timegrid[j-1]
        t_prev = timegrid[j-1]
        # slope at the current state (used for the Euler predictor)
        f1 = dxdt(s, t_prev)
        predictor = [s[k] + hi*f1[k] for k in range(len(s))]
        # slope at the predicted state
        f2 = dxdt(predictor, t_prev + hi)
        # trapezoidal average of both slopes
        for i in range(len(s)):
            result[i].append(s[i] + (hi/2)*(f1[i] + f2[i]))
        # update species with the new values
        s = [result[i][-1] for i in range(len(s))]
    return result
In [15]:
import matplotlib.pyplot as plt
%matplotlib inline

# time grids
timegrid_e3 = np.linspace(0, 20, 2000)
timegrid_e4 = np.linspace(0, 20, 20000)

# get solutions
s0 = [5, 10]
my_heun_result_e3 = my_heun_solver(my_dxdt, s0, timegrid_e3)
my_heun_result_e4 = my_heun_solver(my_dxdt, s0, timegrid_e4)
Let's simulate the same system with odeint, the standard ODE solver of the scipy Python package.
In [19]:
import scipy.integrate
timegrid = np.linspace(0,20,2000)
s0 = [5,10]
result = scipy.integrate.odeint(my_dxdt, s0, timegrid)
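As mentioned at the beginning, odeint adapts its internal step size automatically. If needed, its error tolerances can also be set explicitly through the rtol and atol arguments (the values here are just an example):
In [ ]:
# tighter tolerances force odeint to take smaller internal steps
result_tight = scipy.integrate.odeint(my_dxdt, s0, timegrid, rtol=1e-10, atol=1e-10)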
And now we compare the results. I marked the amplitude and position of the maxima with red dotted lines.
In [22]:
plt.figure(1)
plt.plot(timegrid_e3, my_euler_result_e3[0], label="X 0.01")
plt.plot(timegrid_e3, my_euler_result_e3[1], label="Y 0.01")
plt.plot(timegrid_e4, my_euler_result_e4[0], label="X 0.001")
plt.plot(timegrid_e4, my_euler_result_e4[1], label="Y 0.001")
plt.plot([0,20], [13.67, 13.67], 'r--')
plt.plot([4.32,4.32], [0,14], 'r--')
plt.plot([8.9,8.9], [0,14], 'r--')
plt.plot([13.46,13.46], [0,14], 'r--')
plt.plot([18.06,18.06], [0,14], 'r--')
plt.legend(loc=2)
plt.title('Euler method')
plt.figure(2)
plt.plot(timegrid_e3, my_heun_result_e3[0], label="X 0.01")
plt.plot(timegrid_e3, my_heun_result_e3[1], label="Y 0.01")
plt.plot(timegrid_e4, my_heun_result_e4[0], label="X 0.001")
plt.plot(timegrid_e4, my_heun_result_e4[1], label="Y 0.001")
plt.plot([0,20], [13.67, 13.67], 'r--')
plt.plot([4.32,4.32], [0,14], 'r--')
plt.plot([8.9,8.9], [0,14], 'r--')
plt.plot([13.46,13.46], [0,14], 'r--')
plt.plot([18.06,18.06], [0,14], 'r--')
plt.legend(loc=2)
plt.title('Heun method')
plt.figure(3)
plt.plot(timegrid, result.T[0], label='X')
plt.plot(timegrid, result.T[1], label='Y')
plt.plot([0,20], [13.67, 13.67], 'r--')
plt.plot([4.32,4.32], [0,14], 'r--')
plt.plot([8.9,8.9], [0,14], 'r--')
plt.plot([13.46,13.46], [0,14], 'r--')
plt.plot([18.06,18.06], [0,14], 'r--')
plt.legend(loc=2)
plt.title('odeint')
Out[22]:
[Figures: 'Euler method', 'Heun method', and 'odeint', each with the reference maxima marked in red]
As you can see, Heun's method already achieves remarkable accuracy, even though it is a very simple method. Let's compare the results of odeint and Heun's method directly:
In [21]:
plt.plot(timegrid, result.T[0], label='X odeint')
plt.plot(timegrid, result.T[1], label='Y odeint')
plt.plot(timegrid_e4, my_heun_result_e4[0], label="X Heun")
plt.plot(timegrid_e4, my_heun_result_e4[1], label="Y Heun")
plt.plot([0,20], [13.67, 13.67], 'r--')
plt.plot([4.32,4.32], [0,14], 'r--')
plt.plot([8.9,8.9], [0,14], 'r--')
plt.plot([13.46,13.46], [0,14], 'r--')
plt.plot([18.06,18.06], [0,14], 'r--')
plt.legend(loc=2)
plt.title('Comparison odeint & Heun method')
Out[21]:
[Figure: 'Comparison odeint & Heun method']
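If you want a number instead of a picture, the two solutions can be compared pointwise (a small check of my own; timegrid and timegrid_e3 contain the same 2000 points):
In [ ]:
# maximum pointwise deviation between Heun (coarse grid) and odeint
diff_x = np.max(np.abs(np.array(my_heun_result_e3[0]) - result.T[0]))
diff_y = np.max(np.abs(np.array(my_heun_result_e3[1]) - result.T[1]))
print(diff_x, diff_y)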