The Runge-Kutta Method, Higher-Order ODEs and Multistep Methods

By Thomas P Ogden (t.p.ogden@durham.ac.uk)

Here we introduce the classical Runge-Kutta method, go beyond first-order ODEs, and take a first look at multistep methods.


In [2]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

In Part 1 we looked at the Explicit Euler method, and checked that it had a global order of accuracy $\mathcal{O}(h^1)$. Now we'll again take an ODE $y' = f(t,y(t))$ with an initial condition $y(t_0) = y_0$ and look at more advanced numerical methods for solving the problem.

The Runge-Kutta Method

For the Explicit Euler method we truncated the Taylor expansion after the linear term. It's clear that including more terms before truncation will give us a higher-order approximation. The classical Runge-Kutta method does just this, up to an order of accuracy $\mathcal{O}(h^4)$ — 3 orders higher than Explicit Euler! We're not going to derive the approximation here but you can look it up if you're interested. The finite difference step is given by

$$ y_{n+1} = y_n + \tfrac{h}{6} \left( k_1 + 2k_2 + 2k_3 + k_4 \right), $$

where

$$ \begin{align} k_1 &= f(t_n, y_n), \\ k_2 &= f(t_n + \tfrac{h}{2}, y_n + \tfrac{h}{2}k_1), \\ k_3 &= f(t_n + \tfrac{h}{2}, y_n + \tfrac{h}{2}k_2), \\ k_4 &= f(t_n + h, y_n + h k_3). \end{align} $$

Implementing the Method in Python

Just like we did with the Explicit Euler method, we'll define a function to implement the Runge-Kutta method for a first-order ODE system.


In [3]:
def ode_int_rk(func, y_0, t, args={}):
    """ Classical Runge-Kutta (RK4) approximation to a first-order ODE system
    with initial conditions.

    Args:
        func: (callable) The first-order ODE system to be approximated.
        y_0: (array) The initial condition.
        t: (array) A sequence of time points for which to solve for y.
        args: (dict) Extra arguments to pass to function.

    Returns:
        y: (array) The approximated solution of the system at each time in t,
            with the initial value y_0 in the first row.
    """

    # Initialise the approximation array
    y = np.zeros([len(t), len(y_0)])
    y[0] = y_0

    # Loop through the time steps, approximating this step from the prev step
    for i, t_i in enumerate(t[:-1]):

        h = t[i+1] - t_i # size of the step

        k_1 = func(t_i, y[i], args)
        k_2 = func(t_i+h/2., y[i]+h/2.*k_1, args)
        k_3 = func(t_i+h/2., y[i]+h/2.*k_2, args)
        k_4 = func(t_i+h, y[i]+h*k_3, args)

        y[i+1] = y[i] + h/6.*(k_1 + 2.*k_2 + 2.*k_3 + k_4) # RK4 step

    return y

Checking Accuracy

We'll import our test function exp, the same one we defined in Part 1.


In [4]:
from exp import exp # same function we defined in Part 1.

And just as we did for the Explicit Euler, we'll use this test function to check the order of accuracy of the method by looking at the global error over a wide range of stepsizes.


In [5]:
y_0 = np.array([1.]) # Initial condition

solve_args = {}
solve_args['a'] = 1.

t_max = 5.

# Range of stepsizes
max_N = 12
N = 2**np.arange(2, max_N) # N = 4, 8, 16, ..., 2^(max_N-1)

order_check = 4 # for visual check of the order of accuracy

y_end = np.zeros(len(N)) # array to fill with the final values
stepsize = np.zeros(len(N)) # array to fill with the stepsizes

for i, N_i in enumerate(N): # loop over different numbers of steps

    t = np.linspace(0., t_max, N_i+1)
    y_end[i] = ode_int_rk(exp, y_0, t, solve_args)[-1, 0]
    
    stepsize[i] = t_max/N_i
    
plt.loglog(stepsize, abs(y_end - np.exp(solve_args['a']*t_max)), 
           'b-o', label='Global error')
plt.loglog(stepsize, stepsize**order_check,'k--', label=r'$h^4$')
plt.xlabel(r'$h$')
plt.legend(loc=2)


Out[5]:
<matplotlib.legend.Legend at 0x3851310>

So we've confirmed that the Runge-Kutta method is $\mathcal{O}(h^4)$ – the global error shrinks far more quickly with an increasing number of steps than it does for the Explicit Euler method.

Example 2: A Forced and Damped Pendulum

The next problem we're going to look at is a second-order ODE, but crucially any higher-order ODE can be rewritten as a set of coupled first-order equations, such that we can apply the solvers we've already designed. We'll see how to do that with this example.

The Physical Problem

Take an idealised pendulum: a weightless string of length $\ell$, fixed at one end with a mass $m$ at the other. The pendulum is free to swing in a plane subject to gravity, friction proportional to its velocity $v$, and may be driven by an external periodic force $F_d \cos {\omega_d t}$.

We start with Newton's second law applied to the displacement $x$ along the arc of swing, for the unforced and undamped pendulum,

$$m\ddot{x} = -mg \sin{\theta}$$

where $g$ is the local acceleration due to gravity and $\theta{(t)}$ is the angle of displacement of the string from vertical at time $t$. We now apply a frictional force proportional to the translational velocity and rearrange to obtain a second-order homogeneous differential equation,

$$ m \ddot{x} + k \dot{x} + mg \sin{\theta} = 0 $$

where $k$ is the coefficient of friction.

We want to consider angular displacement, so substitute $\dot{x} = \ell \dot{\theta}$ and $\ddot{x} = \ell \ddot{\theta}$, and now apply the driving force

$$ m \ell \ddot{\theta} + k \ell \dot{\theta} + mg \sin{\theta} = F_d \cos {\omega_d t}. $$

The equation is nonlinear (due to the sine function), so finding an analytic solution is going to be difficult. Numerical methods are the next line of attack, so we'll try our Runge-Kutta method on the problem. First we must rewrite the second-order equation as a set of coupled first-order equations. Let $y_0 = \theta$, $y_1 = \dot{\theta}$ and $y_2 = \ddot{\theta}$. Then

$$ \begin{align*} y_0' &= y_1 = \dot{\theta} \\ y_1' &= y_2 = \ddot{\theta} = -\frac{k}{m}\dot{\theta} - \frac{g}{\ell} \sin{\theta} + \frac{F_d}{m \ell} \cos{\omega_d t} \end{align*} $$

is the system we want. We make a final tidying of the parameters by letting $\alpha = g/\ell$, $\beta = k/m$, $\gamma = F_d/m\ell$, and writing $\omega$ for the driving frequency $\omega_d$:

$$ \begin{align*} y_0' &= y_1 \\ y_1' &= -\alpha \sin{y_0} -\beta y_1 + \gamma \cos{\omega t} \end{align*} $$

Setting up the Problem in Python

We write this pair of equations up as a Python function to be passed to our ODE integrator:


In [6]:
def pendulum(t, y, args):
    """ A damped and forced pendulum, described as set of two first-order ODEs.

    Args:
        t: Time
        y: Pendulum system vector [angle, angular velocity] 
        args['alpha']: gravity_acc/length_of_pendulum
        args['beta']: friction_constant/mass_pendulum
        args['gamma']: driving_force/mass_pendulum/length_pendulum
        args['omega']: driving_freq

    Returns:
        dydt: ODE vector
    """

    dydt = np.zeros(2)

    dydt[0] = y[1]
    dydt[1] = (-args['alpha']*np.sin(y[0]) - args['beta']*y[1] + 
                args['gamma']*np.cos(args['omega']*t))

    return dydt

Now we're ready to select some parameters. First we'll remove friction ($k = 0$) and apply no driving force ($F_d = 0$), under which conditions we expect to see simple harmonic motion of the pendulum back and forth.


In [7]:
### Parameters

gravity_acc = 10. # [m /s2]

length_pendulum = 1. # [m]
mass_pendulum = 1. # [kg]
friction_constant = 0. # [kg /s]

driving_force = 0. # [N]
driving_freq = 0. # [rad /s]

N = 200

t = np.linspace(0., 10., N+1) # [s] an array of time steps

We're going to pass the arguments to the solver as a dict, so let's make that now.


In [8]:
solve_args = {}
solve_args['alpha'] = gravity_acc/length_pendulum
solve_args['beta'] = friction_constant/mass_pendulum
solve_args['gamma'] = driving_force/mass_pendulum/length_pendulum
solve_args['omega'] = driving_freq

Next we need some initial conditions. We'll pick $\theta_0 = \tfrac{\pi}{8}$ so that we can check our numerical result with the small angle approximation,

$$\sin \theta \approx \theta.$$

With this approximation, the undamped and unforced solution can be found analytically: $\theta = \theta_0 \cos (\sqrt{\alpha} t)$.
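To see this, note that with $\beta = \gamma = 0$ and $\sin{\theta} \approx \theta$ the ODE reduces to $\ddot{\theta} = -\alpha \theta$, and

$$ \frac{\mathrm{d}^2}{\mathrm{d}t^2} \left[ \theta_0 \cos (\sqrt{\alpha} t) \right] = -\alpha \, \theta_0 \cos (\sqrt{\alpha} t), $$

with $\theta(0) = \theta_0$ and $\dot{\theta}(0) = 0$ matching the initial conditions we set next.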


In [9]:
initial_ang = np.pi/8 # [rad]
initial_ang_vel = 0. # [rad /s]

initial_cond = np.array([initial_ang, initial_ang_vel])

Solving the Problem

Now we can solve the ODE system using the Runge-Kutta method we made above and plot the angle $y_0$ over time.


In [11]:
# For comparison, SciPy's odeint could solve the same system. Note that
# odeint expects a function with signature func(y, t, ...) by default, so
# with our func(t, y, args) we'd pass tfirst=True (SciPy >= 1.1):
# from scipy.integrate import odeint
# y = odeint(pendulum, initial_cond, t, args=(solve_args,), tfirst=True)

# Solve Pendulum ODE with RK4
y = ode_int_rk(pendulum, initial_cond, t, solve_args)

y_small_ang = initial_ang*np.cos(np.sqrt(solve_args['alpha'])*t)

plt.plot(t, y[:,0], c='b', label='Angle')
plt.plot(t, y_small_ang, 'r--', label='Small angle approx.')
plt.xlabel('Time (s)')
plt.ylabel('Angle (rad)')
plt.legend(loc=2)


Out[11]:
<matplotlib.legend.Legend at 0x4101950>

So we see that the pendulum oscillates as expected, and for a few swings the frequency matches the known small-angle result reasonably well, subject to the error in the small angle approximation. (Try making initial_ang bigger or smaller to see where the approximation works well.)

Other perspectives we might be interested in are the pendulum's trajectory in phase space, which we plot in a phase diagram, and its spectral properties, which we view via a power spectrum. We'll plot those now.


In [161]:
def plot_pendulum(t,y):
    """ Plot Angle, Phase Diagram, FFT. """
    
    fig = plt.figure()

    # Plot Angle
    ax_1 = fig.add_subplot(211)
    ax_1.plot(t, y[:,0], c='b')
    ax_1.set_xlabel('Time (s)')
    ax_1.set_ylabel('Angle (rad)')
    
    # Plot Phase Diagram
    ax_2 = fig.add_subplot(223)
    ax_2.plot(y[:,0], y[:,1], c='g')
    ax_2.set_xlabel('Angle (rad)')
    ax_2.set_ylabel('Angular Velocity (rad /s)')
    
    # Discrete Fourier transform of the angle
    f_fft = np.fft.fftfreq(len(t), t[1]-t[0])
    y_fft = np.fft.fft(y[:,0])/np.sqrt(2*len(t))

    # Plot Power Spectrum (positive frequencies only)
    ax_3 = fig.add_subplot(224)
    ax_3.plot(f_fft[:len(t)//2]*2*np.pi, abs(y_fft[:len(t)//2]), c='r')
    ax_3.set_xlim([0, 30])
    ax_3.set_xlabel('Angular Frequency (rad /s)')
    ax_3.set_ylabel('Power')

In [162]:
plot_pendulum(t,y)


The phase diagram (green, bottom left) shows each possible physical state of the system. We see the undamped, unforced pendulum follows a regular orbit. The power spectrum (red, bottom right) is found by taking a discrete Fourier transform using NumPy's fft module. We see a single narrow peak at $\omega \approx \sqrt{\alpha}$, the pendulum's natural frequency.

Adding Friction

To better model a physical pendulum we might add a friction constant $k$, and see what happens when we solve the system with this included.


In [163]:
friction_constant = .5 # [kg /s]
solve_args['beta'] = friction_constant/mass_pendulum

# Solve Pendulum ODE with RK4
y = ode_int_rk(pendulum, initial_cond, t, solve_args)

plot_pendulum(t,y)


We see that over time the friction attenuates the pendulum's swing. The phase space trajectory no longer follows a regular orbit but spirals inward, and the spectral peak has broadened into a spread of lower frequency components.

Adding a Driving Force

Finally we switch on the periodic driving force and solve the system once more.


In [164]:
driving_force = 10. # [N]
driving_freq = 10. # [rad /s]

solve_args['gamma'] = driving_force/mass_pendulum/length_pendulum
solve_args['omega'] = driving_freq

# Solve Pendulum ODE with RK4
y = ode_int_rk(pendulum, initial_cond, t, solve_args)

plot_pendulum(t,y)


The driven system is of course more interesting! The phase space trajectory looks chaotic, but the pendulum's behaviour is clear in the power spectrum, where we now see an additional peak at the driving frequency $\omega_d$.

Anyway, there's plenty of interesting behaviour to explore in the damped, forced pendulum system, but as we're interested in the integration methods here, we'll move on. The takeaway is: by converting our second-order ODE into a system of first-order ODEs, we were able to solve it using the methods we'd already written. And crucially, this can be done for any higher-order ODE.

The Two-Step Adams-Bashforth Method

Remember that with these different methods, we're always looking to do the same thing: to choose a numerical approximation to the integral in

$$ y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} \! f(t,y(t)) \, \mathrm{d}t. \qquad \color{grey}{[1]} $$

So far, we've looked at single-step methods where we only use one known point to estimate the next one. But one way to improve the accuracy of our method is to use more known points. Such approximations are called multistep methods, and we'll look at an example now.

Where the Explicit Euler method takes the slope $f$ to be a constant on the interval $[t_n, t_{n+1}]$, the idea behind Adams-Bashforth methods is to approximate $f$ by a Lagrange interpolating polynomial:

$$ P(t) = \sum_{j=1}^{m}{P_j(t)} $$

where

$$ P_j(t) = y_j \prod_{\substack{k=1 \\ k \ne j}}^{m}{ \frac{t - t_k}{t_j - t_k} }. $$

Here $P(t)$ is the polynomial of degree $\le (m-1)$ that passes through the $m$ points $(t_1, y_1 = f(t_1))$, $(t_2, y_2 = f(t_2))$ $\dots$ $(t_m, y_m = f(t_m))$. We'll take the linear $(m = 2)$ interpolant through the point $t_{n}$ and the earlier point $t_{n-1}$, so we have

$$ P(t) = f(t_n, y_n)\frac{t - t_{n-1}}{t_n - t_{n-1}} + f(t_{n-1}, y_{n-1})\frac{t - t_{n}}{t_{n-1} - t_n}. $$

Now if we put this approximating polynomial into the integral in $\color{grey}{[1]}$, we find

\begin{align} \int_{t_n}^{t_{n+1}} \! f(t,y(t)) \, \mathrm{d}t \approx \int_{t_n}^{t_{n+1}} \! P(t) \, \mathrm{d}t &= \int_{t_n}^{t_{n+1}} \! \left[ f(t_n, y_n)\frac{t - t_{n-1}}{t_n - t_{n-1}} + f(t_{n-1}, y_{n-1})\frac{t - t_{n}}{t_{n-1} - t_n} \right] \mathrm{d}t \\ &= \frac{(t_{n+1} - t_n)}{2(t_n - t_{n-1})} \left[ f(t_n,y_n)(t_n + t_{n+1} - 2t_{n-1}) - f(t_{n-1},y_{n-1})(t_{n+1} - t_n) \right]. \end{align}

Step Sizes

If we let $h_1 := t_n - t_{n-1}$ and $h_2 := t_{n+1} - t_n$ then

$$ \int_{t_n}^{t_{n+1}} \! P(t) \, \mathrm{d}t = \frac{h_2}{2 h_1} \left[ (2 h_1 + h_2) f(t_n,y_n) - h_2 f(t_{n-1},y_{n-1}) \right]. $$
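If you'd rather not grind through that integral by hand, it can be verified symbolically. Here's a quick check with SymPy (which isn't otherwise used in this notebook):


In [ ]:
import sympy as sp

t, t_n, h_1, h_2, f_n, f_nm1 = sp.symbols('t t_n h_1 h_2 f_n f_nm1')

# Linear Lagrange interpolant through (t_{n-1}, f_{n-1}) and (t_n, f_n),
# with t_{n-1} = t_n - h_1 and t_{n+1} = t_n + h_2
t_nm1 = t_n - h_1
t_np1 = t_n + h_2
P = f_n*(t - t_nm1)/(t_n - t_nm1) + f_nm1*(t - t_n)/(t_nm1 - t_n)

# The integral of P over [t_n, t_{n+1}] should agree with
# h_2/(2 h_1) * ((2 h_1 + h_2) f_n - h_2 f_nm1)
integral = sp.integrate(P, (t, t_n, t_np1))
target = h_2/(2*h_1)*((2*h_1 + h_2)*f_n - h_2*f_nm1)
print(sp.simplify(integral - target)) # 0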

Putting this back into $\color{grey}{[1]}$, we get

$$ y(t_{n+1}) \approx y(t_{n}) + \frac{h_2}{2 h_1} \left[ (2 h_1 + h_2) f(t_n,y_n) - h_2 f(t_{n-1},y_{n-1}) \right] $$

and our sequence of approximation points $y_n$ is calculated as

$$ y_{n+1} = y_n + \frac{h_2}{2 h_1} \left[ (2 h_1 + h_2) f(t_n,y_n) - h_2 f(t_{n-1},y_{n-1}) \right] $$

for $n = 1, 2, \dots N$.

If the steps are of equal size, i.e. $h := h_1 = h_2$, we find

$$ y_{n+1} = y_n + \frac{3}{2} h f(t_n,y_n) - \frac{1}{2} h f(t_{n-1}, y_{n-1}), $$

which is the standard two-step Adams-Bashforth method. Note that a two-step method needs both $y_0$ and $y_1$ before it can get going, so in practice $y_1$ is computed first with a single-step method, as in the sketch below.
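Here's a minimal sketch of this method in the style of ode_int_rk above. The name ode_int_ab2 is our own, it assumes the points in t are equally spaced, and it bootstraps $y_1$ with a single RK4 step:


In [ ]:
def ode_int_ab2(func, y_0, t, args={}):
    """ Two-step Adams-Bashforth approximation to a first-order ODE system
    with initial conditions. Assumes equally spaced time points t.
    """

    y = np.zeros([len(t), len(y_0)])
    y[0] = y_0

    h = t[1] - t[0] # equal step size

    # A two-step method needs y_1 to start, so bootstrap with one RK4 step
    k_1 = func(t[0], y[0], args)
    k_2 = func(t[0]+h/2., y[0]+h/2.*k_1, args)
    k_3 = func(t[0]+h/2., y[0]+h/2.*k_2, args)
    k_4 = func(t[0]+h, y[0]+h*k_3, args)
    y[1] = y[0] + h/6.*(k_1 + 2.*k_2 + 2.*k_3 + k_4)

    # Loop through the remaining steps with the two-step AB formula
    for i in range(1, len(t)-1):
        y[i+1] = (y[i] + 3./2.*h*func(t[i], y[i], args) -
                  1./2.*h*func(t[i-1], y[i-1], args))

    return y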

Accuracy

Replacing $f(t,y(t))$ with the interpolant $P(t)$ incurs a global error of order $\mathcal{O}(h^m)$, so in the case of the two-step method we have $\mathcal{O}(h^2)$.
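We can check that claim empirically by rerunning the stepsize loop from In [5] with the ode_int_ab2 sketch and the exp test function. Note that N and solve_args were reused for the pendulum above, so we rebuild them first:


In [ ]:
solve_args = {'a': 1.} # rebuild the args for the exp test function
N = 2**np.arange(2, max_N) # rebuild the range of step counts

y_end_ab = np.zeros(len(N)) # array to fill with the final values

for i, N_i in enumerate(N): # loop over different numbers of steps

    t = np.linspace(0., t_max, N_i+1)
    y_end_ab[i] = ode_int_ab2(exp, y_0, t, solve_args)[-1, 0]
    stepsize[i] = t_max/N_i

plt.loglog(stepsize, abs(y_end_ab - np.exp(solve_args['a']*t_max)),
           'b-o', label='Global error')
plt.loglog(stepsize, stepsize**2, 'k--', label=r'$h^2$')
plt.xlabel(r'$h$')
plt.legend(loc=2)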

Note that if you follow the same derivation with $m = 1$ you get the Explicit Euler method — so Explicit Euler is in fact the one-step Adams-Bashforth method.

Next

In Part 3 we'll look at an example of a stiff problem, which resists solution by the Explicit Euler method, and so discover another measure of a numerical method to go with its accuracy: the numerical stability. We'll then introduce implicit methods, which allow us to solve stiff problems.