The Explicit Euler Method and Order of Accuracy

By Thomas P Ogden (t.p.ogden@durham.ac.uk)

Here we introduce the numerical integration of ODEs using finite difference (FD) methods, starting with the Explicit Euler method. We use it to solve a first-order ODE and compare the result with the known analytic solution to check the method's order of accuracy.


In [6]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Introduction

Say we have an ordinary differential equation (ODE) $y' = f(t,y(t))$ with an initial condition $y(t_0) = y_0$, and we want to solve it numerically. If we know $y(t)$ at a time $t_n$ and want to know what $y$ is at a later time $t_{n+1}$, the fundamental theorem of calculus tells us that we find it by integrating $y'$ over the time interval,

$$ y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} \! y'(t) \, \mathrm{d}t = y(t_n) + \int_{t_n}^{t_{n+1}} \! f(t,y(t)) \, \mathrm{d}t.$$

The idea behind any ODE solver is to compute the right-hand-side integral for some numerical approximation of $f$. The problem is then computed over a series of steps $n = 1, 2, \dots, N$ to give a sequence of points $y_n$ which approximate $y(t)$ to some order of accuracy as a function of the stepsize. The method is consistent if the local error (i.e. the error incurred from step $n$ to step $n+1$) goes to zero faster than the stepsize $t_{n+1} - t_n$ does.

The Explicit Euler Method

The first numerical approximations we're going to look at are based on Taylor series expansions. We know we can expand any infinitely differentiable function $y$ in a Taylor series around the point $t_n$ as

$$ y(t_{n+1}) = y(t_{n}) + \frac{y'(t_n)}{1!} h + \frac{y''(t_n)}{2!} h^2 + \frac{y'''(t_n)}{3!} h^3 + \dots $$

where $h := t_{n+1} - t_{n}$ is the distance between the points. We'll focus on methods where $h$ is the same for every step $n$ (i.e. fixed-stepsize methods).

The most basic approximation, then, called the Explicit Euler method, is to truncate the series at the linear term, discarding quadratic and higher terms. Substituting in our ODE $y' = f(t,y(t))$ gives us the finite difference step

$$ y(t_{n+1}) = y(t_n) + hf(t_n, y(t_n)) + \mathcal{O}(h^2) $$

and so our sequence of approximation points $y_n$ is calculated as

$$ y_{n+1} = y_n + hf(t_n, y_n). $$

As we decided to truncate the Taylor expansion after the linear term, the error at each step, called the local error, will be $\mathcal{O}(h^2)$ for small $h$. Over the $N \sim 1/h$ steps needed to cross a fixed interval these errors accumulate, so the cumulative error, called the global error, will be $\mathcal{O}(h^1)$.
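
For example, take $y' = y$ with $y(0) = 1$ and a single step of size $h = 0.5$. The Euler step gives

$$ y_1 = y_0 + h y_0 = 1 + 0.5 = 1.5, $$

while the exact value is $y(0.5) = \mathrm{e}^{0.5} \approx 1.649$, so the local error is about $0.15$, comparable to $h^2 = 0.25$. Halving the step to $h = 0.25$ gives $y_1 = 1.25$ against $y(0.25) \approx 1.284$, an error of about $0.034$, roughly a quarter of before, just as the $\mathcal{O}(h^2)$ scaling predicts.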

Implementing the Method in Python

We'll now define a function to implement the Explicit Euler method for a first-order ODE given an initial condition. Below we'll try it out on a test problem.


In [7]:
def ode_int_ee(func, y_0, t, args={}):
    """ Explicit Euler approximation to an first-order ODE system with initial
    conditions.

    Args:
        func: (callable) The first-order ODE system to be approximated.
        y_0: (array) The initial condition.
        t: (array) A sequence of time points for which to solve for y.
        args: (dict) Extra arguments to pass to function.

    Returns:
        y: (array) The approximated solution of the system at each time in t,
            with the initial value y_0 in the first row.
    """

    y = np.zeros([len(t), len(y_0)]) # Initialise the approximation array
    y[0] = y_0

    for i, t_i in enumerate(t[:-1]): # Loop through the time steps

        h = t[i+1] - t_i # size of the step
        y[i+1] = y[i] + h*func(t_i, y[i], args) # Euler step

    return y
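
As a quick sanity check, we can feed the solver an ODE for which the Euler step is exact: for $f(t, y) = 1$ each step adds exactly $h$, so the result should be $y = t$ to machine precision. (The unit_slope function here is just a throwaway for this check.)

In [ ]:
def unit_slope(t, y, args):
    """ Trivial ODE y' = 1, with exact solution y = t + y_0. """
    return np.ones_like(y)

t_check = np.linspace(0., 1., 5)
y_check = ode_int_ee(unit_slope, np.array([0.]), t_check)
print(y_check[:, 0]) # expect [0., 0.25, 0.5, 0.75, 1.]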

Example 1: An Exponential

To see that our Explicit Euler solver actually works, let's take a simple ODE:

$$y'(t) = ay(t)$$

with initial value $y(0) = 1$ and $a \in \mathbb{C}$. We know the analytic solution of this equation is $y = \mathrm{e}^{at}$, so we can check the accuracy of the method against it.

Here's that ODE written as a Python function.


In [8]:
def exp(t, y, args):
    """ An exponential function described as a first-order ODE. """
    
    dydt = args['a']*y
    return dydt

I've gone for a one-line docstring in this case as the ODE function is short. We would document the arguments and return value for a more involved system. It might seem odd to have $a$ passed as an item of the dict args, but for more involved problems we'll want to pass more arguments, and this keeps the interface tidy.
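
For instance, a made-up two-parameter variant $y' = ay + b$ slots straight into ode_int_ee with both values carried in the one dict (exp_with_offset is just for illustration):

In [ ]:
def exp_with_offset(t, y, args):
    """ Illustrative two-parameter ODE y' = a*y + b. """
    return args['a']*y + args['b']

# Both parameters travel in the same args dict.
y_off = ode_int_ee(exp_with_offset, np.array([1.]),
                   np.linspace(0., 1., 11), {'a': 1., 'b': 0.5})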

Solving the Problem

Let's set our initial condition $y_0$ and make our array of time points $t_0, t_1, \dots, t_N$.


In [9]:
y_0 = np.array([1.]) # Initial condition

N = 10 # Number of steps to take
t_max = 5.
t = np.linspace(0., t_max, N+1) # Array of time points

And we need our factor $a$, which we'll just set to $1$ for now.


In [10]:
solve_args = {'a':1}

Now we're ready to solve using the Explicit Euler integrator we wrote, and we'll plot the results alongside the known analytic result $y = \mathrm{e}^{t}$.


In [11]:
y = ode_int_ee(exp, y_0, t, solve_args) # Solve the ODE with Explicit Euler

plt.plot(t, y[:,0], 'b-o', label='Explicit Euler')
plt.plot(t, np.exp(solve_args['a']*t), 'k--', label='Known')
plt.xlabel(r'$t$')
plt.ylabel(r'$y(t)$')
plt.legend(loc=2)

plt.show()


OK, so with $N = 10$ steps the numerical solution (blue dots) follows the known exponential behaviour, but is underestimating the known result (black dashed line). We know from the Taylor series truncation we made that the error in the approximation scales with the stepsize, so if we want a more accurate solution the first thing to do is to make the stepsize $h$ smaller. Let's try that.

Checking Accuracy

To see that the accuracy improves with smaller $h$, we'll solve the system for different numbers of steps: $N = 5, 10, 20, 40$ (corresponding to $h = 1, \frac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}$).


In [12]:
for N in [5, 10, 20, 40]:

    t = np.linspace(0., t_max, N+1) # Time steps
    y = ode_int_ee(exp, y_0, t, solve_args)
    plt.plot(t, y, '-o', label='%d steps'%N) 
    
plt.plot(t, np.exp(solve_args['a']*t), 'k--', label='Known')

plt.xlabel(r'$t$')
plt.ylabel(r'$y$')
plt.legend(loc=2)

plt.show()

This looks good: the solution converges on the known result as we increase the number of steps, which tells us the method is consistent, as any useful method must be. We can check that the order of accuracy is indeed $\mathcal{O}(h^1)$ by plotting the global error at $t = 5$, given by $| \, y_N - \mathrm{e}^5 \, |$, over a large range of stepsizes $h$.


In [13]:
max_N = 16
N = 2**np.arange(2, max_N) # N = 4, 8, 16, ..., 2^(max_N - 1)

order_check = 1 # for visual check of the order of accuracy

y_end = np.zeros(len(N)) # array to fill with the final values
stepsize = np.zeros(len(N)) # array to fill with the stepsizes

for i, N_i in enumerate(N): # loop over different numbers of steps

    t = np.linspace(0., t_max, N_i+1)
    y_end[i] = ode_int_ee(exp, y_0, t, solve_args)[-1, 0] # scalar final value
    
    stepsize[i] = t_max/N_i
    
plt.loglog(stepsize, abs(y_end - np.exp(solve_args['a']*t_max)), 
           'b-o', label='Global error')
plt.loglog(stepsize, stepsize**order_check,'k--', label=r'$h^1$')
plt.xlabel(r'$h$')
plt.legend(loc=2)

plt.show()

We see that the slope is constant below $h \approx 0.1$, which tells us the order is constant there. We could find that order formally by fitting a straight line to this log-log data, but for now it's enough to compare the global error with the function $h^1$ (black dashed line). That the slopes match is a visual indication that the method is $\mathcal{O}(h^1)$. (Try comparing with $h^2$ by changing order_check in the code above to 2.)
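
If we want a number rather than a visual check, a straight-line fit to the log-log data gives the order as its slope. Here's a short sketch reusing the stepsize and y_end arrays from the cell above.

In [ ]:
err = abs(y_end - np.exp(solve_args['a']*t_max)) # global error at t = 5

# The slope of log(err) against log(h) estimates the order of accuracy.
slope, intercept = np.polyfit(np.log(stepsize), np.log(err), 1)
print(slope) # should come out close to 1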

Next

In Part 2 we'll look at a method with a higher order of accuracy, the Runge-Kutta method; we'll go beyond first-order to higher-order ODEs and solve a more interesting physical problem; and we'll look at multistep methods, where more than one previous point is used to get to the next step.