Higher order: Taylor’s series

We can go a step beyond Euler’s method by keeping terms up to second order in the expansion around $x_0$. Doing so we obtain $$y(x+\Delta x)=y(x)+y'(x)\Delta x+\frac{1}{2}y''(x)(\Delta x)^2+O(\Delta x^3).$$ From ([first]) we get $$\begin{eqnarray} y'(x)&=&f(x,y), \\ y''(x)&=&\frac{df}{dx}=\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}\frac{dy}{dx}=\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y} f. \end{eqnarray}$$

Substituting in ([taylor]) we obtain

$$y_{n+1}=y_n+f\,\Delta x+\frac{1}{2}(\Delta x)^2\left[\frac{\partial f}{\partial x}+f\frac{\partial f}{\partial y}\right]+O(\Delta x^3),$$

where all the functions and derivatives are evaluated at $(x_n,y_n)$.
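As a concrete illustration, here is a minimal sketch of this second-order Taylor step in Python. The test problem $y'=-y$, $y(0)=1$ (exact solution $e^{-x}$) and the names taylor2, dfdx, dfdy are assumptions made for this example; the method requires the partial derivatives of $f$ to be supplied analytically.

In [ ]:
import numpy as np

# Second-order Taylor step: y_{n+1} = y_n + f*dx + (dx^2/2)*(df/dx + f*df/dy).
# Illustrative test problem (an assumption, not from the text): f(x, y) = -y,
# so df/dx = 0, df/dy = -1, and the exact solution is y = exp(-x).
def f(x, y):
    return -y

def dfdx(x, y):
    return 0.0

def dfdy(x, y):
    return -1.0

def taylor2(f, dfdx, dfdy, x0, y0, dx, nsteps):
    x, y = x0, y0
    for _ in range(nsteps):
        y += f(x, y)*dx + 0.5*dx**2*(dfdx(x, y) + dfdy(x, y)*f(x, y))
        x += dx
    return y

print(taylor2(f, dfdx, dfdy, 0.0, 1.0, 0.1, 10), "exact:", np.exp(-1.0))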

Multistep or Predictor-Corrector methods

We can achieve higher accuracy by relating $y_{n+1}$ not only to $y_n$, but also to points further in the past, $y_{n-1}, y_{n-2}, \ldots$ To derive such formulas we can integrate ([first]) exactly to obtain: $$y_{n+1}=y_n+\int_{x_n}^{x_{n+1}}f(x,y)\,dx.$$

The problem is that we don’t know $f(x,y)$ over the interval $(x_n,x_{n+1})$. However, we can use the values of $y$ at $x_n$ and $x_{n-1}$ to build a linear extrapolation: $$f=\frac{(x-x_{n-1})}{\Delta x}f_n-\frac{(x-x_n)}{\Delta x} f_{n-1}+O(\Delta x^2),$$ with $f_n=f(x_n,y_n)$. Inserting this into ([integral]) and integrating, we obtain $$y_{n+1}=y_n+\Delta x\left(\frac{3}{2}f_n-\frac{1}{2}f_{n-1}\right)+O(\Delta x^3).$$ Note that the value of $y_0$ is not sufficient information to get this algorithm started; the value of $y_1$ has to be obtained first by some other procedure, such as one of the methods described previously. This means that the method is not "self-starting".
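Here is a minimal sketch of this two-step scheme (the second-order Adams-Bashforth formula) in Python, using the same illustrative test problem $y'=-y$. Since the method is not self-starting, $y_1$ is obtained here with a single Euler step; a second-order starter such as the Runge-Kutta method below would preserve the overall accuracy.

In [ ]:
# Two-step predictor: y_{n+1} = y_n + dx*(3/2 f_n - 1/2 f_{n-1}).
def adams_bashforth2(f, x0, y0, dx, nsteps):
    x, y = x0, y0
    f_prev = f(x, y)        # f_0
    y += dx*f_prev          # Euler step supplies y_1 (method is not self-starting)
    x += dx
    for _ in range(nsteps - 1):
        f_curr = f(x, y)
        y += dx*(1.5*f_curr - 0.5*f_prev)
        x += dx
        f_prev = f_curr
    return y

print(adams_bashforth2(lambda x, y: -y, 0.0, 1.0, 0.1, 10))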

Runge-Kutta methods

2nd order Runge-Kutta

Euler’s method rests on the idea that the slope at one point can be used to extrapolate to the next. A plausible way to obtain a better estimate of the slope is to extrapolate to a point halfway across the interval, and then to use the derivative at this midpoint to extrapolate across the whole interval. Thus,

$$\begin{eqnarray} k&=&\Delta x f(x_n,y_n), \\ y_{n+1}&=&y_n+\Delta x f(x_n+1/2\Delta x, y_n+1/2k) + O(\Delta x^3).\end{eqnarray}$$

It has the same accuracy as the Taylor series expansion ([taylor]), but instead of the partial derivatives of $f$ it requires two evaluations of $f$ at each step.
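In Python the step reads as below; again the test problem $y'=-y$ and the function name rk2 are illustrative assumptions.

In [ ]:
# Second-order Runge-Kutta (midpoint) step.
def rk2(f, x0, y0, dx, nsteps):
    x, y = x0, y0
    for _ in range(nsteps):
        k = dx*f(x, y)                      # slope at the left endpoint
        y += dx*f(x + 0.5*dx, y + 0.5*k)    # full step using the midpoint slope
        x += dx
    return y

print(rk2(lambda x, y: -y, 0.0, 1.0, 0.1, 10))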

4th order Runge-Kutta

Similar ideas can be used to derive a 3rd or 4th order Runge-Kutta method. It has been found by experience that the best balance between accuracy and computational effort is given by a fourth-order algorithm. Such a method requires evaluating $f$ four times at each step, with a local accuracy of $O(\Delta x^5)$. It can be written as follows: $$\begin{eqnarray} k_1&=&\Delta x f(x_n,y_n), \\ k_2&=&\Delta x f(x_n+1/2\Delta x,y_n+1/2k_1), \\ k_3&=&\Delta x f(x_n+1/2\Delta x,y_n+1/2k_2), \\ k_4&=&\Delta x f(x_n+\Delta x,y_n+k_3), \\ y_{n+1}&=&y_n+\frac{1}{6}(k_1+2k_2+2k_3+k_4)+O(\Delta x^5).\end{eqnarray}$$
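A direct transcription of these formulas in Python, on the same illustrative test problem (the function name rk4 is an assumption):

In [ ]:
# Classical fourth-order Runge-Kutta step, transcribing k_1..k_4 above.
def rk4(f, x0, y0, dx, nsteps):
    x, y = x0, y0
    for _ in range(nsteps):
        k1 = dx*f(x, y)
        k2 = dx*f(x + 0.5*dx, y + 0.5*k1)
        k3 = dx*f(x + 0.5*dx, y + 0.5*k2)
        k4 = dx*f(x + dx, y + k3)
        y += (k1 + 2*k2 + 2*k3 + k4)/6.0
        x += dx
    return y

print(rk4(lambda x, y: -y, 0.0, 1.0, 0.1, 10))  # compare with exp(-1) ≈ 0.3678794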

Runge-Kutta methods are self-starting, meaning that they can be used to obtain the first few iterations for a non-self-starting algorithm such as the predictor-corrector method above.

