The logistic differential equation is a diffeq of the form $$ \frac{dx}{dt} = f(x) $$ where $$ f(x) = rx(1-x) $$ for some growth rate $r>0$ (in the code below we take $r=3$). The leap frog method for approximating solutions to this diffeq begins by replacing the derivative $dx/dt$ by a centered difference: $$ \frac{dx}{dt} \approx \frac{x(t+\Delta t) - x(t-\Delta t)}{2\Delta t}. \tag{1} $$ If we write $x_n = x(n\Delta t)$ for the successive approximations, then the $x_n$ satisfy
\begin{equation} \frac{x_{n+1} - x_{n-1}}{2\Delta t} = f(x_n) \implies x_{n+1} = x_{n-1} + 2\Delta t\, f(x_n). \tag{2} \label{eq:leapfrog} \end{equation} If we know $x_0$ and $x_1$, then this method tells us how to find $x_2, x_3, \dots$. Here $x_0$ is given as initial data; we have to come up with $x_1$ some other way. Below we will use the midpoint rule to approximate $x_1$.
The symmetric difference in (1) is a better approximation of the derivative than the forward difference $$ \frac{dx}{dt} \approx \frac{x(t+\Delta t) - x(t)}{\Delta t}, $$ which is used in the derivation of Euler's method: a Taylor expansion shows that the centered difference has error $O(\Delta t^2)$, while the forward difference only has error $O(\Delta t)$. So you might expect the leap frog method to be more accurate than Euler's method, but the leap frog method turns out to be problematic. In this notebook we compare the solutions of the leap frog method with those provided by the midpoint rule.
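For comparison, here is a minimal sketch of a single step of Euler's method based on the forward difference above. The name euler_step is not part of this notebook's code and is only for illustration; we will not use it below.

def euler_step(u, f, dt):
    """One step of Euler's method: replace dx/dt by the forward
    difference (x(t+dt) - x(t))/dt and solve for x(t+dt)."""
    return u + dt*f(u)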
In [2]:
# To begin the simulation we import the necessary pyplot stuff
import numpy
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
Next, we define a function that returns the right hand side of the logistic ODE, and also a function that performs one step of the leapfrog method.
In [3]:
def f(x):
    """the right hand side in the logistic differential equation"""
    return 3*x*(1.0-x)

function_formula = "$3x(1-x)$"  # this string goes in the title of our figure

def leapfrog(u0, u1, f, dt):
    """Compute the solution at the next time step by applying the
    leap frog method to the previous two known solution points:
    u0 is the solution two steps back (x_{n-1}) and u1 is the most
    recent solution (x_n)."""
    return u0 + 2*dt*f(u1)
We need the midpoint method to estimate $x_1$, and we will also use it to compare the leapfrog solution with a solution we trust.
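For reference, one step of the explicit midpoint rule implemented below reads $$ x_{n+1} = x_n + \Delta t\, f\!\Big(x_n + \tfrac{\Delta t}{2}\, f(x_n)\Big), $$ that is, an Euler half-step to the midpoint of the interval followed by a full step using the slope there.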
In [4]:
def midpoint_step(u, f, dt):
    """Returns the solution at the next time-step using the midpoint method.

    Parameters
    ----------
    u : float
        solution at the previous time-step.
    f : function
        function to compute the right-hand side of the equation.
    dt : float
        time-increment.

    Returns
    -------
    u_n_plus_1 : float
        approximate solution at the next time step.
    """
    umid = u + dt*f(u)/2
    return u + dt*f(umid)
At this point we begin the computation. We will compute the solution over the time interval $0 \le t \le T$, taking $N$ steps. We store the results of the leap frog and midpoint methods in two arrays, `ulp` and `umdp`.
In [5]:
# the parameters
N=200
T=20.0
dt = T/N
u0 = 0.02 # initial value of the solution
# create arrays with indices 0, 1, 2, ..., N:
# they must have N+1 entries since we include 0.
times = numpy.linspace(0.0, T, N+1)
ulp = numpy.zeros(N+1)
umdp = numpy.zeros(N+1)
# assign initial data
ulp[0] = u0
umdp[0] = u0
In [6]:
# Compute umdp[1] and use it as u1 for the leap frog method
umdp[1] = midpoint_step(u0, f, dt)
ulp[1] = umdp[1]

# Compute the remaining solution points
for i in range(2, N+1):
    umdp[i] = midpoint_step(umdp[i-1], f, dt)
    ulp[i] = leapfrog(ulp[i-2], ulp[i-1], f, dt)
In [7]:
# now we plot
# visualization of the path
plt.figure(figsize=(12,6))
plt.grid(True)
plt.xlabel(r't', fontsize=18)
plt.ylabel(r'u', fontsize=18)
plt.title(r"Approximate solutions of $x'=$ %s. Time step: $\Delta t=$%.3f" % (function_formula, dt))
plt.plot(times, ulp, 'r-', lw=1, label='Leap frog')
plt.plot(times, umdp, 'g-', lw=2, label='Mid point rule')
# uncomment the following two lines to plot the
# even entries and the odd entries in the leapfrog approximation
#plt.plot(times[0::2], ulp[0::2], 'y-', lw=3)
#plt.plot(times[1::2], ulp[1::2], 'm-', lw=3)
plt.legend(loc='upper left');
While the solution of the leapfrog method oscillates wildly, it appears to oscillate between two smooth curves (you can see them by uncommenting the two lines in the code above).
To see what is happening, take the sequence of leap frog values $$ \dots, x_{n-2}, x_{n-1}, x_{n}, x_{n+1}, x_{n+2}, \dots $$ and separate the even-numbered from the odd-numbered entries: let $$ p_k = x_{2k}, \qquad q_k = x_{2k+1}. $$ So $$ x_0, x_1, x_2, x_3, x_4, \dots = p_0, q_0, p_1, q_1, p_2, \dots $$ Write the leap frog equation \eqref{eq:leapfrog} twice: once for $n$ even, and once for $n$ odd. You get \begin{align*} x_{2k} &= x_{2k-2} + 2\Delta t\, f(x_{2k-1}) & \implies p_k &= p_{k-1} + 2\Delta t\, f(q_{k-1})\\ x_{2k+1} &= x_{2k-1} + 2\Delta t\, f(x_{2k}) & \implies q_k &= q_{k-1} + 2\Delta t\, f(p_k) \end{align*} The equations for $p_k$ and $q_k$ are almost the Euler method with time step $2\Delta t$ for the system of differential equations \begin{align*} p' &= f(q)\\ q' &= f(p) \end{align*} where $p_k \approx p(2k\Delta t)$ and $q_k \approx q(2k\Delta t)$ ("almost" because the update for $q_k$ uses $f(p_k)$ rather than $f(p_{k-1})$). The leap frog method isn't solving one differential equation $x' = f(x)$, but a system of two diffeqs $p' = f(q)$, $q' = f(p)$! As long as $p = q$ the system of diffeqs reduces to the original single equation, but apparently there are points where a small difference between $p$ and $q$ gets amplified, and the leap frog method shows us a solution of $p' = f(q)$, $q' = f(p)$ with $p \neq q$.
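To check this interpretation numerically, here is a minimal sketch (not part of the original notebook, and assuming the cells above have been run) that integrates the system $p'=f(q)$, $q'=f(p)$ with Euler's method and step $2\Delta t$, starting from $p_0 = x_0$ and $q_0 = x_1$, and compares the result with the even and odd entries of the leap frog solution. The names M, peuler and qeuler are made up for this illustration.

# Integrate p' = f(q), q' = f(p) with Euler's method and step 2*dt,
# starting from the same values the leap frog method uses.
M = N // 2                       # number of Euler steps of size 2*dt
peuler = numpy.zeros(M+1)
qeuler = numpy.zeros(M+1)
peuler[0] = ulp[0]               # p_0 = x_0
qeuler[0] = ulp[1]               # q_0 = x_1
for k in range(1, M+1):
    peuler[k] = peuler[k-1] + 2*dt*f(qeuler[k-1])
    qeuler[k] = qeuler[k-1] + 2*dt*f(peuler[k-1])

# Compare with the even and odd entries of the leap frog solution
plt.figure(figsize=(12,6))
plt.plot(times[0::2], ulp[0::2], 'y-', lw=3, label='leap frog, even entries')
plt.plot(times[1::2], ulp[1::2], 'm-', lw=3, label='leap frog, odd entries')
plt.plot(2*dt*numpy.arange(M+1), peuler, 'k--', lw=2, label="Euler for $p'=f(q)$")
plt.plot(dt + 2*dt*numpy.arange(M+1), qeuler, 'b--', lw=2, label="Euler for $q'=f(p)$")
plt.legend(loc='upper left');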
Play with the parameters: what happens when you double $N$? What happens if you change the initial value?
Try different functions $f$.
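For example (this cell is not part of the original notebook), one could try $f(x) = \sin x$ by redefining f and function_formula and re-running the cells above:

# Hypothetical example of a different right-hand side: f(x) = sin(x).
# Re-run the cells above after redefining these two names.
def f(x):
    """right hand side of the differential equation x' = sin(x)"""
    return numpy.sin(x)

function_formula = r"$\sin x$"  # goes in the title of the figure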
In [1]:
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, "r").read())