Ordinary Differential Equations

Some elements of this lesson come from materials developed by Matt Moelter and Jodi Christiansen for PHYS 202 at Cal Poly.


Instructions: Create a new directory called ODEs with a notebook called ODEsTour. Give it a heading 1 cell title Ordinary Differential Equations. Read this page, typing in the code in the code cells and executing them as you go.

Do not copy/paste.

Type the commands yourself to get the practice doing it. This will also slow you down so you can think about the commands and what they are doing as you type them.

Save your notebook when you are done, then try the accompanying exercises.



In [ ]:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

Introduction

Differential equations relate functions of one or more variables to their derivatives. "Ordinary" differential equations are those involving functions of a single independent variable. This is in contrast to "partial" differential equations, which involve functions of several independent variables.

There are many examples of ODEs in physics. Some of them are linear equations with closed form analytic solutions that we can solve by hand. Nonlinear equations are harder to solve. Some have solutions that can be expressed analytically in terms of series expansions or integrals, while others have no closed-form solutions and must be solved numerically. Computers and libraries of ODE solving algorithms make it possible to explore the dynamics of nonlinear equations quantitatively.

We'll explore a few interesting examples in this tour.

The Simple Pendulum as an example

Solving for the motion of a simple pendulum involves a second-order differential equation that has no closed-form solution in terms of elementary functions. It is ripe to be solved numerically. Let's review the physics:

If you use Newton's second law to write the sum of the forces on the mass, $m$, you get

$\sum\vec{F} = \vec{T}+\vec{w} = m\vec{a}$

which is a vector expression that needs to be broken up into its components. Since the length of the pendulum is fixed, the mass is constrained to move along a circular arc: the radial equation only determines the tension in the string (it must supply the centripetal force in addition to balancing gravity's radial component), so the dynamics of the swing come entirely from the acceleration tangential to the path of the mass, $m$. Given this, it makes the most sense to use polar coordinates.

$F_{radial} = T - mg\cos\theta = ma_{radial} = m\ell\left(\frac{d\theta}{dt}\right)^2$

$F_{tangential} = -mg\sin\theta = ma_{tangential}$

The linear acceleration along the tangential direction is related to $\theta$ through the arc-length $s$ that the mass makes as it swings:

$$s = \ell\theta$$

$$ v_{tangential} = \frac{ds}{dt} = \ell \frac{d\theta}{dt} $$

$$ a_{tangential} = \frac{d^2s}{dt^2} = \ell \frac{d^2\theta}{dt^2} $$

which gives the following differential equation that we need to solve

$$ \frac{d^2\theta}{dt^2} = -\frac{g}{\ell}\sin\theta $$

Due to the $\sin\theta$ on the right-hand side, this has no closed-form solution in terms of elementary functions. So we have only two possible paths forward:

  1. Approximate

  2. Solve numerically

Approximate solution

Very often in physics, you can learn a lot about a system by making simplifying assumptions and examining the system's behavior given these assumptions. In the case of the simple pendulum, we typically restrict the oscillations to small angles, $\theta$, which allows us to use the Maclaurin series expansion of $\sin\theta$ and ignore terms higher than order $\mathcal{O}(\theta)$. Recall

$$ \sin\theta = \theta -\frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \frac{\theta^7}{7!} + ... $$
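How far off is the first-order truncation? Here is a quick numerical check (a sketch) comparing $\theta$ to $\sin\theta$ at the two angles we'll use below:


In [ ]:
# compare theta (the small-angle stand-in) to sin(theta) at 20 and 50 degrees
for deg in (20., 50.):
    th = deg*np.pi/180
    err = 100.*(th - np.sin(th))/np.sin(th)
    print("%2.0f deg: theta = %.4f, sin(theta) = %.4f, error = %.1f%%" % (deg, th, np.sin(th), err))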

Now we can re-write our differential equation and solve it in closed form:

$$ \frac{d^2\theta}{dt^2} = -\frac{g}{\ell}\theta $$

which, for a pendulum released from rest at angle $\theta_0$, has the solution

$$\theta(t) = \theta_0\cos(\sqrt{g/\ell} t)$$

Let's plot it.


In [ ]:
g = 9.8 #m/s^2
ell = 0.5 #m
th0 = 20.*np.pi/180 # 20 degrees, but must use radians

t = np.arange(0.,10.+0.01,0.01)
theta_app = lambda t: th0*np.cos(np.sqrt(g/ell)*t)

plt.plot(t,theta_app(t),label='Approximate solution')
plt.xlabel(r'$t (seconds)$',fontsize=20)
plt.ylabel(r'$\theta (rad)$',fontsize=20)
plt.ylim(-1,1.5)
plt.grid()
plt.legend()
plt.show()

Basically what you'd expect. But this is only good for small angles. What if we wanted to start it out at 50 degrees? The small-angle approximation will not give us very accurate results. We have to try something else.

Numerical Solution

To solve this equation numerically using Python (or MATLAB), we need to do a little mathematical gymnastics. We need to convert our single second-order differential equation into two first order equations. This can be a tricky concept to grasp, so take a moment to really think about it. You'll have to keep all the variables and initial conditions straight in order to obtain a solution.

Generally, we use the variable $\omega = d\theta/dt $ to represent the angular velocity. It is also sometimes used for angular frequency, so be sure you understand the difference.

Let's rewrite our second order equation in terms of a first-order derivative of $\omega$

$$ \frac{d^2\theta}{dt^2} = \frac{d\omega}{dt} = -\frac{g}{\ell} \sin\theta $$

So now we have converted a single second-order differential equation into two first-order equations:

$$\left[\cfrac{d\theta}{dt} = \omega, \hspace{5mm} \cfrac{d\omega}{dt} = -\cfrac{g}{\ell}\sin\theta\right]$$

Notice that the first of these equations is really just a definition. To get $\theta(t)$ and $\omega(t)$ from this set of equations, we have to integrate them numerically.

The function $\theta(t)$ and its derivative $\omega(t)$ will be elements of an array that we'll call vals with $\theta(t)$ as the first element vals[0] and $\omega(t)$ as the second element vals[1].

We will set up a function deriv to compute the derivatives of $\theta(t)$ and $\omega(t)$ according to this system of equations as a function of time and then use scipy's odeint function to integrate them for us to get back the solutions $\theta(t)$ and $\omega(t)$.

Here's the function that returns the first derivative of each element in the array:


In [ ]:
# return derivatives of the array vals
def deriv(vals,t):  
    g = 9.8 #m/s^2
    ell = 0.5 #m
    theta = vals[0] #current value for theta
    omega = vals[1] #current value for omega
    dtheta = omega
    domega = -(g/ell)*np.sin(theta)
    return np.array([ dtheta, domega ]) #return derivatives of theta, omega

Note, this same function can be written more compactly as

def deriv(vals,t):  # return derivatives of the array vals
    g = 9.8 #m/s^2
    ell = 0.5 #m
    return np.array([ vals[1], -(g/ell)*np.sin(vals[0]) ])

But writing it the way we did in the code cell makes it very explicit what each step involves. The odeint solver takes the derivative function along with an array of initial values for $[\theta,\omega]$. At each time step it 1) evaluates the derivatives via deriv at the current values, which produces $[d\theta/dt,d\omega/dt]$, and then 2) integrates $[d\theta/dt,d\omega/dt]$ forward over the step, building up the solutions $[\theta(t),\omega(t)]$.

Many numerical ODE solvers do it exactly the same way (e.g. MATLAB).
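To see exactly what the solver receives, we can call deriv by hand at the initial condition (a quick sketch; the time argument is unused by our function, but odeint requires it in the signature):


In [ ]:
# evaluate the derivatives at theta = 20 degrees, omega = 0
print(deriv(np.array([20.*np.pi/180, 0.]), 0.))  # returns [dtheta/dt, domega/dt]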

Now we import odeint from scipy.integrate, set up our initial conditions, and solve:


In [ ]:
from scipy.integrate import odeint

#Try the help for more info
#help(odeint)

In [ ]:
t = np.arange(0.,10.+0.01,0.01)
th0 = 20.*np.pi/180 # 20 degrees, but must use radians
omega0 = 0. #starting from rest, omega = 0

th_init = np.array([th0, omega0]) # initial values

#pass the function deriv to the solver with initial values and integration variable
theta = odeint(deriv,th_init,t)

What do you think the shape of theta is? Try to figure it out and then check your answer.


In [ ]:
print(theta.shape)

Did you guess right?

Note that odeint returns the values of both the function theta[:,0] = $\theta(t)$ and its derivative theta[:,1] = $\omega(t)$ at each time step: one row per time point, one column per first-order equation. So we now have the solutions to both of our first-order equations. This is kind of nice because in one move we have obtained both the position $\theta(t)$ and angular velocity $\omega(t)$ as functions of time. We could use/plot either or both.
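For example, here is a quick look at the angular velocity column (a sketch using the t and theta arrays we just computed):


In [ ]:
# plot the angular velocity from the second column of the solution
plt.plot(t,theta[:,1],'g-',label=r'$\omega(t)$')
plt.xlabel(r'$t (seconds)$',fontsize=20)
plt.ylabel(r'$\omega (rad/s)$',fontsize=20)
plt.grid()
plt.legend()
plt.show()

Now let's plot the position and compare it to our previous approximate result.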


In [ ]:
plt.plot(t,theta_app(t),'b-',label="Approximate solution")
plt.plot(t,theta[:,0],'r--',label="Numerical solution")
plt.xlabel(r'$t (seconds)$',fontsize=20)
plt.ylabel(r'$\theta (rad)$',fontsize=20)
plt.ylim(-1,1.5)
plt.grid()
plt.legend()
plt.show()

It looks like the approximate solution does a decent job for the first few cycles, but starts to diverge later on. What if we start it off at a bigger initial angle?


In [ ]:
th0 = 50.*np.pi/180 # 50 degrees

th_init = np.array([th0, omega0]) # initial values
theta = odeint(deriv,th_init,t)

plt.plot(t,theta_app(t),'b-',label="Approximate solution")
plt.plot(t,theta[:,0],'r--',label="Numerical solution")
plt.xlabel(r'$t (seconds)$',fontsize=20)
plt.ylabel(r'$\theta (rad)$',fontsize=20)
plt.ylim(-1.,1.5)
plt.grid()
plt.legend()
plt.show()

Much more significant deviations appear for large angles. To get a good estimate of the motion of a simple pendulum over a wide range of motion, numerical solutions are the way to go. The nonlinear behavior will lead to some surprising physical results. You will get the chance to explore this behavior in the exercises. Be prepared to have your mind blown.

Making physics come alive

Before we move on, let's re-formulate our solution using the IPython interact widget. It will allow us to fully explore the dynamics of our numerical solution without having to hand-code the parameters. You will get the chance to do a similar task as part of the accompanying exercises, so this will be a useful example to get you started.


In [ ]:
from ipywidgets import interact, interactive  # on older installs: from IPython.html.widgets import interact, interactive

In [ ]:
def derivNLP(vals,t,g,ell):
    theta = vals[0]
    omega = vals[1]
    dtheta = omega
    domega = -(g/ell)*np.sin(theta)
    return np.array([ dtheta, domega ])


def nonlinear_pendulum(th0=20.,w0=0.,ell=0.5,g=9.8):

    t = np.arange(0.,10.+0.01,0.01)

    #Approximate analytic solution for comparison
    approx = th0*np.pi/180*np.cos(np.sqrt(g/ell)*t)
  
    #convert degrees to radians
    initVals = np.array([th0*np.pi/180,w0])

    #pass the function deriv to the solver with initial values and integration variable
    theta = odeint(derivNLP,initVals,t,args=(g,ell)) 
    #theta[:,0] are the positions
    #theta[:,1] are the angular velocities
    
    #Plot the results
    plt.plot(t,approx,label='Approximate solution')
    plt.plot(t,theta[:,0],label='Numerical solution')
    plt.xlabel(r'$t (seconds)$',fontsize=20)
    plt.ylabel(r'$\theta (rad)$',fontsize=20)
    #plt.ylim(-2.,2.)
    plt.grid()
    plt.legend()
    plt.show()

    
v = interact(nonlinear_pendulum,th0=(0.,90.),w0=(0.,10.),ell=(0.1,1.0),g=(1.0,20.0))

Now use the sliders to explore this system. What effect does changing the parameters have on the shape of the plot?


Breaking into the black box

In order to be able to solve for the motion of the simple pendulum using odeint, we had to convert the single second-order ODE into two first order ODEs. Let's peer inside scipy's black box to understand why.

Euler's Method

The most basic numerical algorithm for computing the solutions to ODEs was invented by Leonhard Euler in the 18th century - way before microprocessors. Back in those days, the solutions were cranked out by human computers.

Free-fall as an example

Let's start by thinking about the familiar free-fall problem where velocity is the derivative of position, and acceleration is the derivative of velocity. We can write this as a single second-order differential equation:

$$ a(t) = \frac{d^2y}{dt^2} $$

OR as two first order differential equations:

$$ a(t) = \frac{dv}{dt}, \hspace{1cm} v(t) = \frac{dy}{dt}. $$

Once the problem is first order, we can integrate instead of differentiating. Integrating the acceleration tells us the velocity at some later time if we know it at some earlier time; we can derive the position in a similar stepwise fashion from the velocity.

Starting at a time $t_0$ at some known position $y(t_0)$ and with some known velocity $v(t_0)$, we can calculate the change in velocity in going from a time $t_0$ to a time $t_1 = t_0 + \Delta t$

$$ v(t_1) = v(t_0) + a(t_0)\Delta t $$

and for the position:

$$ y(t_1) = y(t_0) + v(t_0)\Delta t $$

This is the Euler Method. It is based on a tangent line approximation to the function in question.

Let's denote $v(t_1)$ as $v_1$ and so on. Applying the method successively on subsequent time steps up to $n$ we can write $t_{i+1} = t_i + \Delta t$ and $t_{i} = t_0 + i\Delta t$, so that

$$ v_{i+1} = v_i + a_i\Delta t $$

$$ y_{i+1} = y_i + v_i\Delta t $$

Note that the velocity/position depend on the previous velocity/position, the acceleration/velocity and the time interval. So the acceleration need not be constant.

Let's say we want to numerically calculate the velocity and position of a mass falling freely under the force of gravity. Near the surface of the earth, we know that the acceleration of gravity is constant, $a(t)$ = $-g$ = -9.81 m/s$^2$. We can compute the new velocity from the previous velocity and the acceleration.

We can repeat this calculation for any number of time intervals, e.g. from $t_1$ to $t_2$, and then again for the next time interval, $t_2$ to $t_3$, and so on. In this way the solution for $v(t)$ and $y(t)$ can be generated as a function of time, at a discrete set of points.

This problem is especially simple because the acceleration, $a$, which is what we use to generate $v$ and $y$, is the same for each time step. In more complicated problems, the acceleration may change at each step because the force depends on either the position or velocity, or both.

Here is some simple code that implements Euler's method for free-fall:


In [ ]:
def euler_freefall():
    tmax = 10 # maximum time
    dt = 0.1 # time step
    a = -9.81 #g in m/s^2
    nsteps = int(tmax/dt)
    t = np.zeros(nsteps)
    v = np.zeros(nsteps)
    y = np.zeros(nsteps)
 
    # set initial conditions, redundant in this example
    t[0] = 0.
    v[0] = 0.

    for i in range(len(t)-1):
        t[i+1] = t[i] + dt
        v[i+1] = v[i] + dt * a
        y[i+1] = y[i] + dt * v[i]
        
    return t,v,y

te,ve,ye = euler_freefall()

Now we can plot the position and velocity as a function of time:


In [ ]:
plt.plot(te,ye,"r-",label="y(t)")
plt.plot(te,ve,"b-",label="v(t)")
plt.xlabel("time (s)",fontsize=20)
plt.ylabel("y(t),v(t)",fontsize=20)
plt.grid()
plt.legend(loc="best", fontsize=20)
plt.show()

The shapes of the curves should make perfect sense to you given what you know about the equations of motion for objects undergoing constant acceleration. If they don't, take a moment to think about it and convince yourself why they look like they do. If it's still not clear, review your basic kinematics from PHYS 141.

Now let's try varying the problem a bit.

Exploring the parameter space

Consider a ball starting from $y$=0 with an initial upward velocity of 46 m/s.

We can use Euler's method to calculate the velocity and position of the ball as a function of time. Let's set this up with an interact object so we can explore the dynamics for different initial conditions without writing lots of code.

To help us evaluate how well Euler's method works, let's compare the numerical result to an analytical one. The maximum height of the ball will be given by $y(t_{max})$ when $v(t_{max}) = (v_0 + at_{max}) = 0$. Solving for $t_{max}$ gives $t_{max} = -v_0/a$. We'll compare the values for $y_{max}$.


In [ ]:
def show_freefall(tmax=10.,dt=0.1,t0=0.,v0=46.,y0=0.):
    a = -9.81 #g in m/s^2

    #Set up the arrays based on the time parameters
    t = np.zeros(int(tmax/dt))
    v = np.zeros(int(tmax/dt))
    y = np.zeros(int(tmax/dt))

    #set up initial conditions from the parameters
    t[0] = t0
    v[0] = v0
    y[0] = y0

    #Apply Euler's method to solve
    for i in range(len(t)-1):
        t[i+1] = t[i] + dt
        v[i+1] = v[i] + dt * a
        y[i+1] = y[i] + dt * v[i]
    
    #Compare to the analytical result for ymax
    t_top = -v0/a  #time at the top of the trajectory (avoid clobbering the tmax parameter)
    ymax = y0 + v0*t_top + 0.5*a*t_top**2
    percentDiff = 100.*np.abs(y.max()-ymax)/ymax
    print("ymax percentDiff = %.2f"%percentDiff)

    #plot it
    plt.plot(t,y,"r-",label="y(t)")
    plt.plot(t,v,"b-",label="v(t)")
    plt.xlabel("time (s)",fontsize=20)
    plt.ylabel("y(t),v(t)",fontsize=20)
    plt.xlim(0,20.)
    plt.ylim(-100.,600.)
    plt.grid()
    plt.legend(loc="upper left", bbox_to_anchor=(1,1), fontsize=20)
    plt.show()

In [ ]:
v4 = interact(show_freefall,tmax=(1.,20.),dt=(0.01,1.,0.01),t0=(0.,20.),v0=(10.,100.),y0=(0.,10.0))

  • How does the percent difference in $y_{max}$ change as a function of the size of the timesteps?

  • For what value of the timestep does the percent difference just barely exceed 10%?

  • Set the timestep so that the percent difference is approximately 1%. For what initial velocity $v_0$ does the ball return to earth at $t$ = 15 s?


Euler-Cromer Method

Varying the timestep lets us improve the accuracy of our result, but to do an even better job, we can modify the method for calculating the new positions and velocities. Cromer's refinement of Euler's method is to calculate the new position using the just-calculated new velocity (the value at the end of the interval), creating what is known as the Euler-Cromer Method.

$$ v_{i+1} = v_i + a_i\Delta t $$

$$ y_{i+1} = y_i + v_{i+1}\Delta t $$

Did you catch the difference in the last term of the second line? Let's implement it.


In [ ]:
def euler_cromer_freefall():
    tmax = 10 # maximum time
    dt = 0.1 # time step
    a = -9.81 #g in m/s^2
    nsteps = int(tmax/dt)
    t = np.zeros(nsteps)
    v = np.zeros(nsteps)
    y = np.zeros(nsteps)
 
    # set initial conditions, redundant in this example
    t[0] = 0.
    v[0] = 0.

    for i in range(len(t)-1):
        t[i+1] = t[i] + dt
        v[i+1] = v[i] + dt * a
        y[i+1] = y[i] + dt * v[i+1] #use just calculated v value to compute y
        
    return t,v,y

In [ ]:
tec,vec,yec = euler_cromer_freefall()

In [ ]:
plt.plot(tec,yec,"r-",label="y(t)")
plt.plot(tec,vec,"b-",label="v(t)")
plt.xlabel("time (s)",fontsize=20)
plt.ylabel("y(t),v(t)",fontsize=20)
plt.grid()
plt.title("Euler-Cromer")
plt.legend(loc="best", fontsize=20)
plt.show()

How different is the result from Euler's method? We'll explore the variation between these methods in the accompanying exercises.

Midpoint Method

The midpoint method is even more accurate than the previous two: it advances the position using the average of the velocities at the start and end of the interval, which estimates the velocity at the midpoint of the time interval.

$$ v_{i+1} = v_i + a_i\Delta t $$

$$ y_{i+1} = y_i + \frac{1}{2}(v_{i}+v_{i+1})\Delta t $$
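We haven't implemented this variant above, so here is a sketch that mirrors the two free-fall functions (midpoint_freefall is our own name for it):


In [ ]:
def midpoint_freefall():
    tmax = 10 # maximum time
    dt = 0.1 # time step
    a = -9.81 # acceleration due to gravity in m/s^2
    nsteps = int(tmax/dt)
    t = np.zeros(nsteps)
    v = np.zeros(nsteps)
    y = np.zeros(nsteps)

    for i in range(len(t)-1):
        t[i+1] = t[i] + dt
        v[i+1] = v[i] + dt * a
        y[i+1] = y[i] + dt * 0.5*(v[i] + v[i+1]) #average of old and new velocity

    return t,v,y

tm,vm,ym = midpoint_freefall()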

These approaches would give an exact solution if the time interval $\Delta t$ we used were infinitesimal in size, because then we would be doing the exact integrals that determine the velocity and the position.

Another way to say this is that values we use in the Euler, Euler-Cromer and Midpoint method equations equal the instantaneous values in the limit of an infinitesimal time interval. But there's a problem with trying to do this: we'd have an infinite number of data points where we'd have to calculate $v$ and $y$!

So we want to use a time interval that is short enough to approximate the answer adequately, but long enough that we don't have an unreasonable number of data points.

So how short is short enough? That depends on how fast $v$ and $y$ are changing. A simple test that you can try when programming is to change $\Delta t$ (such as we did with the interact example above) and see if your solution changes at all.
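For example, here is a crude convergence check (a sketch; euler_final_y is a hypothetical helper that repeats the Euler update for a given step size and reports the final position):


In [ ]:
def euler_final_y(dt, tmax=10., v0=46., a=-9.81):
    # march the Euler update to tmax and return the final position
    y, v = 0., v0
    for i in range(int(tmax/dt)):
        y = y + dt*v  # Euler: position update uses the old velocity
        v = v + dt*a
    return y

# if halving dt changes the answer appreciably, dt is too big
print("dt=0.10: %.3f   dt=0.05: %.3f" % (euler_final_y(0.1), euler_final_y(0.05)))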

The Runge-Kutta Family of Methods

The Runge-Kutta methods are a family of numerical methods for solving differential equations and systems of differential equations, of which the Euler and Midpoint methods are special cases. The refinement of the approximation procedure used at each step determines the "order" of the method: Euler is an order 1 method, Midpoint is order 2, and the most commonly used Runge-Kutta method, RK4, is of order 4. The distinguishing characteristic of each order is the error per step associated with it.

RK4 takes the form

$$ y_{i+1} = y_i + \frac{\Delta t}{6} (k_1 + 2k_2 + 2k_3 + k_4) $$

where

$$k_1 = f(y_i, t_i)$$

$$k_2 = f\left(y_i + \frac{\Delta t}{2}k_1,\ t_i + \frac{\Delta t}{2}\right)$$

$$k_3 = f\left(y_i + \frac{\Delta t}{2}k_2,\ t_i + \frac{\Delta t}{2}\right)$$

$$k_4 = f\left(y_i + \Delta t\, k_3,\ t_i + \Delta t\right)$$

for an initial value problem specified by $\cfrac{dy}{dt} = f(y,t)$, with $y(t_0) = y_0$.

$y_{i+1}$ is the RK4 approximation of $y(t_{i+1})$, and the next value ($y_{i+1}$) is determined by the present value ($y_i$) plus the weighted average of four increments, where each increment is the product of the size of the interval, $\Delta t$, and an estimated slope specified by function $f$ on the right-hand side of the differential equation.

In averaging the four increments, greater weight is given to the increments at the midpoint. The weights are chosen such that if $f$ is independent of $y$, the problem is equivalent to a simple integral and RK4 reduces to Simpson's Rule.
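To make the formulas concrete, here is a minimal RK4 stepper (a sketch; rk4_step is our own helper, demonstrated on the free-fall system with $y$ = [position, velocity]):


In [ ]:
def rk4_step(f, y, t, dt):
    # one fourth-order Runge-Kutta step for dy/dt = f(y,t)
    k1 = f(y, t)
    k2 = f(y + 0.5*dt*k1, t + 0.5*dt)
    k3 = f(y + 0.5*dt*k2, t + 0.5*dt)
    k4 = f(y + dt*k3, t + dt)
    return y + (dt/6.)*(k1 + 2.*k2 + 2.*k3 + k4)

# free fall: y = [position, velocity], so f returns [velocity, acceleration]
f = lambda y, t: np.array([y[1], -9.81])
y = np.array([0., 46.])
dt = 0.1
for i in range(100):
    y = rk4_step(f, y, i*dt, dt)
print(y)  # [position, velocity] after 10 s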

Many pre-packaged ODE solvers use variations of RK4, often with adaptive techniques. MATLAB's ode45 is one example: it simultaneously uses fourth- and fifth-order RK formulas to make error estimates and adjust the time step accordingly.

What's inside scipy.integrate.odeint?

So after exploring all of these different methods, you might wonder which one is actually used in the built-in library that comes with scipy.

scipy's odeint wraps a Fortran 77 library called ODEPACK. The original code is from the early 1980s, but much of it was repackaged at the beginning of the 21st century. Its most basic algorithm is called LSODE (Livermore Solver for Ordinary Differential Equations). odeint uses the adaptive version of LSODE, called LSODA, which automatically switches between "stiff" and "non-stiff" integration routines depending on the characteristics of the solution, and does adaptive time-stepping to achieve a desired level of solution accuracy.
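We can peek at this adaptive machinery from Python: passing full_output=True to odeint returns an info dictionary alongside the solution (a sketch reusing deriv, th_init, and t from the pendulum example above):


In [ ]:
sol, info = odeint(deriv, th_init, t, full_output=True)
print(info['hu'][:5])    # internal step sizes actually used
print(info['mused'][:5]) # method used per step: 1 = non-stiff (Adams), 2 = stiff (BDF)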

Fortran???

It is not uncommon for modern software libraries to have ancient foundations. Writing "wrappers" for well-tested, tried and true functions is often more efficient than starting from scratch in a different language. Fortunately, you do not need to know any Fortran to be able to use odeint. The scipy developers did the hard work for us. We get the speed, power and reliability of the ODEPACK library with the ease of use of the Python programming interface. It's a win-win.


One last example - coupled differential equations

There are many examples of problems that involve coupled differential equations. One such example is the so-called Lotka-Volterra "Predator-Prey" model.

A predator-prey model describes the interactions of two species in an ecosystem. For this example, let’s consider lions and zebras. If we consider a group of lions and zebras that don’t interact with any species but each other, we can write a system of equations to govern their populations as follows:

$$ \frac{dZ}{dt} = \alpha Z - \beta ZL,\\ \frac{dL}{dt} = \epsilon\beta ZL - \gamma L, $$

where $L$ and $Z$ are the populations of lions and zebras, respectively, and the equations represent the rate of change of those populations given the rates of birth and death of each species.

The death rates are controlled by the rate of lions consuming zebras (for the zebras) and of lions starving to death from lack of (zebra) food (for the lions). The initial conditions for the given situation are the starting populations:

$$ L(0) = L_0,\\ Z(0) = Z_0, $$

and the parameters are

  • $\alpha$: growth rate of zebras in the absence of lions to prey on them (units of [1/years])
  • $\gamma$: death rate of lions in the absence of zebras to eat (units of [1/years])
  • $\beta$: death rate of zebras per encounter with a lion (units of [1/lions/years])
  • $\epsilon$: efficiency of producing lions from consumed zebras (units of [lions/zebras])

In the absence of lions, $L$ = 0, the zebra population grows exponentially,

$$ \frac{dZ}{dt} = \alpha Z\\ Z(t) = Z_0e^{\alpha t} $$

Likewise, in the absence of zebras, the lion population decays exponentially because there is nothing to eat!

$$ \frac{dL}{dt} = -\gamma L\\ L(t) = L_0e^{-\gamma t} $$
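Before tackling the full system, we can sanity-check the no-lions limit numerically (a standalone sketch, integrating $dZ/dt = \alpha Z$ with odeint and comparing to the exponential):


In [ ]:
# the no-lions limit should reproduce Z0*exp(alpha*t)
alpha = 0.7
tchk = np.arange(0., 5.1, 0.1)
Zchk = odeint(lambda Z, t: alpha*Z, 200., tchk)
print("numerical: %.2f   analytic: %.2f" % (Zchk[-1,0], 200.*np.exp(alpha*tchk[-1])))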

When our ecosystem starts off with a population of both lions and zebras, the situation is more complicated. We have a set of coupled differential equations that need to be solved numerically.

Given the following parameters, we could solve this system of equations to determine the maximum and minimum lion and zebra populations over the course of 50 years if the initial populations are $Z_0$ = 200 and $L_0$ = 50.

  • $\alpha$ = 0.7 per year
  • $\gamma$ = 0.5 per year
  • $\beta$ = 0.007 per lion per year
  • $\epsilon$ = 0.1 lions per zebra

In [ ]:
# we want to solve the system dy/dt = f(y, t)
def derivPP(y,t):
    
    alpha = 0.7
    gamma = 0.5
    beta = 0.007
    eps = 0.1
    
    Zi = y[0]
    Li = y[1]
    
    dZdt = alpha*Zi - beta*Zi*Li
    dLdt = eps*beta*Zi*Li - gamma*Li
    
    return np.array([ dZdt, dLdt ])

In [ ]:
#50 year span
t = np.arange(0.,50.+0.1,0.1)

#initial conditions
Z0 = 200
L0 = 50

#pass the function deriv to the solver with initial values and integration variable
result = odeint(derivPP,np.array([Z0,L0]),t)

Z = result[:,0]
L = result[:,1]

plt.plot(t,Z,label="Zebra population")
plt.plot(t,L,label="Lion population")
plt.xlabel("time (years)")
plt.ylabel("Population")
plt.legend()
plt.show()

The cyclic pattern in their populations reflects the competition between lions and zebras. When the lion population declines, the zebra population explodes; the abundant food then lets the remaining lions multiply until their numbers reach a critical mass and the zebra population crashes again.

Another way we can explore how the populations influence each other is with a parametric phase plot. Each cycle forms a closed loop. By varying the initial conditions, one can explore the interplay between the populations.


In [ ]:
res1 = odeint(derivPP,np.array([100,50]),t)
res2 = odeint(derivPP,np.array([300,50]),t)
plt.plot(res1[:,0],res1[:,1],label="Z$_0$ = 100, L$_0$ = 50")
plt.plot(Z,L,label="Z$_0$ = 200, L$_0$ = 50")
plt.plot(res2[:,0],res2[:,1],label="Z$_0$ = 300, L$_0$ = 50")
plt.xlabel("Zebras",fontsize=20)
plt.ylabel("Lions",fontsize=20)
plt.legend()
plt.show()

The pathlength of the curves is correlated with the length of a cycle: more zebras mean shorter cycles, fewer zebras mean longer cycles. How might these curves change if we varied $\alpha$, $\beta$, $\gamma$, and $\epsilon$? And what would the results mean? There are so many parameters and variations to explore, you could spend an entire career on it. And some people do just that!
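One way to begin exploring is with another interact widget (a sketch following the pendulum pattern above; derivPP2 is a hypothetical variant of derivPP that takes the parameters as arguments):


In [ ]:
def derivPP2(y,t,alpha,beta,gamma,eps):
    Z, L = y
    dZdt = alpha*Z - beta*Z*L
    dLdt = eps*beta*Z*L - gamma*L
    return np.array([ dZdt, dLdt ])

def predator_prey(alpha=0.7,beta=0.007,gamma=0.5,eps=0.1):
    t = np.arange(0.,50.+0.1,0.1)
    res = odeint(derivPP2,np.array([200.,50.]),t,args=(alpha,beta,gamma,eps))
    plt.plot(res[:,0],res[:,1])
    plt.xlabel("Zebras",fontsize=20)
    plt.ylabel("Lions",fontsize=20)
    plt.grid()
    plt.show()

v5 = interact(predator_prey,alpha=(0.1,1.0),beta=(0.001,0.02,0.001),gamma=(0.1,1.0),eps=(0.01,0.5,0.01))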

A fun variation on the Lotka-Volterra model is the Zombie Apocalypse model. That one is a system of three equations and five parameters that accounts for both death and the conversion of people to zombies in the population balance.


All content is under a modified MIT License, and can be freely used and adapted. See the full license text here.