*The phugoid model of glider flight* has been such a fun problem to showcase the power of numerical solution of differential equations that we thought you'd enjoy a bonus notebook. The previous lessons were:

- Phugoid motion — lays the groundwork for our fun problem, with some context, a little history and a description of the physics of phugoids: curves representing the trajectory of a glider exchanging potential and kinetic energy, with no drag.
- Phugoid oscillation — develops the simple harmonic motion of an aircraft experiencing a small perturbation from the horizontal trajectory: our opportunity to introduce Euler's method, and study its convergence via an exact solution.
- Full phugoid motion — the full model takes into account the force of drag and results in a system of two nonlinear equations. We obtain the trajectories using Euler's method in vectorized form, introduce grid-convergence analysis and finish with the paper-airplane challenge!

That is a fantastic foundation for numerical methods. It's a good time to complement it with some theory: the first screencast of the course uses Taylor series to show that *Euler's method is a first-order method*, and we also show you graphical interpretations. Many problems require a more accurate method, though: second order or higher. Among the most popular higher-order methods that we can mention are the *Runge-Kutta methods*, developed around 1900: more than 100 years after Euler published his book containing the method now named after him!

In this bonus notebook, we introduce the *modified* Euler method, which achieves second order.

```python
from IPython.display import YouTubeVideo
YouTubeVideo('6i6qhqDCViA')
```

The notebook on phugoid oscillation (lesson 2) included a study of the accuracy obtained with Euler's method, using the exact solution for the simple harmonic motion. We made a *convergence plot* and saw that as $\Delta t$ gets smaller, the error also gets smaller.

We could have drawn a line with a slope equal to 1 on that log-log plot, and you would have seen that it was parallel to the convergence line. A slope equal to 1 on a log-log convergence plot is an indication that we have a first-order method: the error scales as ${\mathcal O}(\Delta t)$.

In lesson 3, using the full phugoid model (which is nonlinear and does not have an exact solution), we did a *grid-convergence study* with three different grids, and obtained the *observed* order of convergence—it was very close to 1, indicating a slope of 1 on a log-log plot.

Another way to look at an ${\mathcal O}(\Delta t)$ method is to say that the error scales *linearly* with the step size, or that they are proportional:

$$e \propto \Delta t,$$

where $e$ stands for the error. To get more accuracy, we could use a *second-order* method, in which the error is ${\mathcal O}(\Delta t^2)$. In general, we say that a method is of order $p$ when the error is proportional to $(\Delta t)^p$.
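To see this scaling concretely, here is a quick stand-alone check: a sketch using the toy problem $u'=u$, $u(0)=1$ (not part of the glider model), where the exact solution at $t=1$ is $e$. The Euler error roughly halves every time $\Delta t$ is halved.

```python
import numpy

def euler_solve(dt, T=1.0):
    """Integrate u' = u with u(0) = 1 using Euler's method; exact answer is e."""
    N = int(T/dt)
    u = 1.0
    for n in range(N):
        u = u + dt*u
    return u

exact = numpy.exp(1.0)
for dt in [0.1, 0.05, 0.025]:
    error = abs(euler_solve(dt) - exact)
    # the error roughly halves each time dt is halved: first-order behavior
    print('dt = {:.3f}, error = {:.4f}'.format(dt, error))
```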

In the screencast titled "Euler's method is a first-order method," we used a graphical interpretation to get an idea for improving it: by estimating an intermediate point, like the **midpoint**, we can get a better approximation of the area under the curve of $u^\prime$. The scheme has two steps and is written as:

$$u_{n+1/2} = u_n + \frac{\Delta t}{2} f(u_n), \qquad (1)$$

$$u_{n+1} = u_n + \Delta t \, f(u_{n+1/2}). \qquad (2)$$

This method is known as the *explicit midpoint method* or the *modified Euler method*, and it is a second-order method. Notice that we had to apply the right-hand side, $f(u)$, twice. This idea can be extended: we could imagine estimating additional points between $u_{n}$ and $u_{n+1}$ and evaluating $f(u)$ at the intermediate points to get higher accuracy—that's the idea behind Runge-Kutta methods.

In the modified Euler method, we improve the accuracy over Euler's method by evaluating the right-hand side of the differential equation at an intermediate point: the midpoint. The same idea can be applied again, and the function $f(u)$ can be evaluated at more intermediate points, improving the accuracy even more. This is the basis of the famous *Runge-Kutta (RK) methods*, going back to Carl Runge and Martin Kutta. The modified Euler method corresponds to *second-order* Runge-Kutta.

Here's a bit of historical coincidence that will blow your mind: Carl Runge's daughter Iris—an accomplished applied mathematician in her own right—worked assiduously over the summer of 1909 to translate Lanchester's *"Aerodonetics."* She also reproduced his graphical method to draw the phugoid curves (Tobies, 2012).

Let's compute the motion of a glider under the full phugoid model using the second-order Runge-Kutta method. We'll build on the *paper airplane challenge* of lesson 3 now, and look for the horizontal distance that the plane travels until the moment it touches the ground.

As usual, let's start by importing the libraries and modules that we need, and setting up the model parameters. We also set some default plotting formats using Matplotlib's `rcParams`.

```python
from math import sin, cos, log
import numpy
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
```

*What do you think will happen if you make $L/D$ higher?*

```python
# model parameters:
g = 9.8       # gravity in m s^{-2}
v_t = 4.9     # trim velocity in m s^{-1}
C_D = 1/5.0   # drag coefficient --- or D/L if C_L=1
C_L = 1.0     # for convenience, use C_L = 1

### set initial conditions ###
v0 = 6.5      # start at the trim velocity (or add a delta)
theta0 = -0.1 # initial angle of trajectory
x0 = 0.0      # horizontal position is arbitrary
y0 = 2.0      # initial altitude
```

Among the initial parameters that we suggest for your first experiment, we start with a velocity a little higher than the trim velocity, launch the paper airplane at a small negative initial angle, and take the initial height to be 2 meters: all reasonable choices.

Now, we can define a few functions to carry out the computation:

```python
def f(u):
    """Returns the right-hand side of the phugoid system of equations.

    Parameters
    ----------
    u : array of float
        array containing the solution at time n.

    Returns
    -------
    dudt : array of float
        array containing the RHS given u.
    """
    v = u[0]
    theta = u[1]
    x = u[2]
    y = u[3]
    return numpy.array([-g*sin(theta) - C_D/C_L*g/v_t**2*v**2,
                        -g*cos(theta)/v + g/v_t**2*v,
                        v*cos(theta),
                        v*sin(theta)])

def euler_step(u, f, dt):
    """Returns the solution at the next time-step using Euler's method.

    Parameters
    ----------
    u : array of float
        solution at the previous time-step.
    f : function
        function to compute the right hand-side of the system of equations.
    dt : float
        time-increment.

    Returns
    -------
    u_n_plus_1 : array of float
        approximate solution at the next time step.
    """
    return u + dt * f(u)

def get_diffgrid(u_current, u_fine, dt):
    """Returns the difference between one grid and the fine one using the L-1 norm.

    Parameters
    ----------
    u_current : array of float
        solution on the current grid.
    u_fine : array of float
        solution on the fine grid.
    dt : float
        time-increment on the current grid.

    Returns
    -------
    diffgrid : float
        difference computed in the L-1 norm.
    """
    N_current = len(u_current[:,0])
    N_fine = len(u_fine[:,0])
    grid_size_ratio = int(numpy.ceil(N_fine/float(N_current)))
    diffgrid = dt * numpy.sum(numpy.abs(
        u_current[:,2] - u_fine[::grid_size_ratio,2]))
    return diffgrid
```

Let's write a function called `rk2_step()` that computes the next time step using the *modified Euler* method of equations $(1)$ and $(2)$ above, otherwise known as 2nd-order Runge-Kutta or RK2. This function will be called over and over again within the time loop.

```python
def rk2_step(u, f, dt):
    """Returns the solution at the next time-step using 2nd-order Runge-Kutta.

    Parameters
    ----------
    u : array of float
        solution at the previous time-step.
    f : function
        function to compute the right hand-side of the system of equations.
    dt : float
        time-increment.

    Returns
    -------
    u_n_plus_1 : array of float
        solution at the next time step.
    """
    u_star = u + 0.5*dt*f(u)
    return u + dt*f(u_star)
```

Now, let's use *both* Euler's method and 2nd-order Runge-Kutta to get a solution, so we can compare the two.

```python
# set time-increment and discretize the time
T = 15.0          # final time
dt = 0.01         # set time-increment
N = int(T/dt) + 1 # number of time-steps

# initialize the arrays containing the solution for each time-step
u_euler = numpy.empty((N, 4))
u_rk2 = numpy.empty((N, 4))

# set initial conditions
u_euler[0] = numpy.array([v0, theta0, x0, y0])
u_rk2[0] = numpy.array([v0, theta0, x0, y0])

# time loop, calling euler_step() and rk2_step()
for n in range(N-1):
    u_euler[n+1] = euler_step(u_euler[n], f, dt)
    u_rk2[n+1] = rk2_step(u_rk2[n], f, dt)
```

```python
# get the glider's position in time
x_euler = u_euler[:,2]
y_euler = u_euler[:,3]
x_rk2 = u_rk2[:,2]
y_rk2 = u_rk2[:,3]
```

Negative values of $y$ don't have any physical meaning: the glider would have hit the ground by then! To find out if there are any negative $y$ values we can use the handy function `numpy.where`. This function returns the **indices** of the elements in an array that match a given condition. For example, `numpy.where(y_euler<0)[0]` gives an array of the indices `i` where `y_euler[i]<0`. If no elements of the array match the condition, the array of indices comes out empty.

From the physical problem, we know that once there is one negative value, the glider has hit the ground and all the remaining time-steps are unphysical. Therefore, we are interested in finding the *first* index where the condition applies, given by `numpy.where(y_euler<0)[0][0]`—do read the documentation of the function if you need to!
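As a quick illustration of `numpy.where`, on a small made-up array (not the glider data):

```python
import numpy

y = numpy.array([2.0, 1.2, 0.3, -0.1, -0.5])

idx = numpy.where(y < 0)[0]
print(idx)     # indices of the negative entries: [3 4]
print(idx[0])  # first index where y drops below zero: 3
```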

```python
# get the index of element of y where altitude becomes negative
idx_negative_euler = numpy.where(y_euler<0.0)[0]
if len(idx_negative_euler)==0:
    idx_ground_euler = N-1
    print('Euler integration has not touched ground yet!')
else:
    idx_ground_euler = idx_negative_euler[0]

idx_negative_rk2 = numpy.where(y_rk2<0.0)[0]
if len(idx_negative_rk2)==0:
    idx_ground_rk2 = N-1
    print('Runge-Kutta integration has not touched ground yet!')
else:
    idx_ground_rk2 = idx_negative_rk2[0]
```

An easy way to compare the two solutions is the function `numpy.allclose`. This function compares each element of two arrays and returns `True` if each comparison is within some relative tolerance. Here, we use the default tolerance: $10^{-5}$.
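A tiny example of the default tolerance at work, with made-up arrays: differences below the relative tolerance pass, larger ones don't.

```python
import numpy

a = numpy.array([1.0, 2.0])
b = a * (1 + 1e-6)  # relative difference of 1e-6, below the default rtol of 1e-5
c = a * (1 + 1e-3)  # relative difference of 1e-3, well above it

print(numpy.allclose(a, b))  # True
print(numpy.allclose(a, c))  # False
```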

```python
# check to see if the paths match
print('Are the x-values close? {}'.format(numpy.allclose(x_euler, x_rk2)))
print('Are the y-values close? {}'.format(numpy.allclose(y_euler, y_rk2)))
```

```
In [11]:
```# plot the glider path
plt.figure(figsize=(10,6))
plt.subplot(121)
plt.grid(True)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.plot(x_euler[:idx_ground_euler], y_euler[:idx_ground_euler], 'k-', label='Euler')
plt.plot(x_rk2[:idx_ground_rk2], y_rk2[:idx_ground_rk2], 'r--', label='RK2')
plt.title('distance traveled: {:.3f}'.format(x_rk2[idx_ground_rk2-1]))
plt.legend();
# Let's take a closer look!
plt.subplot(122)
plt.grid(True)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.plot(x_euler, y_euler, 'k-', label='Euler')
plt.plot(x_rk2, y_rk2, 'r--', label='RK2')
plt.xlim(0,5)
plt.ylim(1.8,2.5);


Just like in Lesson 3, we want to do a grid-convergence study with RK2, to see if we indeed observe the expected rate of convergence. It is always an important step in a numerical solution to investigate whether the method is behaving the way we expect it to: this needs to be confirmed experimentally for every new problem we solve and for every new method we apply!

In the code below, a `for`-loop computes the solution on different time grids, with the coarsest and finest grids differing by 100x. We can use the difference between solutions to investigate convergence, as before.

```python
# use a for-loop to compute the solution on different grids
dt_values = numpy.array([0.1, 0.05, 0.01, 0.005, 0.001])
u_values = numpy.empty_like(dt_values, dtype=numpy.ndarray)

for i, dt in enumerate(dt_values):
    N = int(T/dt) + 1 # number of time-steps
    # discretize the time t
    t = numpy.linspace(0.0, T, N)
    # initialize the array containing the solution for each time-step
    u = numpy.empty((N, 4))
    u[0] = numpy.array([v0, theta0, x0, y0])
    # time loop
    for n in range(N-1):
        u[n+1] = rk2_step(u[n], f, dt)
    # store the value of u related to one grid
    u_values[i] = u
```

```python
# compute diffgrid
diffgrid = numpy.empty_like(dt_values)
for i, dt in enumerate(dt_values):
    diffgrid[i] = get_diffgrid(u_values[i], u_values[-1], dt)
```

And now we plot!

```python
# plot using the matplotlib function loglog()
plt.figure(figsize=(6,6))
plt.grid(True)
plt.xlabel(r'$\Delta t$', fontsize=18)
plt.ylabel(r'$L_1$-norm of the grid differences', fontsize=18)
plt.xlim(1e-4,1)
plt.ylim(1e-4,1)
plt.axis('equal')
plt.loglog(dt_values[:-1], diffgrid[:-1], color='k', ls='--', lw=2, marker='o');
```


This is looking good! The difference relative to our fine-grid solution is decreasing with the mesh size at a faster rate than in Lesson 3, but *how much faster?* When we computed the observed order of convergence with Euler's method, we got a value close to 1—it's a first-order method. Can you guess what we'll get now with RK2?

To compute the observed order of convergence, we use three grid resolutions that are refined at a constant rate, in this case $r=2$.
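In terms of the grid differences returned by `get_diffgrid()`, with solutions $u_h$, $u_{rh}$ and $u_{r^2h}$ computed using time steps $h$, $rh$ and $r^2h$, the observed order of convergence is estimated as

$$\alpha \approx \frac{\log \left( \dfrac{\left\| u_{r^2h} - u_{rh} \right\|_{L_1}}{\left\| u_{rh} - u_{h} \right\|_{L_1}} \right)}{\log r},$$

which is what the code cell below computes with $r=2$ and $h=0.001$.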

```python
# check convergence rate
r = 2
h = 0.001
dt_values = numpy.array([h, r*h, r**2*h])
u_values = numpy.empty_like(dt_values, dtype=numpy.ndarray)

for i, dt in enumerate(dt_values):
    N = int(T/dt) + 1 # number of time-steps
    # discretize the time t
    t = numpy.linspace(0.0, T, N)
    # initialize the array containing the solution for each time-step
    u = numpy.empty((N, 4))
    u[0] = numpy.array([v0, theta0, x0, y0])
    # time loop, calling rk2_step()
    for n in range(N-1):
        u[n+1] = rk2_step(u[n], f, dt)
    # store the value of u related to one grid
    u_values[i] = u

# calculate the order of convergence
alpha = (log(get_diffgrid(u_values[2], u_values[1], dt_values[2]))
         - log(get_diffgrid(u_values[1], u_values[0], dt_values[1]))) / log(r)

print('The order of convergence is alpha = {:.3f}'.format(alpha))
```


You may be curious whether the extra accuracy of RK2 is worth the extra cost of evaluating $f(u)$ twice per step. One way to find out is to *time* the calculation with Python, and compare the runtimes.
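Here is one way to do that with the standard `time` module, sketched on a stand-alone toy problem ($u'=-u$) so the snippet runs on its own; the step functions mirror `euler_step()` and `rk2_step()` above. Expect RK2 to take roughly twice as long per step, since it evaluates $f(u)$ twice.

```python
import time
import numpy

def f(u):
    # toy right-hand side: u' = -u
    return -u

def euler_step(u, f, dt):
    return u + dt*f(u)

def rk2_step(u, f, dt):
    u_star = u + 0.5*dt*f(u)
    return u + dt*f(u_star)

def run(step, dt=0.01, T=15.0):
    # integrate from u(0) = 1 to t = T with the given step function
    N = int(T/dt) + 1
    u = numpy.array([1.0])
    for n in range(N-1):
        u = step(u, f, dt)
    return u

for step in (euler_step, rk2_step):
    start = time.perf_counter()
    run(step)
    elapsed = time.perf_counter() - start
    print('{}: {:.4f} s'.format(step.__name__, elapsed))
```

On this toy problem the timing difference is small, but for an expensive right-hand side the second evaluation in RK2 dominates the cost, and you can weigh that against the gain in accuracy.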

The screencast *"Euler's method is a first-order method"* motivated graphically an idea to get increased accuracy: using intermediate points between $u_{n}$ and $u_{n+1}$ and evaluating the right-hand side of the differential equation at those intermediate points. The idea is to somehow get a better approximation using more data from the function $f(u)$.

Another way to bring more information about $f(u)$ into the numerical solution is to look at time data $t\lt t_{n}$. For example, we can involve the known solution at $u_{n-1}$, in addition to $u_{n}$, in the calculation of the solution $u_{n+1}$. Schemes that use this idea are called *multi-step methods*.

A classical multi-step method achieves second order by applying a *centered difference* approximation of the derivative $u'$:

$$u'(t_n) \approx \frac{u_{n+1} - u_{n-1}}{2\Delta t}.$$

Isolate the future value of the solution $u_{n+1}$ and apply the differential equation $u'=f(u)$ to get the following formula for this method:

$$ u_{n+1} = u_{n-1} + 2\Delta t \, f(u_n).$$

This scheme is known as the **leapfrog method**. Notice that it uses the right-hand side of the differential equation, $f(u)$, evaluated at the *midpoint* between $u_{n-1}$ and $u_{n+1}$, where the time interval between these two solutions is $2\Delta t$. Why is it called "leapfrog"? If you imagine for a moment all of the *even* indices $n$ of the numerical solution, you notice that these solution values are computed using the slope estimated from *odd* values of $n$, and vice versa.

Let's define a function that computes the numerical solution using the leapfrog method:

```python
def leapfrog_step(unm1, u, f, dt):
    """Returns the solution at time-step n+1 using the leapfrog method.

    Parameters
    ----------
    unm1 : array of float
        solution at time-step n-1.
    u : array of float
        solution at time-step n.
    f : function
        function to compute the right hand-side of the system of equations.
    dt : float
        time-increment.

    Returns
    -------
    u_n_plus_1 : array of float
        solution at time-step n+1.
    """
    return unm1 + 2.0*dt*f(u)
```

But wait ... what will we do at the *initial* time step, when we don't have information for $u_{n-1}$? This is an issue with all multi-step methods: we say that they are *not self-starting*. In the first time step, we need to use another method to get the first "kick"—either Euler's method or 2nd-order Runge-Kutta could do; let's use RK2, since it's also second order.

For this calculation, we are going to reenter the model parameters in the code cell below, so that later on we can experiment here using the leapfrog method and different starting values. At the end of this notebook, we'll give you some other model parameters to try that will create a very interesting situation!

```python
# model parameters:
g = 9.8       # gravity in m s^{-2}
v_t = 4.9     # trim velocity in m s^{-1}
C_D = 1/5.0   # drag coefficient --- or D/L if C_L=1
C_L = 1.0     # for convenience, use C_L = 1

### set initial conditions ###
v0 = 6.5      # start at the trim velocity (or add a delta)
theta0 = -0.1 # initial angle of trajectory
x0 = 0.0      # horizontal position is arbitrary
y0 = 2.0      # initial altitude

# set time-increment and discretize the time
T = 15.0          # final time
dt = 0.01         # set time-increment
N = int(T/dt) + 1 # number of time-steps

# initialize the array containing the solution for each time-step
u_leapfrog = numpy.empty((N, 4))
# set initial conditions
u_leapfrog[0] = numpy.array([v0, theta0, x0, y0])
# first step using RK2
u_leapfrog[1] = rk2_step(u_leapfrog[0], f, dt)
```

```python
# use a for loop to call the function leapfrog_step()
for n in range(1, N-1):
    u_leapfrog[n+1] = leapfrog_step(u_leapfrog[n-1], u_leapfrog[n], f, dt)
```

```python
# get the glider position in time
x_leapfrog = u_leapfrog[:,2]
y_leapfrog = u_leapfrog[:,3]

# get the index of element of y where altitude becomes negative
idx_negative_leapfrog = numpy.where(y_leapfrog<0.0)[0]
if len(idx_negative_leapfrog)==0:
    idx_ground_leapfrog = N-1
    print('The glider has not reached the ground yet!')
else:
    idx_ground_leapfrog = idx_negative_leapfrog[0]
```

```python
# plot the glider path
plt.figure(figsize=(11,8))
plt.subplot(121)
plt.grid(True)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.plot(x_leapfrog[:idx_ground_leapfrog], y_leapfrog[:idx_ground_leapfrog], color='k', ls='-', lw=2)
plt.title('distance traveled: {:.3f}'.format(x_leapfrog[idx_ground_leapfrog-1]), fontsize=18);

# Let's take a closer look!
plt.subplot(122)
plt.grid(True)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.plot(x_leapfrog[:idx_ground_leapfrog], y_leapfrog[:idx_ground_leapfrog], color='k', ls=':', lw=2)
plt.plot(x_rk2, y_rk2, 'r--', label='RK2')
plt.xlim(0,5)
plt.ylim(1.8,2.5);
```


```python
# check convergence rate
r = 2
h = 0.001
dt_values = numpy.array([h, r*h, r**2*h])
u_values = numpy.empty_like(dt_values, dtype=numpy.ndarray)

for i, dt in enumerate(dt_values):
    N = int(T/dt) + 1 # number of time-steps
    # discretize the time t
    t = numpy.linspace(0.0, T, N)
    # initialize the array containing the solution for each time-step
    u = numpy.empty((N, 4))
    u[0] = numpy.array([v0, theta0, x0, y0])
    # time loop: first step with RK2, then leapfrog
    u[1] = rk2_step(u[0], f, dt)
    for n in range(1, N-1):
        u[n+1] = leapfrog_step(u[n-1], u[n], f, dt)
    # store the value of u related to one grid
    u_values[i] = u

# calculate the order of convergence
alpha = (log(get_diffgrid(u_values[2], u_values[1], dt_values[2]))
         - log(get_diffgrid(u_values[1], u_values[0], dt_values[1]))) / log(r)

print('The order of convergence is alpha = {:.3f}'.format(alpha))
```


*The leapfrog method is a second-order method*. Good job!

Go back to the cell that re-enters the model parameters, just above the leapfrog-method time loop, and change the following: the initial height `y0` to 25, and the final time `T` to 36. Now re-run the leapfrog calculation and the two code cells below that, which extract the glider's position and plot it.

*What is going on?*

```python
# load the notebook's custom style sheet
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, "r").read())
```