In [1]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from ipywidgets import interact, fixed
Euler's method is the simplest numerical approach for solving a first-order ODE. Given the differential equation

$$ \frac{dy}{dx} = f(y(x), x) $$

with the initial condition

$$ y(x_0) = y_0, $$

Euler's method performs updates using the equations

$$ y_{n+1} = y_n + h f(y_n, x_n), \qquad h = x_{n+1} - x_n. $$
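As a quick worked example (illustrative values only: $f(y, x) = x + 2y$, the ODE solved later in this notebook, with $y_0 = 0$, $x_0 = 0$, and a deliberately coarse step $h = 0.5$), the first two Euler updates are

$$ y_1 = y_0 + h f(y_0, x_0) = 0 + 0.5\,(0 + 2 \cdot 0) = 0 $$

$$ y_2 = y_1 + h f(y_1, x_1) = 0 + 0.5\,(0.5 + 2 \cdot 0) = 0.25, $$

which falls well short of the exact value $y(1) \approx 1.10$; smaller steps shrink this error.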
Write a function solve_euler that implements the Euler method for a 1d ODE and follows the specification described in the docstring:
In [2]:
def solve_euler(derivs, y0, x):
    """Solve a 1d ODE using Euler's method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature derivs(y, x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i]).
    """
    x = np.asarray(x, dtype=float)   # accept lists/tuples and avoid integer truncation
    h = x[1] - x[0]                  # assumes equally spaced points
    y = np.zeros_like(x)
    y[0] = y0
    for i in range(len(x) - 1):
        y[i+1] = y[i] + h*derivs(y[i], x[i])   # Euler update
    return y
In [3]:
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
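For an additional (optional) usage example beyond the constant-derivative test above, solve_euler can be run on dy/dx = y with y(0) = 1, whose exact solution is e^x; the names x_test and y_test below are illustrative only:

x_test = np.linspace(0, 1, 101)                     # 100 Euler steps on [0, 1]
y_test = solve_euler(lambda y, x: y, 1.0, x_test)
print(abs(y_test[-1] - np.e))                       # small but nonzero error at x = 1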
The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation

$$ y_{n+1} = y_n + h f\left(y_n + \frac{h}{2} f(y_n, x_n),\; x_n + \frac{h}{2}\right). $$
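For comparison with the Euler example above (same illustrative setup: $f(y, x) = x + 2y$, $y_0 = 0$, $x_0 = 0$, $h = 0.5$), a single midpoint update evaluates the derivative halfway across the step:

$$ y_1 = y_0 + h\, f\!\left(y_0 + \tfrac{h}{2} f(y_0, x_0),\, x_0 + \tfrac{h}{2}\right) = 0 + 0.5\, f(0, 0.25) = 0.5 \cdot 0.25 = 0.125. $$

Continuing for a second step gives $y_2 = 0.8125$, noticeably closer to the exact $y(1) \approx 1.10$ than Euler's $0.25$ with the same step size.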
Write a function solve_midpoint that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:
In [4]:
def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the midpoint method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature derivs(y, x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i]).
    """
    x = np.asarray(x, dtype=float)   # accept lists/tuples and avoid integer truncation
    h = x[1] - x[0]                  # assumes equally spaced points
    y = np.zeros_like(x)
    y[0] = y0
    for i in range(len(x) - 1):
        # Evaluate the derivative at the midpoint of the step.
        y[i+1] = y[i] + h*derivs(y[i] + (h/2)*derivs(y[i], x[i]), x[i] + h/2)
    return y
In [5]:
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
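With both solvers defined, a short (optional) experiment illustrates the accuracy claim above: when the step size is halved, Euler's error roughly halves while the midpoint error drops by roughly a factor of four. This sketch again uses dy/dx = y with y(0) = 1 and exact value e at x = 1; the helper name errors_at_one is hypothetical:

def errors_at_one(n_points):
    # Error at x = 1 for Euler and midpoint on dy/dx = y, y(0) = 1.
    xs = np.linspace(0, 1, n_points)
    err_euler = abs(solve_euler(lambda y, x: y, 1.0, xs)[-1] - np.e)
    err_mid = abs(solve_midpoint(lambda y, x: y, 1.0, xs)[-1] - np.e)
    return err_euler, err_mid

for n in (11, 21, 41):
    print(n, errors_at_one(n))   # Euler error ~halves, midpoint error ~quarters per halving of h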
You are now going to solve the following differential equation:

$$ \frac{dy}{dx} = x + 2y, $$

which has the analytical solution

$$ y(x) = 0.25 e^{2x} - 0.5 x - 0.25. $$

First, write a solve_exact function that computes the exact solution and follows the specification described in the docstring:
In [6]:
def solve_exact(x):
    """Compute the exact solution to dy/dx = x + 2y with y(0) = 0.

    Parameters
    ----------
    x : np.ndarray
        Array of x values to compute the solution at.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i]).
    """
    x = np.asarray(x, dtype=float)
    # The formula is vectorized, so no explicit loop over x is needed.
    return 0.25*np.exp(2*x) - 0.5*x - 0.25
In [7]:
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
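Before combining everything in the plot below, note how scipy.integrate.odeint is called for this problem: it takes a derivative function with the signature derivs(y, x), the initial value, and the array of x values, and for a scalar ODE it returns an array of shape (len(x), 1) that must be flattened before comparing with the 1d arrays from the other solvers. A minimal sketch (x_demo and y_demo are illustrative names):

x_demo = np.linspace(0, 1, 10)
y_demo = odeint(lambda y, x: x + 2*y, 0, x_demo)      # shape (10, 1) for a scalar ODE
print(np.allclose(y_demo.ravel(), solve_exact(x_demo), atol=1e-6))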
In the following cell you are going to solve the above ODE using four different algorithms: Euler's method, the midpoint method, odeint, and the exact solution. Here are the details:
- Define a derivs function for the above differential equation.
- Use the solve_euler, solve_midpoint, odeint and solve_exact functions to compute the solutions using the 4 approaches.
- Visualize the solutions on a single figure with two subplots: the solutions on top and the errors relative to the exact solution below.
Your visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness.
While your final plot will use $N=10$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.
In [51]:
# Inputs assumed from the problem statement: N=10 points on [0, 1],
# initial condition y(0) = 0, and derivs(y, x) = x + 2y.
def derivs(y, x):
    return x + 2*y

N = 10
x = np.linspace(0, 1, N)
y1 = solve_euler(derivs, 0, x)
y2 = solve_midpoint(derivs, 0, x)
y3 = solve_exact(x)
y4 = odeint(derivs, 0, x)
newy4 = y4.ravel()   # odeint returns a (len(x), 1) array; flatten it for plotting and errors

f, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 8))
ax1.plot(x, y1, label='Euler', color='red')
ax1.plot(x, y2, label='Midpoint', color='blue')
ax1.plot(x, y3, label='Exact', color='black')
ax1.plot(x, newy4, label='odeint', color='green')
ax1.set_title('Solutions of dy/dx = x + 2y by Four Methods')
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax2.plot(x, abs(y1 - y3), label='Euler Difference', color='red')
ax2.plot(x, abs(y2 - y3), label='Midpoint Difference', color='blue')
ax2.plot(x, abs(newy4 - y3), label='odeint Difference', color='green')
ax2.set_title('Error for Each Method')
ax2.set_ylim(-0.1, 1)
ax2.set_xlabel('x')
ax2.set_ylabel('absolute error')
plt.tight_layout()
ax1.legend()
ax2.legend();
I worked with Hunter, Jessica, and Brett.
In [10]:
assert True # leave this for grading the plots