In [1]:
```
from scipy import *
from numpy import *
from matplotlib.pyplot import *
from scipy.linalg import *
from numpy.linalg import *
from scipy.integrate import solve_ivp
import FNC
```
In [2]:
```
# This (optional) block is for improving the display of plots.
from IPython.display import set_matplotlib_formats
set_matplotlib_formats("svg","pdf")
rcParams["figure.figsize"] = [7,4]
rcParams["lines.linewidth"] = 2
rcParams["lines.markersize"] = 4
rcParams['animation.html'] = "jshtml" # or try "html5"
```
In [3]:
```
f = lambda t,u: sin((t+u)**2)
tspan = [0.0,4.0]
u0 = [-1.0]
```

The `solve_ivp` function from the `scipy.integrate` package offers solvers for a variety of initial-value problems. Note that even though this is a problem for a scalar function $u(t)$, we had to set the initial condition as a "one-dimensional vector."
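As a quick illustration of that requirement (on the hypothetical test problem $u'=-u$, which is not part of this demo), the scalar initial value is wrapped in a list, and the solution components come back as rows of `sol.y`:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative problem (an assumption, not from the text): u' = -u, u(0) = 1,
# whose exact solution is exp(-t).
sol = solve_ivp(lambda t, u: -u, [0.0, 2.0], [1.0], rtol=1e-8, atol=1e-8)

# sol.y has shape (number of components, number of time nodes);
# for a scalar problem the solution values are the single row sol.y[0].
final = sol.y[0, -1]
print(final)   # close to exp(-2) ≈ 0.1353
```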
In [4]:
```
sol = solve_ivp(f,tspan,u0)
print("t shape:",sol.t.shape)
print("u shape:",sol.y.shape)
```
In [5]:
```
plot(sol.t,sol.y[0,:],"-o")
xlabel("$t$"); ylabel("$u(t)$");
title("Solution of $u'=sin((t+u)^2)$");
```
In [6]:
```
sol = solve_ivp(f,tspan,u0,t_eval=linspace(0,4,200))
plot(sol.t,sol.y[0,:],"-")
xlabel("$t$"); ylabel("$u(t)$");
title("Solution of $u'=sin((t+u)^2)$");
```
In [7]:
```
f = lambda t,u: sin((t+u)**2)
tspan = [0.0,4.0]
u0 = [-1.0]
sol = solve_ivp(f,tspan,u0,dense_output=True)
print(sol)
```

The `t` and `y` fields are as before. But there is another field, `sol`, that can be called like any other function of $t$.
In [8]:
```
print(sol.sol([0,1,2,3,4]))
```

The equation $u'=(u+t)^2$ gives us some trouble.

In [9]:
```
f = lambda t,u: (t+u)**2
sol = solve_ivp(f,[0.,1.],[1.0])
```

We requested the solution for $0\le t \le 1$, but the integration did not complete that job.

In [10]:
```
if not sol.success: print(sol.message)
```

In [11]:
```
semilogy(sol.t,sol.y[0,:])
xlabel("$t$"); ylabel("$u(t)$");
title("Blowup in finite time");
```

In [12]:
```
t = linspace(0,3,200)
u = array([ exp(t)*u0 for u0 in [0.7,1,1.3] ])
plot(t,u.T)
xlabel("$t$"); ylabel("$u(t)$");
title("Exponential divergence of solutions");
```

But with $u'=-u$, solutions actually get closer together with time.

In [13]:
```
t = linspace(0,3,200)
u = array([ exp(-t)*u0 for u0 in [0.7,1,1.3] ])
plot(t,u.T)
xlabel("$t$"); ylabel("$u(t)$");
title("Exponential convergence of solutions");
```

We consider the IVP $u'=\sin[(u+t)^2]$ over $0\le t \le 4$, with $u(0)=-1$.

In [14]:
```
f = lambda t,u: sin((t+u)**2)
tspan = [0.0,4.0]
u0 = -1.0
```
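The implementation `FNC.eulerivp` is not listed here. A minimal sketch of Euler's method with an assumed-similar interface (returning the time nodes and solution values after $n$ fixed steps) could look like:

```python
import numpy as np

def euler_sketch(f, tspan, u0, n):
    # Euler's method: u_{i+1} = u_i + h*f(t_i, u_i), with fixed step h.
    a, b = tspan
    h = (b - a) / n
    t = np.linspace(a, b, n+1)
    u = np.zeros(n+1)
    u[0] = u0
    for i in range(n):
        u[i+1] = u[i] + h*f(t[i], u[i])
    return t, u

# Usage on this demo's ODE u' = sin((t+u)^2), u(0) = -1:
t, u = euler_sketch(lambda t, u: np.sin((t+u)**2), [0.0, 4.0], -1.0, 20)
```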
In [15]:
```
t,u = FNC.eulerivp(f,tspan,u0,20)
fig,ax = subplots()
ax.plot(t,u,"-o",label="$n=20$")
xlabel("$t$"); ylabel("$u(t)$");
title("Solution by Euler's method");
legend();
```

In [16]:
```
t,u = FNC.eulerivp(f,tspan,u0,200)
ax.plot(t,u,label="$n=200$")
ax.legend(); fig
```

We use `solve_ivp` to construct an accurate solution.
In [17]:
```
sol = solve_ivp(f,tspan,[u0],dense_output=True,atol=1e-8,rtol=1e-8)
ax.plot(t,sol.sol(t)[0],"--",label="accurate")
ax.legend(); fig
```

Now we can perform a convergence study.

In [18]:
```
N = 50*2**arange(6)
err = zeros(6)
for (j,n) in enumerate(N):
    t,u = FNC.eulerivp(f,tspan,u0,n)
    err[j] = max( abs(sol.sol(t)[0]-u) )
    print(f"{n:5d} {err[j]:9.3e}")
```
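A quick way to confirm first-order convergence from such an error table is to take $\log_2$ of successive error ratios: each doubling of $n$ halves $h$, so the errors should roughly halve as well. A self-contained sketch on the hypothetical test problem $u'=-u$ (not the demo's ODE):

```python
import numpy as np

def euler_final_error(n):
    # Euler's method on u' = -u, u(0) = 1 over [0,1]; error at t = 1.
    h = 1.0 / n
    u = 1.0
    for _ in range(n):
        u += h * (-u)
    return abs(u - np.exp(-1.0))

N = 50 * 2**np.arange(6)
err = np.array([euler_final_error(n) for n in N])

# Observed order: log2 of the error ratio when n doubles (h halves).
order = np.log2(err[:-1] / err[1:])
print(order)   # entries should be close to 1 for a first-order method
```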

In [19]:
```
loglog(N,err,"-o",label="results")
plot(N,0.05*(N/N[0])**(-1),"--",label="1st order")
xlabel("$n$"); ylabel("inf-norm error");
legend(); title("Convergence of Euler's method");
```

In [20]:
```
A = array([ [-2,5], [-1,0] ])
```
In [21]:
```
u0 = [1,0]
t = linspace(0,6,600) # times for plotting
u = zeros([2,600])
for j in range(600):
    ut = dot(expm(t[j]*A),u0)
    u[:,j] = ut
```
In [22]:
```
plot(t,u[0,:],label="$u_1$")
plot(t,u[1,:],label="$u_2$")
xlabel("$t$"); ylabel("$u(t)$");
legend();
```

We encode the predator–prey equations via a function.

In [23]:
```
def predprey(t,u):
    y,z = u                   # rename for convenience
    s = (y*z) / (1+beta*y)    # appears in both equations
    return array([ y*(1-alpha*y) - s, -z + s ])
```

Note that the function accepts two inputs, `t` and `u`, even though `t` is not used. In order to use it, we also have to assign values to the global parameters `alpha` and `beta`.

To solve the IVP we must also provide the initial condition, which is a 2-vector here, and the interval for the independent variable.

In [24]:
```
u0 = [1,0.01]
tspan = [0.,80.]
alpha = 0.1; beta = 0.25;
sol = solve_ivp(predprey,tspan,u0,dense_output=True);
```
In [25]:
```
plot(sol.t,sol.y[0,:],label="prey")
plot(sol.t,sol.y[1,:],label="predator")
xlabel("$t$"); ylabel("population");
title("Predator-prey solution");
```

The plot above lacks smoothness because it uses just the discrete values, without interpolating between them.

When there are just two components, it's common to plot the solution in the *phase plane*, i.e., with $u_1$ and $u_2$ along the axes and time as a parameterization of the curve.

In [26]:
```
t = linspace(0,80,1200)
u = sol.sol(t)
plot(u[0,:],u[1,:])
xlabel("prey"); ylabel("predator");
title("Predator-prey phase plane");
```

We consider the IVP $u'=\sin[(u+t)^2]$ over $0\le t \le 4$, with $u(0)=-1$.

In [27]:
```
f = lambda t,u: sin((t+u)**2)
tspan = [0.0,4.0]
u0 = array([-1.0])
```

We use `solve_ivp` to construct an accurate approximation to the exact solution.
In [28]:
```
u_exact = solve_ivp(f,tspan,u0,dense_output=True,rtol=1e-13,atol=1e-13)
```
Now we perform a convergence study of our two Runge–Kutta implementations.
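The codes `FNC.ie2` and `FNC.rk4` are not listed here. For reference, a minimal sketch of the classical fourth-order Runge–Kutta method (which `rk4` presumably implements, under the same interface assumptions as before) is:

```python
import numpy as np

def rk4_sketch(f, tspan, u0, n):
    # Classical RK4: four stages per step, fourth-order accurate.
    a, b = tspan
    h = (b - a) / n
    t = np.linspace(a, b, n+1)
    u = np.zeros(n+1)
    u[0] = u0
    for i in range(n):
        k1 = f(t[i], u[i])
        k2 = f(t[i] + h/2, u[i] + h/2*k1)
        k3 = f(t[i] + h/2, u[i] + h/2*k2)
        k4 = f(t[i] + h,   u[i] + h*k3)
        u[i+1] = u[i] + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return t, u
```

On a smooth problem such as $u'=-u$, even $n=50$ steps of this method give errors near the $10^{-9}$ level, which is why RK4 serves as a workhorse one-step method.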

In [29]:
```
n = 50*2**arange(6)
err_IE2 = zeros(6)
err_RK4 = zeros(6)
for j in range(6):
    t,u = FNC.ie2(f,tspan,u0,n[j])
    err_IE2[j] = max( abs(u_exact.sol(t)-u)[0] )
    t,u = FNC.rk4(f,tspan,u0,n[j])
    err_RK4[j] = max( abs(u_exact.sol(t)-u)[0] )
```
In [30]:
```
loglog(2*n,err_IE2,"-o",label="IE2")
loglog(4*n,err_RK4,"-o",label="RK4")
plot(2*n,0.01*(n/n[0])**(-2),"--",label="2nd order")
plot(4*n,1e-6*(n/n[0])**(-4),"--",label="4th order")
xlabel("f-evaluations"); ylabel("inf-norm error");
legend(); title("Convergence of RK methods");
```

The fourth-order variant is more efficient in this problem over a wide range of accuracy.

Let's run adaptive RK on $u'=e^{t-u\sin u}$.

In [31]:
```
f = lambda t,u: exp(t-u*sin(u))
t,u = FNC.rk23(f,[0.,5.],array([0.0]),1e-5)
scatter(t,u)
xlabel("$t$"); ylabel("$u(t)$");
title("Adaptive IVP solution");
```

In [32]:
```
dt = [t[i+1] - t[i] for i in range(t.size-1)]
semilogy(t[:-1],dt)
xlabel("$t$"); ylabel("time step");
title("Adaptive step sizes");
```

If we had to run with a uniform step size to get this accuracy, it would be

In [33]:
```
print("min step:",min(dt))
```

On the other hand, the average step size that was actually taken was

In [34]:
```
print("average step:",mean(dt))
```
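Comparing these two quantities estimates the payoff from adaptivity: a uniform step size would have to equal the smallest adaptive step to achieve the same accuracy everywhere, so the uniform step count is the interval length divided by `min(dt)`, while the adaptive solver took only `len(dt)` steps. A sketch with hypothetical step-size data (not the values from the run above):

```python
import numpy as np

# Hypothetical adaptive step sizes: large early on, sharply smaller later.
dt = np.concatenate([np.full(40, 0.1), np.full(20, 0.01), np.full(10, 0.001)])
span = dt.sum()                             # total time integrated

n_adaptive = dt.size                        # steps actually taken
n_uniform = int(np.ceil(span / dt.min()))   # steps at the smallest step size

print(n_adaptive, n_uniform)   # the uniform count is far larger
```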

In [35]:
```
f = lambda t,u: (t+u)**2
t,u = FNC.rk23(f,[0,1],array([1.0]),1e-5)
```

In [36]:
```
semilogy(t,u[0],"-o")
xlabel("$t$"); ylabel("$u(t)$");
title("Finite-time blowup");
```

We consider the IVP $u'=\sin[(u+t)^2]$ over $0\le t \le 4$, with $u(0)=-1$.

In [37]:
```
f = lambda t,u: sin((t+u)**2)
tspan = [0.0,4.0]
u0 = array([-1.0])
```

We use `solve_ivp` to construct an accurate approximation to the exact solution.

In [38]:
```
u_exact = solve_ivp(f,tspan,u0,dense_output=True,rtol=1e-13,atol=1e-13)
```

Now we perform a convergence study of the AB4 code.
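`FNC.ab4` is not listed here. A minimal sketch of the fourth-order Adams–Bashforth method, using three RK4 steps for startup values (an assumption about how the book's code handles startup), is:

```python
import numpy as np

def ab4_sketch(f, tspan, u0, n):
    a, b = tspan
    h = (b - a) / n
    t = np.linspace(a, b, n+1)
    u = np.zeros(n+1)
    u[0] = u0
    fvals = np.zeros(n+1)
    fvals[0] = f(t[0], u[0])
    # Startup: three classical RK4 steps give u_1, u_2, u_3.
    for i in range(3):
        k1 = f(t[i], u[i])
        k2 = f(t[i] + h/2, u[i] + h/2*k1)
        k3 = f(t[i] + h/2, u[i] + h/2*k2)
        k4 = f(t[i] + h,   u[i] + h*k3)
        u[i+1] = u[i] + h/6*(k1 + 2*k2 + 2*k3 + k4)
        fvals[i+1] = f(t[i+1], u[i+1])
    # AB4: u_{i+1} = u_i + h/24*(55 f_i - 59 f_{i-1} + 37 f_{i-2} - 9 f_{i-3})
    for i in range(3, n):
        u[i+1] = u[i] + h/24*(55*fvals[i] - 59*fvals[i-1]
                              + 37*fvals[i-2] - 9*fvals[i-3])
        fvals[i+1] = f(t[i+1], u[i+1])
    return t, u
```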

In [39]:
```
n = 10*2**arange(6)
err = zeros(6)
for j in range(6):
    t,u = FNC.ab4(f,tspan,u0,n[j])
    err[j] = max( abs(u_exact.sol(t)-u)[0] )
```

The method should converge as $O(h^4)$, so a log-log scale is appropriate for the errors.

In [40]:
```
loglog(n,err,"-o",label="AB4")
loglog(n,0.1*(n/n[0])**(-4),"--",label="4th order")
xlabel("$n$"); ylabel("inf-norm error");
legend(); title("Convergence of AB4");
```

The following simple ODE uncovers a surprise.

In [41]:
```
f = lambda t,u: u**2 - u**3
u0 = array([0.005])
```
We will solve the problem first with the implicit AM2 method using $n=200$ steps.

In [42]:
```
tI,uI = FNC.am2(f,[0.,400.],u0,200);
fig,ax = subplots()
ax.plot(tI,uI[0],label="AM2")
xlabel("$t$"); ylabel("$y(t)$");
```

Now we repeat the process using the explicit AB4 method.

In [43]:
```
tE,uE = FNC.ab4(f,[0.,400.],u0,200);
ax.scatter(tE,uE[0],label="AB4")
ax.set_ylim([-4,2]);
ax.legend(); fig
```

Once the solution starts to take off, the AB4 result goes catastrophically wrong.

In [44]:
```
uE[0,104:111]
```

We hope that AB4 will converge in the limit $h\to 0$, so let's try using more steps.

In [45]:
```
plot(tI,uI[0],color="k",label="AM2")
tE,uE = FNC.ab4(f,[0,400],u0,1000);
plot(tE,uE[0],".",label="AB4, n=1000")
tE,uE = FNC.ab4(f,[0,400],u0,1600);
plot(tE,uE[0],".",label="AB4, n=1600")
xlabel("$t$"); ylabel("$y(t)$")
legend();
```
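The instability has a quantitative explanation. Once the solution approaches the steady state $u=1$, linearizing $f(u)=u^2-u^3$ there gives $f'(1)=2-3=-1$, and absolute stability of AB4 requires $h\lambda=-h$ to lie in its real stability interval, roughly $(-0.3,0)$ (the endpoint is approximate). Checking the step sizes used above is consistent with what we observed:

```python
# f'(u) = 2u - 3u^2, evaluated at the steady state u = 1.
lam = 2*1 - 3*1**2   # equals -1

for n in [200, 1000, 1600]:
    h = 400 / n
    # AB4 is stable only when roughly -0.3 < h*lam < 0;
    # of these, only n = 1600 satisfies that condition.
    print(n, h*lam)
```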