Stiff Problems, Implicit Methods and Computational Cost

By Thomas P Ogden (t.p.ogden@durham.ac.uk)


In [13]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Stiff Problems


In [14]:
y_0 = np.array([1.]) # Initial condition

N = 10 # Num of steps to take
t_max = 5.
t = np.linspace(0., t_max, N+1) # Array of time steps

solve_args = {}
solve_args['a'] = -15.

In [15]:
from exp import exp
from ode_int import ode_int_ee

y = ode_int_ee(exp, y_0, t, solve_args) # Solve the ODE with Explicit Euler

plt.plot(t, y[:,0], 'b-o', label='Explicit Euler')
plt.plot(t, np.exp(solve_args['a']*t), 'k--', label='Known')
plt.xlabel(r'$t$')
plt.ylabel(r'$y(t)$')
plt.legend(loc=2)

plt.show()


Stability

Applied to the test problem $y' = ay$, the explicit Euler step is $y_{n+1} = y_n + hay_n = (1 + ha)y_n$, so after $n$ steps $y_n = (1 + ha)^n y_0$. The numerical solution decays only if the amplification factor satisfies $|1 + ha| < 1$; the set of $ha$ values meeting this condition is the method's stability region, which for explicit Euler is the unit disc centred at $-1$ in the complex $ha$ plane. Above we have $a = -15$ and $h = 5/10 = 0.5$, so $1 + ha = -6.5$: each step multiplies the solution by $-6.5$, producing the growing oscillation we see in the plot, even though the true solution $\mathrm{e}^{-15t}$ decays almost immediately. Problems like this, where stability rather than accuracy forces us to take tiny steps, are called stiff.
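We can check the stability condition numerically. This is a minimal standalone sketch (it implements the explicit Euler update inline rather than importing from the ode_int module): with $N = 10$ steps we have $|1 + ha| = 6.5 > 1$ and the solution blows up, while with $N = 100$ steps $|1 + ha| = 0.25 < 1$ and it decays.

```python
import numpy as np

def euler_explicit(a, y0, t):
    """Explicit Euler for the scalar test problem y' = a*y."""
    y = np.empty(len(t))
    y[0] = y0
    for n in range(len(t) - 1):
        h = t[n + 1] - t[n]
        y[n + 1] = (1. + h*a)*y[n]  # amplification factor (1 + h*a)
    return y

a = -15.
t_max = 5.

# N = 10 gives h = 0.5, so |1 + h*a| = 6.5 > 1: unstable.
y_unstable = euler_explicit(a, 1., np.linspace(0., t_max, 11))
# N = 100 gives h = 0.05, so |1 + h*a| = 0.25 < 1: stable.
y_stable = euler_explicit(a, 1., np.linspace(0., t_max, 101))

print(abs(y_unstable[-1]))  # huge: the numerical solution has blown up
print(abs(y_stable[-1]))    # tiny: decays like the true solution
```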

Implicit Euler

The implicit (or backward) Euler method evaluates the derivative at the new point rather than the old one: $y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})$. In general this means solving a (possibly nonlinear) equation for $y_{n+1}$ at every step, but for the linear test problem $y' = ay$ we can rearrange directly: $y_{n+1} = y_n / (1 - ha)$. The amplification factor $1/(1 - ha)$ has magnitude less than $1$ for any $h > 0$ whenever $\mathrm{Re}(a) < 0$, so the method is stable at any step size (it is A-stable), which is exactly what we want for stiff problems.
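Since the backward Euler update is linear in $y_{n+1}$ for this problem, it can be solved in closed form. Here is a standalone sketch (a simplified helper, not the `ode_int_ie` a general ode_int module would need, which would have to do a nonlinear solve per step) applied to the stiff problem above with the same $N = 10$ steps:

```python
import numpy as np

def ode_int_ie_linear(a, y_0, t):
    """Implicit (backward) Euler for the scalar linear problem y' = a*y.

    The implicit update y_{n+1} = y_n + h*a*y_{n+1} rearranges to the
    closed form y_{n+1} = y_n / (1 - h*a).
    """
    y = np.empty((len(t), len(y_0)))
    y[0] = y_0
    for n in range(len(t) - 1):
        h = t[n + 1] - t[n]
        y[n + 1] = y[n] / (1. - h*a)
    return y

y_0 = np.array([1.])   # Initial condition
a = -15.
t = np.linspace(0., 5., 11)  # N = 10 steps, h = 0.5, as above

y = ode_int_ie_linear(a, y_0, t)
print(y[:, 0])  # decays monotonically toward 0, like exp(-15*t)
```

Even at the step size where explicit Euler blew up, the implicit method decays smoothly, because its amplification factor here is $1/8.5 \approx 0.12$.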

Computational Cost

If the Runge-Kutta method is $\mathcal{O}(h^4)$ accurate, why would we ever use a less accurate method like Explicit Euler? The answer, of course, is that the Runge-Kutta method we wrote does more work per step, so we expect it to be more expensive in computer time. If accuracy were the only concern we could pick a method truncating the Taylor expansion way up the series at some huge order, but it might take an age to run and we likely don't need that level of accuracy. We make a trade-off: a sufficient level of accuracy in our solution and a reasonable amount of time to find it.
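To see the trade-off concretely, here is a standalone sketch (inline single-step implementations, not the ode_int module's) comparing the two methods on $y' = y$ over $[0, 5]$ with the same $N = 100$ steps. Runge-Kutta makes four function evaluations per step to explicit Euler's one, but buys many orders of magnitude in accuracy:

```python
import numpy as np

def ee_step(f, y, t, h):
    return y + h*f(y, t)           # 1 function evaluation per step

def rk4_step(f, y, t, h):
    k1 = f(y, t)                   # 4 function evaluations per step
    k2 = f(y + h/2.*k1, t + h/2.)
    k3 = f(y + h/2.*k2, t + h/2.)
    k4 = f(y + h*k3, t + h)
    return y + h/6.*(k1 + 2.*k2 + 2.*k3 + k4)

f = lambda y, t: y                 # y' = y, exact solution exp(t)
N, t_max = 100, 5.
h = t_max/N

y_ee = y_rk = 1.
for n in range(N):
    y_ee = ee_step(f, y_ee, n*h, h)
    y_rk = rk4_step(f, y_rk, n*h, h)

exact = np.exp(t_max)
rel_ee = abs(y_ee - exact)/exact
rel_rk = abs(y_rk - exact)/exact
print(rel_ee)  # around 10% relative error
print(rel_rk)  # around 1e-7: vastly more accurate for 4x the evaluations
```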

(A Note on Notation: It's confusing, but we use 'Big O Notation' to quantify the computational complexity of a method, as well as the order of accuracy as we've already seen. I'll use the blackboard bold $\mathbb{O}$ to distinguish computational cost from calligraphic $\mathcal{O}$ for order of accuracy.)

As both the Explicit Euler and Runge-Kutta methods loop once through $N$ steps, executing a fixed sequence of instructions per step, we expect both to have a computational cost of order $\mathbb{O}(N^1)$, i.e. that the problem will be solved in linear time. When $N$ doubles, so should the running time in both cases. But with its four evaluations of the interval $k$-points per step, we expect the Runge-Kutta method to be slower by a constant factor. We can check this with Python's timeit module.


In [16]:
from exp import exp
import timeit

y_0 = np.array([1.]) # Initial condition

solve_args = {}
solve_args['a'] = 1.

t_max = 5.

from ode_int import ode_int_ee, ode_int_rk, ode_int_ab

def solve_exp_over_a(method, N, max_a):
    num_a_steps = 10
    t = np.linspace(0, 5., N+1)
    if method == 'ee':
        ode_int_exp = lambda a: ode_int_ee(exp, y_0, t, dict(solve_args, a=a))
    elif method == 'rk':
        ode_int_exp = lambda a: ode_int_rk(exp, y_0, t, dict(solve_args, a=a))
    elif method == 'ab':
        ode_int_exp = lambda a: ode_int_ab(exp, y_0, t, dict(solve_args, a=a))
#     elif method == 'ie':
#         ode_int_exp = lambda a: ode_int_ie(exp, y_0, t, dict(solve_args, a=a))
    for a in np.linspace(1., max_a, num_a_steps+1):
        ode_int_exp(a)               
        
N_list = [2, 20, 200, 2000]        

ee_time = np.zeros(len(N_list))
rk_time = np.zeros(len(N_list))
ab_time = np.zeros(len(N_list))
ie_time = np.zeros(len(N_list))

for i, N in enumerate(N_list):

    print('N: %d' % N)
  
    ee_time[i] = min(timeit.Timer('solve_exp_over_a("ee", %d, 5)'%N,
                    setup='from __main__ import solve_exp_over_a').repeat(7, 1))    
    rk_time[i] = min(timeit.Timer('solve_exp_over_a("rk", %d, 5)'%N,
                    setup='from __main__ import solve_exp_over_a').repeat(7, 1))
    ab_time[i] = min(timeit.Timer('solve_exp_over_a("ab", %d, 5)'%N,
                    setup='from __main__ import solve_exp_over_a').repeat(7, 1))
#     ie_time[i] = min(timeit.Timer('solve_exp_over_a("ie", %d, 5)'%N,
#                     setup='from __main__ import solve_exp_over_a').repeat(7, 1))


N: 2
N: 20
N: 200
N: 2000

In [17]:
plt.plot(N_list, ee_time, label='Explicit Euler')    
plt.plot(N_list, rk_time, label='Runge-Kutta')
plt.plot(N_list, ab_time, label='Adams-Bashforth')
# plt.plot(N_list, ie_time, label='Implicit Euler')
plt.legend(loc=2)


Out[17]:
<matplotlib.legend.Legend at 0x3876c10>

Next

In Part 4 we'll look at using the Scipy ODE integration pack and the QuTiP solver for the specific problem of solving the Schrödinger equation (for a closed quantum system) and Lindblad master equation (for an open quantum system).