Integrating with SciPy and QuTiP

By Thomas P Ogden (t.p.ogden@durham.ac.uk)

Here we introduce the SciPy ODE integration package and the QuTiP solver for the specific problems of solving the Schrödinger equation (for a closed quantum system) and the Lindblad master equation (for an open quantum system).


In [1]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Integrating with SciPy

Now that we've gone through writing a bunch of ODE integrators, we're at the point where we find out why we'll (almost) never need to use them: like any good numerical library, SciPy already has advanced solvers available to us in its scipy.integrate toolbox. For example, the odeint method allows us to:

Solve a system of ordinary differential equations using lsoda from the FORTRAN library odepack.

These solvers use highly optimised methods with adaptive stepsizes, and can solve stiff or non-stiff problems. We always want to use them where we can. Let's test odeint with our exp ODE in the same way we did with the methods we wrote.


In [6]:
def exp(y, t, args):
    """ An exponential function described as a first-order ODE. """
    
    dydt = args['a']*y
    return dydt

In [24]:
y_0 = np.array([1.]) # Initial condition

t = np.linspace(0., 5., 100+1) # Array of time steps

solve_args = {'a':1}

In [25]:
from scipy.integrate import odeint

y = odeint(exp, y_0, t, args=(solve_args,))

plt.plot(t,y,'r', label='odeint')
plt.plot(t, np.exp(solve_args['a']*t), 'k--', label='Known')

plt.legend(loc=2)


Out[25]:
<matplotlib.legend.Legend at 0x109f02b50>

So we see that the odeint integrator solved the system to a high level of accuracy – the two lines are indistinguishable at this scale. You might notice the basic methods we wrote had a similar interface to odeint — that was deliberate. But odeint can take a lot more arguments to optimise it for particular problems. I recommend reading through the documentation to understand the sorts of things you can do with it.
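As a taste of those extra arguments, here's a minimal sketch passing tighter error tolerances, supplying an analytic Jacobian and requesting solver diagnostics for our exp ODE (the keyword names are real odeint options; the tolerance values and the exp_jac helper are just illustrative):

In [ ]:
def exp_jac(y, t, args):
    """ Jacobian of the exp ODE: d(a*y)/dy = a. """
    return np.array([[args['a']]])

# rtol/atol set the local error tolerances, Dfun supplies the analytic
# Jacobian, and full_output=True also returns a dict of solver diagnostics.
y, info = odeint(exp, y_0, t, args=(solve_args,), Dfun=exp_jac,
                 rtol=1e-10, atol=1e-12, full_output=True)

print(info['hu'][:5]) # Step sizes the solver actually used (first few)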

A Delayed Motivation

So why did we go through Parts 1 to 3, if a better integration toolbox already exists? My hope is that by examining simple solvers yourself, you now:

  1. understand the principles of what scipy.integrate is doing. It's no longer a black box, just a more advanced version of what we made ourselves;
  2. understand the principles of accuracy, stability and computational cost in ODE integrators.

You may also find specific problems where the SciPy method isn't appropriate, so you'll need to write your own.

Integrating the Schrödinger Equation and Lindblad Master Equation with QuTiP

Note: All of the below uses QuTiP v2.2.0.

In atom optics and quantum optics, the most common ODEs for us to be solving are those related to the evolution of a quantum system. QuTiP (The Quantum Toolbox in Python) is an excellent library for dealing with quantum systems, so we'll look at a basic example of solving evolution ODEs.

We'll take a two-level atom with ground state $|g\rangle$ and excited state $|e\rangle$ and introduce an interaction with a classical electromagnetic field.

We'll write the vector representation of the system as

$$ \Psi = \left[ \begin{array}{c} c_g \\ c_e \end{array} \right]. $$

Applying the electric dipole and rotating-wave approximations, the Hamiltonian is given by

$$ \mathcal{H} = \hbar \left[ \begin{array}{cc} 0 & \Omega^*/2 \\ \Omega/2 & -\Delta \end{array} \right] $$

where $\Omega$ is the Rabi frequency and $\Delta$ the laser detuning.

Unitary Evolution with the Schrödinger Equation

To begin with we'll neglect spontaneous decay of the excited state, such that we have a closed quantum system. The dynamics of such a system are described by the time-dependent Schrödinger equation,

$$ \mathrm{i} \hbar \frac{\partial \Psi}{\partial t} = \mathcal{H} \Psi. $$

Let's see how to solve this ODE — the time-dependent Schrödinger equation — with QuTiP.


In [4]:
import qutip as qu

First we'll specify the Hamiltonian.


In [5]:
Omega = 2*np.pi*10. # [2π MHz]
Delta = 2*np.pi*0. # [2π MHz]

H = qu.Qobj([[      0., Omega/2.],
             [Omega/2.,   -Delta]])
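Equivalently, and as a check against the bra-ket form above, we could build the same Hamiltonian from outer products of the basis kets. A minimal sketch (g, e and H_check are just local labels, working in units where $\hbar = 1$):

In [ ]:
# The same Hamiltonian built from |g> and |e> directly.
g = qu.basis(2, 0)
e = qu.basis(2, 1)

H_check = Omega/2.*(g*e.dag() + e*g.dag()) - Delta*e*e.dag()

print((H - H_check).full()) # Should be all zeros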

Now we'll specify the list of time points we want to solve the system over, and the initial condition — we'll start with all population in the ground state.


In [6]:
t_max_in_pi = 5
t_range = np.linspace(0, t_max_in_pi*np.pi/Omega, 100+1) # [µs]

psi_0 = qu.basis(2,0) # Initial condition, all in ground state

To solve the system we use the qu.mesolve method. This has 5 required parameters:

  • the Hamiltonian to be solved (we'll pass H)
  • the initial condition (psi_0)
  • the list of time points we want to solve the system over (t_range)
  • a list of collapse operators. For a closed system we have no collapse operators, so this is blank ([])
  • a list of operators whose expectation values we wish to find at each time point. If we leave this blank, we just get back the quantum state $\Psi$, which is all we want for now (so we pass []).

In [7]:
ode_data = qu.mesolve(H, psi_0, t_range, [], []) # Solve with QuTiP solver

print(ode_data)


Odedata object with mesolve data.
---------------------------------
states = True
num_collapse = 0

The return variable is an instance of the class qu.Odedata. As described in the documentation:

This object is a qutip.Odedata class that stores all the crucial data needed for analyzing and plotting the results of a simulation.

It's worth reading the rest of the docs for this class to see what you can do with it. As we specified an empty list of operators for which to find expectation values, the states property of ode_data contains a list of state vectors $[c_g; c_e]$, one for each time point in t_range.
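As an aside, if we'd rather not unpack the states ourselves, we could instead pass projection operators in that final list and have mesolve return the expectation values (here the populations) directly. A minimal sketch, reusing H, psi_0 and t_range from above:

In [ ]:
# Projectors onto |g> and |e>, passed as the final argument (e_ops) so that
# mesolve returns expectation values rather than the full states.
P_g = qu.basis(2, 0) * qu.basis(2, 0).dag()
P_e = qu.basis(2, 1) * qu.basis(2, 1).dag()

expt_data = qu.mesolve(H, psi_0, t_range, [], [P_g, P_e])

# expt_data.expect[0] and expt_data.expect[1] hold |c_g|^2 and |c_e|^2
# at each time point in t_range.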

Here we'll work with the states themselves, extracting $c_g$ and $c_e$ and plotting them over time.


In [8]:
c_g = np.zeros(len(t_range), dtype=complex)
c_e = np.zeros(len(t_range), dtype=complex)

for i, state in enumerate(ode_data.states):
    c_g[i] = state[0,0]
    c_e[i] = state[1,0]    

plt.plot(t_range, abs(c_g)**2, 'b', label=r'$c_g$')
plt.plot(t_range, abs(c_e)**2, 'r', label=r'$c_e$')
plt.xlabel(r't ($\mu$s)')
plt.legend()


Out[8]:
<matplotlib.legend.Legend at 0x5311b10>

So we see the probability of finding the atom in the excited state, $|c_e|^2$, oscillates at the Rabi frequency $\Omega$.
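As a quick cross-check linking back to the SciPy section, we could integrate the same amplitude equations with odeint, stacking the real and imaginary parts of $c_g$ and $c_e$ since odeint works with real arrays. A minimal sketch (tls_schrodinger and y_0_ri are just local labels):

In [ ]:
# The Schrödinger equation dc/dt = -i (H/hbar) c for the two-level atom,
# written as a real first-order system so that odeint can handle it.
def tls_schrodinger(y, t, Omega, Delta):
    c_g, c_e = y[0] + 1j*y[2], y[1] + 1j*y[3]
    dc_g = -1j*(Omega/2.)*c_e
    dc_e = -1j*((Omega/2.)*c_g - Delta*c_e)
    return [dc_g.real, dc_e.real, dc_g.imag, dc_e.imag]

y_0_ri = [1., 0., 0., 0.] # All population in |g> at t = 0

sol = odeint(tls_schrodinger, y_0_ri, t_range, args=(Omega, Delta))
c_e_odeint = sol[:,1] + 1j*sol[:,3]

plt.plot(t_range, abs(c_e_odeint)**2, 'r--', label=r'$|c_e|^2$ (odeint)')
plt.legend()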

Non-Unitary Evolution with the Lindblad Master Equation

If we wish to include spontaneous decay to the environment, we need to follow the non-unitary evolution of the system. To do this we switch from following the state vector to following the density operator $\rho$, and instead of the Schrödinger equation we evolve the system with the Lindblad master equation,

$$ \dot{\rho}(t) = \frac{1}{\mathrm{i}\hbar} \left[ \mathcal{H}(t), \rho(t) \right] + \mathcal{L} $$

where the Lindblad superoperator $\mathcal{L}$ is given by

$$ \mathcal{L} = \sum_n \left[ C_n \rho C_n^\dagger - \tfrac{1}{2} \left( \rho C_n^\dagger C_n + C_n^\dagger C_n \rho \right) \right] $$

The collapse operators $C_n$ are the operators through which the system couples to the environment. In the example of including spontaneous decay from $\left| e \right>$ to $\left| g \right>$ in our two-level atom, we have just one collapse operator,

$$ C = \sqrt{\Gamma} \left| g \right> \left< e \right| $$

where $\Gamma$ is the natural decay rate.
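Written out for this single collapse operator, using $C^\dagger C = \Gamma \left| e \right> \left< e \right|$, the superoperator reduces to

$$ \mathcal{L} = \Gamma \left[ \left| g \right> \left< e \right| \rho \left| e \right> \left< g \right| - \tfrac{1}{2} \left( \rho \left| e \right> \left< e \right| + \left| e \right> \left< e \right| \rho \right) \right], $$

which decays the excited-state population at rate $\Gamma$ and the coherences at rate $\Gamma/2$.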

In QuTiP, to solve the master equation we use the qu.mesolve method just as before, but this time pass our collapse operator in the list instead of leaving it empty.


In [48]:
Gamma = 2*np.pi*1. # [2π MHz]

# Collapse operator C = sqrt(Gamma)|g><e| in the {|g>, |e>} basis
C_Gamma = qu.Qobj([[0., np.sqrt(Gamma)],
                   [0.,             0.]])

# Solve with QuTiP solver
ode_data = qu.mesolve(H, psi_0, t_range, [C_Gamma], [])

# With a collapse operator the returned states are density matrices, so we
# extract the populations rho_gg and rho_ee from the diagonal.
rho_gg = np.zeros(len(t_range))
rho_ee = np.zeros(len(t_range))

for i, state in enumerate(ode_data.states):
    rho_gg[i] = state[0,0].real
    rho_ee[i] = state[1,1].real

plt.plot(t_range, rho_gg, 'b', label=r'$\rho_{gg}$')
plt.plot(t_range, rho_ee, 'r', label=r'$\rho_{ee}$')
plt.xlabel(r't ($\mu$s)')
plt.legend()


Out[48]:
<matplotlib.legend.Legend at 0x8b70e90>

Now we see that the spontaneous decay damps the oscillations and the system tends toward a steady state (similar to friction in the pendulum in Example 2).
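We can also ask QuTiP for that steady state directly, rather than running the time evolution out to long times. A minimal sketch using qu.steadystate, reusing H and C_Gamma from above (check the docs of your QuTiP version for the available options):

In [ ]:
# Solve d(rho)/dt = 0 directly for the steady-state density matrix.
rho_ss = qu.steadystate(H, [C_Gamma])

# Steady-state excited-state population, to compare with the long-time
# limit of rho_ee in the plot above.
print(rho_ss[1,1].real)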

QuTiP has many more useful features, which you'll find documented on its website.

Conclusion

We've looked at some basic numerical integration methods: the Explicit Euler and Runge-Kutta methods, multistep methods via the Adams-Bashforth Two-Step method, and the Implicit Euler method for stiff problems. We've looked at how to rewrite a higher-order ODE as a system of first-order ODEs so that it can be solved. We've looked at the importance of accuracy, stability and computational cost in choosing numerical methods. Finally, we've introduced the SciPy ODE integration package and the QuTiP solver for the time evolution of closed and open quantum systems.

Many more families of integration methods exist, some general and some useful for particular classes of problems. And there are many ways of optimising and tuning these methods, such as adaptive stepsize and use of the system Jacobian. Numerical Recipes (see References) is a good place to start further reading.

References

  • Jos Thijssen. Computational Physics. (Cambridge University Press, March 2007).
  • William H. Press et al. Numerical Recipes 3rd Edition: The Art of Scientific Computing. (Cambridge University Press, August 2007).
  • Lennart Edsberg. An Introduction to Computation and Modeling for Differential Equations. (Wiley-Blackwell, August 2008).
  • SciPy: Fundamental library for scientific computing
  • QuTip: Quantum Toolbox in Python
