QuTiP Example: Quantum System Subject to Coherent Feedback with Discrete Time-Delay

Arne L. Grimsmo
Université de Sherbrooke
arne.grimsmo@gmail.com $\newcommand{\ket}[1]{\left|#1\right\rangle}$ $\newcommand{\bra}[1]{\left\langle#1\right|}$

Introduction

This notebook shows how to use the memorycascade module, one of the modules for non-Markovian systems in qutip. This module is an implementation of the method introduced in Phys. Rev. Lett. 115, 060402 (2015) (arXiv link) to integrate the dynamics of open quantum systems coupled to a coherent feedback loop with a time-delay.

At the end of the notebook we also show how the memorycascade module can be used in conjunction with the transfertensormethod module.

In this notebook we consider a paradigmatic quantum optics example of a system subject to coherent feedback, namely a two-level atom in front of a mirror. The setup is illustrated in the figure below:

An atom is placed a distance $l$ in front of a mirror. The incoming field on the left side, $b_{\text{in}}(t)$, is taken to be a vacuum field. The field on the right side of the atom, i.e., the field between the atom and the mirror, creates a coherent feedback loop with time-delay $\tau = 2l/c$, where $c$ is the speed of light. The atom couples to the input field via a system operator $L_1$, and to the returning feedback field via a system operator $L_2$. We assume that an arbitrary phase shift, $\phi$, can be applied to the feedback field (e.g., there could be a phase shifter placed in the loop [not shown]). In addition, there can be Markovian non-radiative decay, described by a rate $\gamma_{\rm nr}$. The red arrow denotes a classical drive field, assumed to couple to the atom via a side channel.

Preamble

Imports


In [1]:
import numpy as np
import scipy as sp

import qutip as qt
from qutip.ipynbtools import version_table

import qutip.nonmarkov.memorycascade as mc

%matplotlib inline
import matplotlib.pyplot as plt

Problem setup


In [2]:
gamma = 1.0 # coupling strength to feedback reservoir
gamma_nr = 0.2*gamma # non-radiative decay
eps = 1.0*np.pi*gamma # classical drive strength, eps/2 = Rabi frequency
delta = 0. # detuning from the drive frequency

tau = np.pi/(eps) # time-delay, chosen to exactly match the Rabi period due to the drive
print('tau=', tau)
phi = 1.0*np.pi # phase shift in feedback loop

# Hamiltonian and jump operators
H_S = delta*qt.sigmap()*qt.sigmam() + 1j*eps*(qt.sigmam()-qt.sigmap())
# coupling at first port of feedback loop (into the loop)
L1 = np.sqrt(gamma)*qt.sigmam()
# coupling at second port of feedback loop (out of the loop)
L2 = np.exp(1j*phi)*L1

# Markovian decay channels
c_ops_markov = [np.sqrt(gamma_nr)*qt.sigmam()]

# initial state
rho0 = qt.ket2dm(qt.basis(2, 0))  # atom starts in the excited state

# integration times
times = np.arange(0.0, 3.0*tau, 0.01*tau)


tau= 1.0

Memory cascade simulation

The memory cascade method works by mapping the non-Markovian feedback problem onto a problem of $k$ identical cascaded copies of the system, where $k$ is determined by the time $t$ through $(k-1)\tau < t < k\tau$.

To use the memory cascade method in qutip, first create a MemoryCascade object. The syntax is

sim = MemoryCascade(H_S, L1, L2, S_matrix=None, c_ops_markov=None, integrator='propagator', parallel=False, options=None)

where

H_S is a system Hamiltonian (or a Liouvillian).

L1 and L2 are either single system operators or lists of system operators. L1 couples the system into the feedback loop, and L2 couples out of the loop. If L1 and L2 are lists, the optional argument S_matrix can be used to specify an $S$-matrix that determines which operator in L1 couples to which operator in L2 (note that L1 and L2 must have the same number of elements). By default S_matrix is set to an $n \times n$ identity matrix, where $n$ is the number of elements in L1/L2. Having multiple coupling operators into and out of the feedback loop can, for example, be used to describe composite systems emitting at multiple frequencies. The $S$-matrix can then be used to include, e.g., beam splitters mixing the different signals in the feedback loop (see the sketch after this list).

c_ops_markov is an optional list of additional Lindblad operators describing conventional Markovian noise channels, e.g., non-radiative decay.

integrator is a string which can be either 'propagator' or 'mesolve', specifying which method is used to integrate the dynamics. 'propagator' tends to be faster for larger systems (longer times).

parallel, if set to True, means the time-integration is parallelized. This is only implemented for integrator='propagator'.

options is an instance of the qutip.Options class for generic solver options, used in internal calls to qutip.propagator().
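As an illustration of the multi-operator case, here is a hedged sketch (not used in the rest of this notebook) where the atomic decay is split over two loop channels mixed by a 50/50 beam-splitter $S$-matrix; the operators and the matrix are purely illustrative:

# Illustrative only: two coupling operators into the loop and two out of it,
# connected by a 2x2 beam-splitter S-matrix.
La = np.sqrt(gamma/2)*qt.sigmam()
Lb = np.sqrt(gamma/2)*qt.sigmam()
S_bs = np.array([[1., 1.], [1., -1.]])/np.sqrt(2)  # 50/50 beam splitter between the channels
sim_multi = mc.MemoryCascade(H_S, [La, Lb], [np.exp(1j*phi)*La, np.exp(1j*phi)*Lb],
                             S_matrix=S_bs, c_ops_markov=c_ops_markov,
                             integrator='mesolve')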


In [3]:
sim = mc.MemoryCascade(H_S, L1, L2, c_ops_markov=c_ops_markov, integrator='mesolve')

Reduced atom dynamics

To compute the reduced density matrix of the atom at time $t$ with time-delay $\tau$, simply call the method rhot of the MemoryCascade object.


In [4]:
%time rho = [sim.rhot(rho0, t, tau) for t in times]


CPU times: user 27.9 s, sys: 316 ms, total: 28.2 s
Wall time: 16.8 s

Now let's plot the atomic inversion as a function of time:


In [5]:
fig, ax = plt.subplots(figsize=(10, 7))
ax.plot(times, qt.expect(qt.sigmaz(), rho), linewidth=3.0)
ax.set_xlabel(r'$\gamma t$', fontsize=20)
ax.set_ylabel(r'$\langle \sigma_z \rangle$', fontsize=20)


Out[5]:
<matplotlib.text.Text at 0x112563e48>

Output field dynamics

The MemoryCascade class also has a convenient method called outfieldcorr that allows you to compute any output field correlation function of the type

$$ \langle c_1(t_1) c_{2}(t_{2}) \dots c_n(t_n) \rangle $$

where each $c_i(t_i)$ is one of $b_{\rm out}(t_i)$ or $b_{\rm out}^\dagger (t_i)$ (see the figure at the top). Below we use outfieldcorr to compute the photon number and the $g^{(2)}(0,t)$ correlation function of the output field.

The syntax of outfieldcorr is

outfieldcorr(rho0, blist, tlist, tau)

where

rho0 is the atom's initial state

blist is a list of integers specifying the operators $c_i$, where an entry of 1 means $b_{\rm out}$ and an entry of 2 means $b_{\rm out}^\dagger$. So, for example blist = [1, 2, 2, 1] means that we want to compute $\langle b_{\rm out}(t_1) b_{\rm out}^\dagger(t_2) b_{\rm out}^\dagger(t_3) b_{\rm out}(t_4)\rangle$.

tlist is the corresponding list of times, $t_1, t_2, \dots, t_n$.

tau is as usual the time-delay.
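For instance, the fourth-order correlator mentioned above can be evaluated as follows (a sketch; the times $t_1, \dots, t_4$ are arbitrary and chosen only to illustrate the call):

# Illustrative only: <b_out(t1) b_out^dag(t2) b_out^dag(t3) b_out(t4)>,
# using the convention 1 -> b_out, 2 -> b_out^dag.
t1, t2, t3, t4 = 0.5*tau, 1.0*tau, 1.5*tau, 2.0*tau
corr = sim.outfieldcorr(rho0, [1, 2, 2, 1], [t1, t2, t3, t4], tau)
print(corr)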

Output field photon number


In [6]:
%time bdb = [sim.outfieldcorr(rho0, [2, 1], [t, t], tau) for t in times]


CPU times: user 28.2 s, sys: 338 ms, total: 28.5 s
Wall time: 17.2 s

In [7]:
fig, ax = plt.subplots(figsize=(10, 7))
ax.plot(times, bdb, linewidth=3.0)
ax.set_xlabel(r'$\gamma t$', fontsize=20)
ax.set_ylabel(r'$\langle b^\dagger b \rangle$', fontsize=20)


Out[7]:
<matplotlib.text.Text at 0x11296d8d0>

Output field second order correlation function


In [8]:
%time g2 = [sim.outfieldcorr(rho0, [2, 1, 2, 1], [times[i], times[i], 0., 0.], tau)/(bdb[0]*bdb[i]) for i in range(len(times))]


CPU times: user 26.1 s, sys: 283 ms, total: 26.4 s
Wall time: 15.9 s

In [9]:
fig, ax = plt.subplots(figsize=(10,7))
ax.plot(times, g2, linewidth=3.0)
ax.set_xlabel(r'$\gamma t$', fontsize=20)
ax.set_ylabel(r'$g^{(2)}(0,t)$', fontsize=20)


Out[9]:
<matplotlib.text.Text at 0x112a7e048>

Extrapolate to large times using the Transfer Tensor Method

Since the memory cascade method maps the non-Markovian problem onto a chain of $k$ cascaded systems, where $(k-1)\tau < t < k\tau$, it is intractable for large times due to the exponential growth of the Hilbert space with $k$.
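To see the scaling concretely, here is a small sketch estimating the Hilbert-space dimension of the cascaded chain for our two-level atom (the example times are arbitrary):

# Rough scaling estimate: k copies of a two-level system give a 2**k dimensional
# Hilbert space (and a (2**k)**2 dimensional Liouville space).
for t in [1*tau, 3*tau, 5*tau, 10*tau]:
    k = int(np.ceil(t/tau))
    print('t = {:.1f}: k = {}, Hilbert space dimension = {}'.format(t, k, 2**k))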

A useful approach is therefore to use the memory cascade method in conjunction with the Transfer Tensor Method (TTM), implemented in qutip in the transfertensormethod module in the nonmarkov subpackage.

The usage of the transfertensormethod module is discussed in more detail in the example-transfer-tensor-method notebook.


In [10]:
import qutip.nonmarkov.transfertensor as ttm

Construct a list of exact time propagators to learn from

The MemoryCascade class also has a method propagator that returns the time-propagator for the atom at a time $t$, i.e., the superoperator $\mathcal{E}(t)$ such that

$$ \rho(t) = \mathcal{E}(t)\rho(0), $$

where $\rho(t)$ is the state of the atom. We compute a list of exact propagators $\mathcal{E}(t_k)$ for a set of "learning times" $t_k$, which we then use as input to the TTM.
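As a quick consistency check (a sketch, assuming propagator returns the superoperator as an ordinary QuTiP Qobj), applying $\mathcal{E}(t)$ to the vectorized initial state should reproduce what rhot returns:

# Sketch of a consistency check: E(t) applied to rho0 should agree with rhot.
t_check = 1.5*tau
E = sim.propagator(t_check, tau)
rho_from_E = qt.vector_to_operator(E*qt.operator_to_vector(rho0))
print((rho_from_E - sim.rhot(rho0, t_check, tau)).norm())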


In [11]:
learningtimes = np.arange(0, 3*tau, 0.1*tau) # short times to learn from
%time learningmaps = [sim.propagator(t, tau) for t in learningtimes] # generate exact dynamical maps to learn from


CPU times: user 2.71 s, sys: 48 ms, total: 2.76 s
Wall time: 1.74 s

Compute approximate solution for long times using the TTM


In [12]:
longtimes = np.arange(0, 10*tau, 0.1*tau) # long times for extrapolation
%time ttmsol = ttm.ttmsolve(learningmaps, rho0, longtimes) # extrapolate using TTM


CPU times: user 1.26 s, sys: 14.3 ms, total: 1.27 s
Wall time: 1.23 s

Plot and compare


In [13]:
fig, ax = plt.subplots(figsize=(10, 7))
ax.plot(times, qt.expect(qt.sigmaz(), rho), linewidth=3.0)
ax.plot(longtimes, qt.expect(qt.sigmaz(), ttmsol.states), '--k', linewidth=3.0)
ax.set_xlabel(r'$\gamma t$', fontsize=20)
ax.set_ylabel(r'$\langle \sigma_z \rangle$', fontsize=20)


Out[13]:
<matplotlib.text.Text at 0x112dd33c8>

Discussion

The above example shows how the memory cascade method can work well in conjunction with the TTM. The list of learning times necessary to get good results with the TTM will vary from problem to problem and from parameter set to parameter set. There is also no guarantee that the result is correct, but one can check convergence by increasing the set of learning times (see the sketch below).
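A minimal convergence check, sketched below, repeats the extrapolation with a shorter learning window and compares the extrapolated $\langle \sigma_z \rangle$ (the threshold for acceptable agreement is problem dependent):

# Sketch: redo the TTM extrapolation with fewer learning maps and compare.
shorttimes = np.arange(0, 2*tau, 0.1*tau)
shortmaps = [sim.propagator(t, tau) for t in shorttimes]
ttmsol_short = ttm.ttmsolve(shortmaps, rho0, longtimes)
sz_long = np.array(qt.expect(qt.sigmaz(), ttmsol.states))
sz_short = np.array(qt.expect(qt.sigmaz(), ttmsol_short.states))
print('max deviation:', np.max(np.abs(sz_long - sz_short)))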


In [14]:
version_table()


Out[14]:
Software          Version
QuTiP             4.2.0
Numpy             1.13.1
SciPy             0.19.1
matplotlib        2.0.2
Cython            0.25.2
Number of CPUs    2
BLAS Info         INTEL MKL
IPython           6.1.0
Python            3.6.1 |Anaconda custom (x86_64)| (default, May 11 2017, 13:04:09) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)]
OS                posix [darwin]
Wed Jul 19 22:19:57 2017 MDT

In [ ]: