Partial Differential Equations

If there's anything that needs serious computing power it's the solution of PDEs. However, you can go a long way to getting intuition on complex problems with simple numerical methods.

Let's take parts of the doped semiconductor model, concentrating on the time-dependent behaviour, simplify the constants, and restrict to one spatial dimension:

\begin{align} \frac{\partial p}{\partial t} + \frac{\partial}{\partial x} \left( p \frac{\partial}{\partial x} \left( \frac{1}{p} \right) \right) &= \left( n_i^2 - np \right) + G, \\ \frac{\partial n}{\partial t} + \frac{\partial}{\partial x} \left( n \frac{\partial}{\partial x} \left( \frac{1}{n} \right) \right) &= \left( n_i^2 - np \right) + G. \end{align}

This is a pair of coupled PDEs for $n, p$ in terms of physical and material constants. In the full model the quasi-Fermi levels $E_{F_{n,p}}$ depend on $n, p$; here we've replaced them with terms proportional to $\{n, p\}^{-1}$.

We'll write this in the form

\begin{equation} \frac{\partial {\bf y}}{\partial t} + \frac{\partial}{\partial x} \left( {\bf g}({\bf y}) \frac{\partial}{\partial x} {\bf h}({\bf y}) \right) = {\bf f}({\bf y}). \end{equation}
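
Matching terms against the semiconductor system above, ${\bf y} = (p, n)^T$ and

\begin{equation} {\bf g}({\bf y}) = \begin{pmatrix} p \\ n \end{pmatrix}, \qquad {\bf h}({\bf y}) = \begin{pmatrix} p^{-1} \\ n^{-1} \end{pmatrix}, \qquad {\bf f}({\bf y}) = \left( n_i^2 - np + G \right) \begin{pmatrix} 1 \\ 1 \end{pmatrix}. \end{equation}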

Finite differencing

We used finite differencing when looking at IVPs: we introduced a grid of points $x_j$ in space and replaced derivatives with finite difference expressions. For example, we saw the forward difference approximation

\begin{equation} \left. \frac{\text{d} y}{\text{d} x} \right|_{x = x_j} = \frac{y_{j+1} - y_j}{\Delta x} + {\cal O} \left( \Delta x^1 \right), \end{equation}

and the central difference approximation

\begin{equation} \left. \frac{\text{d} y}{\text{d} x} \right|_{x = x_j} = \frac{y_{j+1} - y_{j-1}}{2 \Delta x} + {\cal O} \left( \Delta x^2 \right). \end{equation}
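
As a quick sanity check (a minimal sketch, with $y = \sin x$ as an assumed test function), halving $\Delta x$ should roughly halve the forward difference error and quarter the central difference error:

In [ ]:
import numpy

# Test function y = sin(x); the exact derivative at x0 is cos(x0).
x0 = 1.0
exact = numpy.cos(x0)
for dx in [0.1, 0.05, 0.025]:
    forward = (numpy.sin(x0 + dx) - numpy.sin(x0)) / dx
    central = (numpy.sin(x0 + dx) - numpy.sin(x0 - dx)) / (2 * dx)
    print(dx, abs(forward - exact), abs(central - exact))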

This extends to partial derivatives, and to more than one variable. We introduce a grid in time, $t^n$, and denote $y(t^n, x_j) = y^n_j$. Then we can do, say, forward differencing in time and central differencing in space:

\begin{align} \left. \frac{\partial y}{\partial t} \right|_{x = x_j, t = t^n} &= \frac{y^{n+1}_{j} - y^{n}_{j}}{\Delta t}, \\ \left. \frac{\partial y}{\partial x} \right|_{x = x_j, t = t^n} &= \frac{y^{n}_{j+1} - y^{n}_{j-1}}{2 \Delta x}. \end{align}

Diffusion equation

To illustrate the simplest way of proceeding we'll look at the diffusion equation

\begin{equation} \frac{\partial y}{\partial t} - \frac{\partial^2 y}{\partial x^2} = 0. \end{equation}

When finite differences are used, with the centred second derivative approximation

\begin{equation} \left. \frac{\text{d}^2 y}{\text{d} x^2} \right|_{x = x_j} = \frac{y_{j+1} + y_{j-1} - 2 y_{j}}{\Delta x^2} + {\cal O} \left( \Delta x^2 \right), \end{equation}

we find the approximation

\begin{align} && \frac{y^{n+1}_{j} - y^{n}_{j}}{\Delta t} - \frac{y^{n}_{j+1} + y^{n}_{j-1} - 2 y^{n}_{j}}{\Delta x^2} &= 0 \\ \implies && y^{n+1}_{j} &= y^{n}_{j} + \frac{\Delta t}{\Delta x^2} \left( y^{n}_{j+1} + y^{n}_{j-1} - 2 y^{n}_{j} \right). \end{align}

Given initial data and boundary conditions, this can be solved.

Let's implement this scheme for the heat equation with homogeneous Dirichlet boundary conditions ($y(t, 0) = 0 = y(t, 1)$) and simple initial data ($y(0, x) = x (1 - x)$), using a spatial step size of $\Delta x = 10^{-2}$ and a time step of $\Delta t = 10^{-5}$, solving to $t = 0.1$ ($10000$ steps).


In [ ]:
import numpy
from matplotlib import pyplot
%matplotlib notebook

In [ ]:
dt = 1e-5
dx = 1e-2
x = numpy.arange(0,1+dx,dx)
y = x * (1 - x)

In [ ]:
def update_heat(y, dt, dx):
    # FTCS update for the diffusion equation: returns the increment
    # y^{n+1} - y^n at the interior points. The endpoints stay at
    # zero, enforcing the homogeneous Dirichlet boundary conditions.
    dy = numpy.zeros_like(y)
    dy[1:-1] = dt/dx**2 * (y[2:] + y[:-2] - 2*y[1:-1])
    return dy

In [ ]:
Nsteps = 10000
for n in range(Nsteps):
    update = update_heat(y, dt, dx)
    y += update

In [ ]:
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()
Note

The animations that follow will only work if you have ffmpeg installed. Even then, they may be touchy. The plots should always work.


In [ ]:
from matplotlib import animation
import matplotlib
matplotlib.rcParams['animation.html'] = 'html5'

In [ ]:
y = x * (1 - x)
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(0,0.3))
line, = ax.plot([], [])
pyplot.close()

def init():
    ax.set_xlabel(r"$x$")
    line.set_data([], [])
    return line,

def update(i, y):
    # Advance 100 timesteps per animation frame.
    for n in range(100):
        y += update_heat(y, dt, dx)
    line.set_data(x, y)
    return line,

animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=100, interval=100, blit=True)

The solution looks good: it's smooth, and the initial profile is diffusing nicely. Try something a bit more complex, such as $y(0, x) = \sin^4(4 \pi x)$, to see the diffusive effects:


In [ ]:
y = numpy.sin(4.0*numpy.pi*x)**4
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()

In [ ]:
Nsteps = 10000
for n in range(Nsteps):
    update = update_heat(y, dt, dx)
    y += update

In [ ]:
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()

All the features are smoothing out, as expected.


In [ ]:
y = numpy.sin(4.0*numpy.pi*x)**4
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(0,1))
line, = ax.plot([], [])
pyplot.close()

def init():
    ax.set_xlabel(r"$x$")
    line.set_data([], [])
    return line,

def update(i, y):
    # Advance 20 timesteps per animation frame.
    for n in range(20):
        y += update_heat(y, dt, dx)
    line.set_data(x, y)
    return line,

animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=50, interval=100, blit=True)

However, we used a really small timestep to get these results. It would be less numerical work if we increased the timestep. Let's try only $100$ steps with $\Delta t = 10^{-4}$:


In [ ]:
dt = 1e-4
Nsteps = 100
y = numpy.sin(4.0*numpy.pi*x)**4
for n in range(Nsteps):
    update = update_heat(y, dt, dx)
    y += update
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()

In [ ]:
y = numpy.sin(4.0*numpy.pi*x)**4
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(-1,3))
line, = ax.plot([], [])
pyplot.close()

def init():
    ax.set_xlabel(r"$x$")
    line.set_data([], [])
    return line,

def update(i, y):
    # One timestep per frame: the instability grows fast enough.
    y += update_heat(y, dt, dx)
    line.set_data(x, y)
    return line,

animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=50, interval=100, blit=True)

This doesn't look good - it's horribly unstable, with the results blowing up very fast.

The problem is that the size of the timestep really matters. A von Neumann stability analysis can be used to show that only when

\begin{equation} \frac{\Delta t}{\Delta x^2} < \frac{1}{2} \end{equation}

are the numerical results stable, and hence trustworthy. This is a real problem when you want to improve accuracy by increasing the number of points, and hence decreasing $\Delta x$: with $\Delta x = 10^{-3}$ we already need $\Delta t < 5 \times 10^{-7}$!
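
To make the bound concrete, here is a small helper (a sketch; the name and the safety factor are our own additions, not part of the analysis) returning the largest stable timestep for a given $\Delta x$:

In [ ]:
def max_stable_dt(dx, safety=0.9):
    # Von Neumann bound for this explicit scheme: dt/dx**2 < 1/2.
    # The safety factor keeps us away from the marginal case.
    return safety * 0.5 * dx**2

for dx in [1e-2, 1e-3]:
    print(dx, max_stable_dt(dx))

For $\Delta x = 10^{-2}$ this gives $\Delta t \lesssim 4.5 \times 10^{-5}$, consistent with the stable run at $\Delta t = 10^{-5}$ and the unstable one at $\Delta t = 10^{-4}$.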

Exercise

Check, by changing $\Delta x$ and re-running the simulations, that you see numerical instabilities when this stability bound is violated. You'll only need to take a few tens of timesteps irrespective of the value of $\Delta t$.

Full problem

Finally, we can evaluate the PDE at each grid point and re-arrange the equation. Assuming we know all the data at $t^n$, the only unknown will be ${\bf y}^{n+1}_j$, giving the explicit algorithm

\begin{align} {\bf y}^{n+1}_{j} &= {\bf y}^{n}_{j} + \Delta t \, {\bf f}^{n}_{j} - \frac{\Delta t}{2 \Delta x} \left( {\bf g}^{n}_{j+1} \frac{1}{2 \Delta x} \left( {\bf h}^{n}_{j+2} - {\bf h}^{n}_{j} \right) - {\bf g}^{n}_{j-1} \frac{1}{2 \Delta x} \left( {\bf h}^{n}_{j} - {\bf h}^{n}_{j-2} \right) \right) \\ &= {\bf y}^{n}_{j} + \Delta t \, {\bf f}^{n}_{j} - \frac{\Delta t}{4 \left( \Delta x \right)^2} \left( {\bf g}^{n}_{j+1} {\bf h}^{n}_{j+2} - \left( {\bf g}^{n}_{j+1} + {\bf g}^{n}_{j-1} \right) {\bf h}^{n}_{j} + {\bf g}^{n}_{j-1} {\bf h}^{n}_{j-2} \right) \end{align}

We'll implement that by writing a function that computes the update term (${\bf y}^{n+1}_j - {\bf y}^n_j$), choosing $n_i = 0.1, G = 0.1$:


In [ ]:
ni = 0.1
G = 0.1

def f(y):
    # Source term: the same for both species. Row 0 of y holds p,
    # row 1 holds n.
    p = y[0,:]
    n = y[1,:]
    f_vector = numpy.zeros_like(y)
    f_vector[:,:] = ni**2 - n*p + G
    return f_vector

def g(y):
    # g(y) = (p, n): the coefficient multiplying the inner derivative.
    p = y[0,:]
    n = y[1,:]
    g_vector = numpy.zeros_like(y)
    g_vector[0,:] = p
    g_vector[1,:] = n
    return g_vector

def h(y):
    # h(y) = (1/p, 1/n): the quantity inside the inner derivative.
    p = y[0,:]
    n = y[1,:]
    h_vector = numpy.zeros_like(y)
    h_vector[0,:] = 1.0/p
    h_vector[1,:] = 1.0/n
    return h_vector

def update_term(y, dt, dx):
    # Explicit update y^{n+1}_j - y^n_j from the algorithm above.
    # The stencil reaches two points either side of j, so only the
    # interior points [2:-2] are touched; the boundary conditions
    # take care of the rest.
    dydt = numpy.zeros_like(y)
    f_vector = f(y)
    g_vector = g(y)
    h_vector = h(y)
    dydt[:,2:-2] += dt * f_vector[:,2:-2]
    dydt[:,2:-2] -= dt/(4.0*dx**2)*(g_vector[:,3:-1]*h_vector[:,4:] -
                                    (g_vector[:,3:-1] + g_vector[:,1:-3])*h_vector[:,2:-2] +
                                    g_vector[:,1:-3]*h_vector[:,:-4])
    return dydt

Now set the initial data to be

\begin{align} p &= n_i \left(1 + 0.1 \sin(4 \pi x) \right), \\ n &= n_i \left(1 + 0.1 \sin(6 \pi x) \right). \end{align}

The spatial domain should be $[0, 1]$. The spatial step size should be $0.05$. The timestep should be $10^{-7}$. The evolution should be for $10^5$ steps, to $t=0.01$.

The crucial point is what happens at the boundaries. The grid below includes one ghost point beyond each end of the domain. To represent a Neumann boundary condition, where the normal derivative vanishes on the boundary, set the boundary and ghost points equal to the first interior data point, i.e. y[:,0] = y[:,1] = y[:,2] and y[:,-1] = y[:,-2] = y[:,-3].
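
These assignments can be wrapped in a small helper (a sketch; the name is our own, and the evolution loops below keep the assignments inline):

In [ ]:
def apply_neumann_bcs(y):
    # Copy the first interior value outwards so the differences
    # across each boundary vanish.
    y[:,1] = y[:,2]
    y[:,0] = y[:,1]
    y[:,-2] = y[:,-3]
    y[:,-1] = y[:,-2]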


In [ ]:
dx = 0.05
dt = 1e-7
Nsteps = 100000
x = numpy.arange(-dx,1+2*dx,dx)
y = numpy.zeros((2,len(x)))
y[0,:] = ni*(1.0+0.1*numpy.sin(4.0*numpy.pi*x))
y[1,:] = ni*(1.0+0.1*numpy.sin(6.0*numpy.pi*x))

In [ ]:
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y[0,:], label=r"$p$")
pyplot.plot(x, y[1,:], label=r"$n$")
pyplot.legend()
pyplot.xlabel(r"$x$")
pyplot.xlim(0, 1)
pyplot.show()

In [ ]:
for n in range(Nsteps):
    update = update_term(y, dt, dx)
    y += update
    # Re-impose the Neumann boundary conditions after each step.
    y[:,1] = y[:,2]
    y[:,0] = y[:,1]
    y[:,-2] = y[:,-3]
    y[:,-1] = y[:,-2]

In [ ]:
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y[0,:], label=r"$p$")
pyplot.plot(x, y[1,:], label=r"$n$")
pyplot.legend()
pyplot.xlabel(r"$x$")
pyplot.xlim(0, 1)
pyplot.show()

In [ ]:
dx = 0.01
dt = 1e-8
Nsteps = 100000
x = numpy.arange(-dx,1+2*dx,dx)
y = numpy.zeros((2,len(x)))
y[0,:] = ni*(1.0+0.1*numpy.sin(4.0*numpy.pi*x))
y[1,:] = ni*(1.0+0.1*numpy.sin(6.0*numpy.pi*x))
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(0.09,0.11))
line1, = ax.plot([], [], label="$p$")
line2, = ax.plot([], [], label="$n$")
pyplot.legend()
pyplot.close()

def init():
    ax.set_xlabel(r"$x$")
    line1.set_data([], [])
    line2.set_data([], [])
    return line1, line2

def update(i, y):
    # Advance 1000 timesteps per animation frame, re-imposing the
    # boundary conditions after each step.
    for n in range(1000):
        y += update_term(y, dt, dx)
        y[:,1] = y[:,2]
        y[:,0] = y[:,1]
        y[:,-2] = y[:,-3]
        y[:,-1] = y[:,-2]
    line1.set_data(x, y[0,:])
    line2.set_data(x, y[1,:])
    return line1, line2

animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=100, interval=100, blit=True)
Exercise

Increase the resolution and see how the solutions converge.

Exercise

Find the stability limit for this system, by experimentation.

Exercise

Using the profile you find, solve the BVP to find the current densities.