If there's anything that needs serious computing power, it's the solution of PDEs. However, you can go a long way towards building intuition about complex problems with simple numerical methods.
Let's take parts of the doped semiconductor model, concentrating on the time-dependent behaviour, simplify the constants, and restrict to one spatial dimension:
\begin{align} \frac{\partial p}{\partial t} + \frac{\partial}{\partial x} \left( p \frac{\partial}{\partial x} \left( \frac{1}{p} \right) \right) &= \left( n_i^2 - np \right) + G, \\ \frac{\partial n}{\partial t} + \frac{\partial}{\partial x} \left( n \frac{\partial}{\partial x} \left( \frac{1}{n} \right) \right) &= \left( n_i^2 - np \right) + G. \end{align}This is a pair of coupled PDEs for $n, p$ in terms of physical and material constants. In the full model the quasi-Fermi levels $E_{F_{n,p}}$ depend on $n, p$; here we have replaced them with terms proportional to $n^{-1}$ and $p^{-1}$.
We'll write this in the form
\begin{equation} \frac{\partial {\bf y}}{\partial t} + \frac{\partial}{\partial x} \left( {\bf g}({\bf y}) \frac{\partial}{\partial x} {\bf h}({\bf y}) \right) = {\bf f}({\bf y}). \end{equation}We used finite differencing when looking at IVPs: we introduced a grid of points $x_j$ in space and replaced derivatives with finite difference expressions. For example, we saw the forward difference approximation
\begin{equation} \left. \frac{\text{d} y}{\text{d} x} \right|_{x = x_j} = \frac{y_{j+1} - y_j}{\Delta x} + {\cal O} \left( \Delta x^1 \right), \end{equation}and the central difference approximation
\begin{equation} \left. \frac{\text{d} y}{\text{d} x} \right|_{x = x_j} = \frac{y_{j+1} - y_{j-1}}{2 \Delta x} + {\cal O} \left( \Delta x^2 \right). \end{equation}This extends to partial derivatives, and to more than one variable. We introduce a grid in time, $t^n$, and denote $y(t^n, x_j) = y^n_j$. Then we can do, say, forward differencing in time and central differencing in space:
\begin{align} \left. \frac{\partial y}{\partial t} \right|_{x = x_j, t = t^n} &= \frac{y^{n+1}_{j} - y^{n}_{j}}{\Delta t}, \\ \left. \frac{\partial y}{\partial x} \right|_{x = x_j, t = t^n} &= \frac{y^{n}_{j+1} - y^{n}_{j-1}}{2 \Delta x}. \end{align}To illustrate the simplest way of proceeding we'll look at the diffusion equation
\begin{equation} \frac{\partial y}{\partial t} - \frac{\partial^2 y}{\partial x^2} = 0. \end{equation}When finite differences are used, with the centred second derivative approximation
\begin{equation} \left. \frac{\text{d}^2 y}{\text{d} x^2} \right|_{x = x_j} = \frac{y_{j+1} + y_{j-1} - 2 y_{j}}{\Delta x^2} + {\cal O} \left( \Delta x^2 \right), \end{equation}we find the approximation
\begin{align} && \frac{y^{n+1}_{j} - y^{n}_{j}}{\Delta t} - \frac{y^{n}_{j+1} + y^{n}_{j-1} - 2 y^{n}_{j}}{\Delta x^2} &= 0 \\ \implies && y^{n+1}_{j} &= y^{n}_{j} + \frac{\Delta t}{\Delta x^2} \left( y^{n}_{j+1} + y^{n}_{j-1} - 2 y^{n}_{j} \right). \end{align}Given initial data and boundary conditions, this can be solved.
Let's implement the heat equation with homogeneous Dirichlet boundary conditions ($y(t, 0) = 0 = y(t, 1)$) and simple initial data ($y(0, x) = x (1 - x)$), using a spatial step size of $\Delta x = 10^{-2}$ and a time step of $\Delta t = 10^{-5}$, solving to $t = 0.1$ ($10000$ steps).
In [ ]:
from __future__ import division
import numpy
from matplotlib import pyplot
%matplotlib notebook
In [ ]:
dt = 1e-5
dx = 1e-2
x = numpy.arange(0,1+dx,dx)
y = x * (1 - x)   # initial data; already satisfies the boundary conditions
In [ ]:
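# A minimal sketch of the evolution loop: apply the explicit update
# derived above at every interior point, keeping the Dirichlet values
# fixed at the boundaries. Nsteps is a name introduced here;
# dt/dx^2 = 0.1, safely inside the stability limit discussed below.
Nsteps = 10000
for step in range(Nsteps):
    y[1:-1] = y[1:-1] + dt/dx**2 * (y[2:] + y[:-2] - 2.0*y[1:-1])
    y[0] = 0.0   # homogeneous Dirichlet boundary conditions
    y[-1] = 0.0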
In [ ]:
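# Plot the evolved profile (a sketch mirroring the plotting cells used
# later in the notebook).
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.show()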
In [ ]:
The solution looks good: it stays smooth, and the initial profile is diffusing nicely. Try something a bit more complex, such as $y(0, x) = \sin^4(4 \pi x)$, to see the diffusive effects:
In [ ]:
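# A sketch for the more complex initial data: reset y, evolve with the
# same explicit update, and plot the result.
y = numpy.sin(4.0*numpy.pi*x)**4
for step in range(Nsteps):
    y[1:-1] = y[1:-1] + dt/dx**2 * (y[2:] + y[:-2] - 2.0*y[1:-1])
    y[0] = 0.0
    y[-1] = 0.0
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.show()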
In [ ]:
In [ ]:
All the features are smoothing out, as expected.
However, we used a really small timestep to get these results; it would be less numerical work if we increased the timestep. Let's try only $100$ steps with $\Delta t = 10^{-4}$:
In [ ]:
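# A sketch of the larger-timestep run: the identical scheme, but now
# dt/dx^2 = 1, which violates the stability limit derived below.
dt = 1e-4
y = x * (1 - x)
for step in range(100):
    y[1:-1] = y[1:-1] + dt/dx**2 * (y[2:] + y[:-2] - 2.0*y[1:-1])
    y[0] = 0.0
    y[-1] = 0.0
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.show()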
This doesn't look good: the scheme is horribly unstable, with the results blowing up very fast.
The problem is that the size of the timestep really matters. Von Neumann stability calculations can be used to show that only when
\begin{equation} \frac{\Delta t}{\Delta x^2} < \frac{1}{2} \end{equation}are the numerical results stable, and hence trustworthy. This is a real problem when you want to improve accuracy by increasing the number of points, hence decreasing $\Delta x$: with $\Delta x = 10^{-3}$ we need $\Delta t < 5 \times 10^{-7}$ already!
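We can check the criterion directly for the two timesteps tried above (a quick sketch; $\Delta x$ is still $10^{-2}$ at this point):
In [ ]:
for trial_dt in (1e-5, 1e-4):
    ratio = trial_dt / dx**2
    print("dt = {}: dt/dx^2 = {}, stable: {}".format(trial_dt, ratio, ratio < 0.5))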
Finally, we can evaluate the general form of our PDE at a specific grid point and re-arrange the equation. Assuming we know all the data at $t^n$, the only unknowns will be at $t^{n+1}$, giving the algorithm
\begin{align} {\bf y}^{n+1}_{j} &= {\bf y}^{n}_{j} + \Delta t \, {\bf f}^{n}_{j} - \frac{\Delta t}{2 \Delta x} \left( {\bf g}^{n}_{j+1} \frac{1}{2 \Delta x} \left( {\bf h}^{n}_{j+2} - {\bf h}^{n}_{j} \right) - {\bf g}^{n}_{j-1} \frac{1}{2 \Delta x} \left( {\bf h}^{n}_{j} - {\bf h}^{n}_{j-2} \right) \right) \\ &= {\bf y}^{n}_{j} + \Delta t \, {\bf f}^{n}_{j} - \frac{\Delta t}{4 \left( \Delta x \right)^2} \left( {\bf g}^{n}_{j+1} {\bf h}^{n}_{j+2} - \left( {\bf g}^{n}_{j+1} + {\bf g}^{n}_{j-1} \right) {\bf h}^{n}_{j} + {\bf g}^{n}_{j-1} {\bf h}^{n}_{j-2} \right) \end{align}We'll implement that by writing a function that computes the update term (${\bf y}^{n+1}_j - {\bf y}^n_j$), choosing $n_i = 0.1, G = 0.1$:
In [ ]:
ni = 0.1
G = 0.1
def f(y):
    # Reaction term: recombination plus generation, identical for both carriers.
    p = y[0,:]
    n = y[1,:]
    f_vector = numpy.zeros_like(y)
    f_vector[:,:] = ni**2 - n*p + G
    return f_vector

def g(y):
    # Coefficient multiplying the inner derivative: g(y) = y.
    p = y[0,:]
    n = y[1,:]
    g_vector = numpy.zeros_like(y)
    g_vector[0,:] = p
    g_vector[1,:] = n
    return g_vector

def h(y):
    # Function inside the inner derivative: h(y) = 1/y.
    p = y[0,:]
    n = y[1,:]
    h_vector = numpy.zeros_like(y)
    h_vector[0,:] = 1.0/p
    h_vector[1,:] = 1.0/n
    return h_vector

def update_term(y, dt, dx):
    # Compute y^{n+1}_j - y^n_j from the formula above. Only points with
    # neighbours at j-2 and j+2 are updated; the boundary points are set
    # separately by the boundary conditions.
    f_n = f(y)
    g_n = g(y)
    h_n = h(y)
    dy = numpy.zeros_like(y)
    dy[:,2:-2] = dt * f_n[:,2:-2] - dt / (4.0*dx**2) * \
        (g_n[:,3:-1] * h_n[:,4:]
         - (g_n[:,3:-1] + g_n[:,1:-3]) * h_n[:,2:-2]
         + g_n[:,1:-3] * h_n[:,:-4])
    return dy
Now set the initial data to be
\begin{align} p &= n_i \left(1 + 0.1 \sin(4 \pi x) \right), \\ n &= n_i \left(1 + 0.1 \sin(6 \pi x) \right). \end{align}The spatial domain should be $[0, 1]$. The spatial step size should be $0.05$. The timestep should be $10^{-7}$. The evolution should be for $10^5$ steps, to $t=0.01$.
The crucial point is what happens at the boundaries. To discretely represent a Neumann boundary condition, where the normal derivative vanishes on the boundary, set the ghost and boundary points equal to the first interior data point, i.e. y[:,0] = y[:,1] = y[:,2] and y[:,-1] = y[:,-2] = y[:,-3].
In [ ]:
dx = 0.05
dt = 1e-7
Nsteps = 100000  # 10^5 steps of dt = 10^-7, evolving to t = 0.01
x = numpy.arange(-dx,1+2*dx,dx)  # one ghost point beyond each boundary
y = numpy.zeros((2,len(x)))
y[0,:] = ni*(1.0+0.1*numpy.sin(4.0*numpy.pi*x))
y[1,:] = ni*(1.0+0.1*numpy.sin(6.0*numpy.pi*x))
In [ ]:
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y[0,:], label=r"$p$")
pyplot.plot(x, y[1,:], label=r"$n$")
pyplot.legend()
pyplot.xlabel(r"$x$")
pyplot.xlim(0, 1)
pyplot.show()
In [ ]:
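# A sketch of the evolution: apply update_term for Nsteps steps,
# reimposing the Neumann boundary copies described above after each one.
for step in range(Nsteps):
    y = y + update_term(y, dt, dx)
    y[:,0] = y[:,1] = y[:,2]
    y[:,-1] = y[:,-2] = y[:,-3]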
In [ ]:
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y[0,:], label=r"$p$")
pyplot.plot(x, y[1,:], label=r"$n$")
pyplot.legend()
pyplot.xlabel(r"$x$")
pyplot.xlim(0, 1)
pyplot.show()