Boundary Value Problems

In Giles Richardson's notes a simple one-dimensional inorganic solar cell model is presented. This model includes equations for the hole current density $j_p$ and the electron current density $j_n$ in terms of the electron and hole densities $n$ and $p$, the intrinsic carrier density $n_i$, the thermal generation rate $G$, and material-dependent constants. After non-dimensionalization we end up with equations of the form

\begin{align} \frac{\text{d} j_p}{\text{d} x} &= \Theta \left( n_i^2 - n p \right) + G, \\ \frac{\text{d} j_n}{\text{d} x} &= -\Theta \left( n_i^2 - n p \right) - G. \end{align}

These two ODEs require two boundary conditions, which are (thanks to the symmetry of the problem)

\begin{align} \left. \left( j_p - j_n \right) \right|_{x=0} &=0, \\ \left. j_n \right|_{x=1} &= 0. \end{align}

We will assume that $n_i$ and $G$ are given constants, and that the charge densities $n, p$ have been found as functions of space already: in this case we'll use

\begin{align} n &= \exp(-x-x^2/100) \left(1 - x^2 \right)^2, \\ p & = \exp(x-x^2/100) \left(1 - x^2 \right)^2. \end{align}

In [ ]:
from __future__ import division
import numpy
from matplotlib import pyplot
%matplotlib notebook

In [ ]:
def n(x):
    return numpy.exp(-x) * numpy.exp(-x**2/100) * (1-x**2)**2
def p(x):
    return numpy.exp(x) * numpy.exp(-x**2/100) * (1-x**2)**2

In [ ]:
x = numpy.linspace(0, 1)

pyplot.plot(x, n(x), label=r"$n$")
pyplot.plot(x, p(x), label=r"$p$")
pyplot.xlabel(r"$x$")
pyplot.legend()

This is a Boundary Value Problem (BVP): an ordinary differential equation whose boundary conditions are given at different points, here at $x=0$ and $x=1$.

Boundary value problems can be problematic: even when properly posed (as many boundary conditions as equations, a reasonable domain) they may have no solution, a unique solution, or multiple, even infinitely many, solutions. Solving them numerically adds further difficulty. However, when solutions exist, it is still perfectly feasible to find them.


We can reuse much of the technology and many of the methods we have already seen to solve boundary value problems. This relies on one key observation: if we solve the initial value problem with the same differential equation, choosing initial data that satisfies the BVP's boundary conditions at the start, and the resulting solution also satisfies the BVP's boundary conditions at the end, then that solution solves the BVP.

To phrase that for the problem above: if we choose a value $J$ for $j_p$ at $x=0$, then the boundary condition at $x=0$ tells us that $j_n(0)=J$ as well. We then solve the initial value problem

\begin{equation} \frac{\text{d}}{\text{d}x} \begin{pmatrix} j_p \\ j_n \end{pmatrix} = \begin{pmatrix} \Theta \left( n_i^2 - n p \right) + G \\ -\Theta \left( n_i^2 - n p \right) - G \end{pmatrix}, \qquad \begin{pmatrix} j_p \\ j_n \end{pmatrix}(0) = \begin{pmatrix} J \\ J \end{pmatrix}. \end{equation}

This gives us both $j_p$ and $j_n$ as functions of $x$. Our solutions clearly depend on the initial value $J$. If our value of $J$ is such that $j_n(1) = 0$ then we match the boundary condition at $x=1$. We've then built a solution that solves the differential equation, and matches all the boundary conditions: it is a solution of the BVP.
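To make this concrete, here is a sketch that solves the initial value problem for a single trial value $J$ and reports $j_n(1; J)$, using the parameter values $\Theta = 0.9$, $G = 1$, $n_i = 0.6$ suggested below (the helper name `rhs` and the trial value $J = 2$ are our own choices):

```python
import numpy
from scipy.integrate import odeint

Theta, G, n_i = 0.9, 1.0, 0.6    # parameter values suggested later in the text

def n(x):
    return numpy.exp(-x - x**2/100) * (1 - x**2)**2

def p(x):
    return numpy.exp(x - x**2/100) * (1 - x**2)**2

def rhs(j, x):
    # d/dx of (j_p, j_n): the two right-hand sides differ only in sign.
    r = Theta * (n_i**2 - n(x) * p(x)) + G
    return [r, -r]

J = 2.0                           # trial value for j_p(0) = j_n(0)
x = numpy.linspace(0, 1, 101)
j = odeint(rhs, [J, J], x)        # columns: j_p, j_n
print("j_n(1; J={}) = {}".format(J, j[-1, 1]))
```

For this trial value $j_n(1; J)$ is not zero, so the boundary condition at $x=1$ is not satisfied and $J$ must be adjusted.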

We can solve the initial value problem using any of the techniques seen earlier: here we'll use `odeint`. We write the solution as $j_p(x;J)$ and $j_n(x;J)$ to emphasise its dependence on the initial data. We can then evaluate this solution at $x=1$: we want

\begin{equation} F(J) = j_n(1;J) = 0. \end{equation}

This is a nonlinear root-finding problem, where evaluating the function whose root we are trying to find involves solving an initial value problem.

Let's implement this, assuming $\Theta = 0.9, G = 1, n_i = 0.6$. The critical value of $J$ is between $0$ and $5$.

In [ ]:

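One possible implementation is sketched below (the helper names `rhs` and `F` are our own; `brentq` is SciPy's bracketing root finder, applied on the interval $[0, 5]$ given above):

```python
import numpy
from scipy.integrate import odeint
from scipy.optimize import brentq

Theta, G, n_i = 0.9, 1.0, 0.6

def n(x):
    return numpy.exp(-x - x**2/100) * (1 - x**2)**2

def p(x):
    return numpy.exp(x - x**2/100) * (1 - x**2)**2

def rhs(j, x):
    # d/dx of (j_p, j_n).
    r = Theta * (n_i**2 - n(x) * p(x)) + G
    return [r, -r]

def F(J):
    """Solve the IVP with j_p(0) = j_n(0) = J and return j_n(1; J)."""
    x = numpy.linspace(0, 1, 101)
    j = odeint(rhs, [J, J], x)
    return j[-1, 1]

J_star = brentq(F, 0, 5)          # the critical value lies between 0 and 5
print("J =", J_star, " F(J) =", F(J_star))
```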
In [ ]:

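With the critical value of $J$ in hand, the full solution of the BVP can be reconstructed and plotted. Again, this is one sketch of how to do it, not the only way:

```python
import numpy
from matplotlib import pyplot
from scipy.integrate import odeint
from scipy.optimize import brentq

Theta, G, n_i = 0.9, 1.0, 0.6

def n(x):
    return numpy.exp(-x - x**2/100) * (1 - x**2)**2

def p(x):
    return numpy.exp(x - x**2/100) * (1 - x**2)**2

def rhs(j, x):
    r = Theta * (n_i**2 - n(x) * p(x)) + G
    return [r, -r]

x = numpy.linspace(0, 1, 101)

def F(J):
    return odeint(rhs, [J, J], x)[-1, 1]

J_star = brentq(F, 0, 5)
j = odeint(rhs, [J_star, J_star], x)   # the BVP solution

pyplot.plot(x, j[:, 0], label=r"$j_p$")
pyplot.plot(x, j[:, 1], label=r"$j_n$")
pyplot.xlabel(r"$x$")
pyplot.legend()
```

The plot should show $j_p - j_n$ vanishing at $x=0$ and $j_n$ vanishing at $x=1$, confirming that both boundary conditions are satisfied.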

Repeat this using Euler's method and bisection to see how much difference it makes.
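A sketch of the same calculation with forward Euler and interval bisection follows (the step count `N`, the tolerance, and the helper names `F_euler` and `bisection` are arbitrary choices of ours):

```python
import numpy

Theta, G, n_i = 0.9, 1.0, 0.6

def n(x):
    return numpy.exp(-x - x**2/100) * (1 - x**2)**2

def p(x):
    return numpy.exp(x - x**2/100) * (1 - x**2)**2

def F_euler(J, N=100):
    """Forward-Euler solve of the IVP; returns j_n(1; J)."""
    dx = 1.0 / N
    jp, jn = J, J
    for i in range(N):
        x = i * dx
        r = Theta * (n_i**2 - n(x) * p(x)) + G
        jp, jn = jp + dx * r, jn - dx * r
    return jn

def bisection(f, a, b, tol=1e-10):
    """Interval bisection; assumes f(a) and f(b) differ in sign."""
    f_a = f(a)
    while b - a > tol:
        c = 0.5 * (a + b)
        f_c = f(c)
        if f_a * f_c <= 0:    # root lies in [a, c]
            b = c
        else:                 # root lies in [c, b]
            a, f_a = c, f_c
    return 0.5 * (a + b)

J_euler = bisection(F_euler, 0, 5)
print("Euler/bisection estimate:", J_euler)
```

With only 100 Euler steps the estimate should agree with the `odeint`/`brentq` value to a few parts in a thousand; the first-order accuracy of Euler's method dominates the error.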