Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
The purpose of this lecture is to review the bare-essential background for our discussion of numerical methods for partial differential equations (PDEs). Most of what we will use is general knowledge of PDE classification, solution techniques for each type of PDE, and some knowledge of the solution properties. This also includes questions related to well-posedness, boundary conditions, and other issues.
First, some notation to get us started: we will use the notation $$ \frac{\partial u}{\partial x} = u_x $$ to denote the partial derivative of the function $u$ with respect to the variable $x$.
Higher-order or mixed derivatives are expressed similarly as $$ \frac{\partial^2 u}{\partial x^2} = u_{xx} ~~~~~~ \frac{\partial^2 u}{\partial x \partial y} = u_{xy} $$ noting that in general the order of the derivatives can matter.
Here we have singled out variables usually associated with Cartesian space but it will also be useful to consider polar-coordinate derivatives such as $$ \frac{\partial u}{\partial \theta} = u_\theta $$ or in time $$ \frac{\partial u}{\partial t} = u_t. $$
In the general classification framework we consider a second-order linear PDE of the form $$ a u_{xx} + b u_{xy} + c u_{yy} + \cdots = 0 $$ where $x$ or $y$ could also be $t$ or any other variable. Formally taking a Fourier transform allows us to transform this general equation into $$ P(X,Y) = a X^2 + b X Y + c Y^2 + \cdots $$ which looks like a general, second order polynomial.
The terminology for PDE classification comes from the fact that the equation above describes general conic sections, which are often classified as elliptic, parabolic, or hyperbolic based on their discriminant $$ b^2 - 4 a c. $$
We then classify the PDE according to the sign of the discriminant: the PDE is elliptic if $b^2 - 4 a c < 0$, parabolic if $b^2 - 4 a c = 0$, and hyperbolic if $b^2 - 4 a c > 0$.
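As a small sketch (the function name and the example coefficients are just for illustration), the classification can be automated directly from the discriminant:

```python
# A minimal sketch: classify a constant-coefficient second-order PDE
# a*u_xx + b*u_xy + c*u_yy + ... = 0 by the sign of its discriminant b^2 - 4ac.

def classify_second_order(a, b, c):
    """Return the type of the PDE based on the discriminant b^2 - 4ac."""
    discriminant = b**2 - 4.0 * a * c
    if abs(discriminant) < 1e-14:
        return "parabolic"
    elif discriminant < 0.0:
        return "elliptic"
    else:
        return "hyperbolic"

# Examples: Laplace's equation, the heat equation, and the wave equation
print(classify_second_order(1.0, 0.0, 1.0))   # elliptic   (u_xx + u_yy = 0)
print(classify_second_order(1.0, 0.0, 0.0))   # parabolic  (u_t = u_xx has no u_tt term)
print(classify_second_order(1.0, 0.0, -1.0))  # hyperbolic (u_tt = u_xx)
```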
The reason why we care about classifications is that often there is a class of numerical methods that works well for one classification but not for the others. We will investigate more on why and how this works as we introduce each type of PDE, but for now let's examine some of the basic examples of PDEs from each classification.
Consider for example Laplace's equation $$ u_{xx} + u_{yy} = 0. $$ Identifying the coefficients from above as $a = 1$, $b = 0$, and $c = 1$, the discriminant tells us that Laplace's equation is elliptic in character since $$ b^2 - 4 a c = -4. $$
One of the more useful and more general ways to think about PDE classification is to examine how information propagates through the domain. For hyperbolic PDEs information travels at finite speeds (the group velocity is real and finite). Parabolic PDEs describe information that propagates at infinite speed but whose influence decays with distance, as in diffusion, while elliptic PDEs describe equilibrium problems in which the solution at every point depends on the data everywhere in the domain.
Fourier analysis is one of the basic tools for analyzing and solving linear PDEs. For finite domains we usually use Fourier series to represent the solution, which effectively uses the eigenfunctions of the differential operator as a basis for the solution. This is commonly introduced in PDE courses via separation of variables but there are other approaches as well. For infinite domains we will use Fourier transforms instead. Here we will briefly cover some of the fundamentals of Fourier analysis as they will be useful for analyzing some of the numerical methods we will employ.
The Fourier series of a function comes in a number of different flavors depending on the type of function we are considering. Recall that $\sin x$ is an odd function and $\cos x$ is an even function. On finite intervals we can therefore use the Fourier $\sin$ series for odd functions and the Fourier $\cos$ series for even functions. If the function is neither odd nor even then we can use the full series on $(-L, L)$, often written as $$ f(x) = a_0 + \sum^\infty_{n=1} a_n \cos \left( \frac{n \pi x}{L} \right) + \sum^\infty_{n=1} b_n \sin \left( \frac{n \pi x}{L} \right) $$ where $$\begin{aligned} a_0 &= \frac{1}{2L} \int^L_{-L} f(x) dx \\ a_n &= \frac{1}{L} \int^L_{-L} f(x) \cos \frac{n \pi x}{L} dx \\ b_n &= \frac{1}{L} \int^L_{-L} f(x) \sin \frac{n \pi x}{L} dx. \end{aligned}$$ These coefficients are projections of the function $f(x)$ onto the Fourier basis of $\cos$ and $\sin$ functions, which form an orthogonal basis.
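As a sketch of how these projections can be computed in practice (the function $f$, the interval half-width $L$, and the truncation level below are illustrative choices), we can approximate the coefficients by numerical quadrature and reconstruct a truncated series:

```python
import numpy as np

# Approximate the full Fourier series coefficients of a sample function by a
# simple Riemann-sum quadrature and reconstruct the truncated series.

L = 1.0
N = 20                                   # number of retained modes
x = np.linspace(-L, L, 1000)
dx = x[1] - x[0]
f = x + x**2                             # neither odd nor even

a_0 = np.sum(f) * dx / (2.0 * L)
a_n = np.array([np.sum(f * np.cos(n * np.pi * x / L)) * dx / L for n in range(1, N + 1)])
b_n = np.array([np.sum(f * np.sin(n * np.pi * x / L)) * dx / L for n in range(1, N + 1)])

# Reconstruct the truncated series and compare with f
f_series = a_0 * np.ones(x.shape)
for n in range(1, N + 1):
    f_series += a_n[n - 1] * np.cos(n * np.pi * x / L)
    f_series += b_n[n - 1] * np.sin(n * np.pi * x / L)

print("max error away from the endpoints:", np.max(np.abs(f - f_series)[100:-100]))
```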
The Fourier transform of a function $v(x)$ is defined by $$ \hat{v}(\xi) = \frac{1}{\sqrt{2 \pi}} \int^\infty_{-\infty} v(x) e^{-i \xi x} dx. $$ In order for the transform to exist we require that the function $v \in L^2$, that is $$ ||v||_2 = \left ( \int^\infty_{-\infty} |v(x)|^2 dx \right )^{1/2} < \infty. $$ The function $\hat{v}(\xi) \in L^2$ since from Parseval's identity we have $$ ||v(x)||_2 = ||\hat{v}(\xi)||_2. $$ We can also use the inverse transform to get back the original function: $$ v(x) = \frac{1}{\sqrt{2 \pi}} \int^\infty_{-\infty} \hat{v}(\xi) e^{i \xi x} d\xi. $$
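As a quick numerical sanity check of Parseval's identity (the Gaussian is a convenient test function since, under the symmetric convention above, $e^{-x^2/2}$ is its own Fourier transform; the grid below is an illustrative choice):

```python
import numpy as np

# Verify ||v||_2 = ||v_hat||_2 numerically for v(x) = exp(-x^2 / 2), whose Fourier
# transform under the symmetric convention is v_hat(xi) = exp(-xi^2 / 2).

x = np.linspace(-20.0, 20.0, 10001)
dx = x[1] - x[0]
v = np.exp(-x**2 / 2.0)
v_hat = np.exp(-x**2 / 2.0)        # same functional form, evaluated on the xi grid

norm_v = np.sqrt(np.sum(np.abs(v)**2) * dx)
norm_v_hat = np.sqrt(np.sum(np.abs(v_hat)**2) * dx)

print(norm_v, norm_v_hat)           # both close to pi**0.25 ~ 1.3313
```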
Elliptic equations are most often boundary value problems on some domain. Above we mentioned Laplace's equation, which is a special case of another classic elliptic equation, the Poisson problem $$ \nabla^2 u = f. $$ These types of problems are often found when we want to find the steady state of some time-dependent problem such as the heat equation. They can also arise within time-dependent problems where some of the dynamics are much faster than the time scales of the rest of the problem; one example is the incompressibility condition used in the Navier-Stokes equations (the Poisson problem for the pressure).
The essential behavior of elliptic boundary value problems is that the solution in the interior of the domain $\Omega$ is determined completely by the boundary conditions and the function $f$. This is important when considering numerical methods for these types of problems, as we often must solve large, sparse systems of linear equations that couple every point to every other point.
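As a preview of what such a system looks like, here is a minimal sketch (using a standard second-order finite difference discretization as an illustrative choice, not something derived in these notes) of the sparse linear system produced by a one-dimensional Poisson problem:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# Sketch: solve u'' = f on (0, 1) with u(0) = alpha, u(1) = beta using a
# second-order centered difference, illustrating the sparse system that results.

m = 100                                 # number of interior points (illustrative)
h = 1.0 / (m + 1)
x = np.linspace(h, 1.0 - h, m)
alpha, beta = 0.0, 1.0                  # boundary values
f = np.sin(2.0 * np.pi * x)             # sample right-hand side

# Tridiagonal (sparse) second-difference operator
A = sparse.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m), format="csc") / h**2

# Boundary data enters through the right-hand side
b = f.copy()
b[0] -= alpha / h**2
b[-1] -= beta / h**2

u = spsolve(A, b)                       # interior solution determined by f and the BCs
print(u.max())
```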
In general a linear operator $L$ of the form $$ L = \sum^n_{i,j=1} A_{ij} \frac{\partial^2}{\partial x_i \partial x_j} + \sum^n_{i=1} B_i \frac{\partial}{\partial x_i} + C $$ can be termed elliptic if the matrix $A$ is positive or negative definite (i.e. $v^T A v$ has the same sign for $v \neq 0$ and $v \in \mathbb{R}^n$).
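As a small sketch (the matrix below is an arbitrary illustrative example), this definiteness condition is easy to check numerically via the eigenvalues of $A$:

```python
import numpy as np

# Check whether the second-order coefficient matrix A is definite (all eigenvalues
# of the same sign), which is the ellipticity condition stated above.

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # illustrative symmetric coefficient matrix

eigenvalues = np.linalg.eigvalsh(A)
is_elliptic = np.all(eigenvalues > 0.0) or np.all(eigenvalues < 0.0)
print(eigenvalues, is_elliptic)         # [1. 3.] True -> elliptic
```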
As an example of solving an elliptic PDE let's consider Laplace's equation on a rectangular domain $$\begin{aligned} &u_{xx} + u_{yy} = 0 ~~~ \Omega = (0, L) \times (0, H) \\ &u(x,0) = f_1(x) \\ &u(x,H) = f_2(x) \\ &u(0,y) = g_1(y) \\ &u(L,y) = g_2(y). \end{aligned}$$ Since we have non-homogeneous boundary conditions on each side of the rectangle we cannot directly use separation of variables and instead break the problem into 4 sub-problems so that $$ u(x,y) = u_1(x,y) + u_2(x,y) + u_3(x,y) + u_4(x,y) $$ where each $u_i(x,y)$ retains one of the non-homogeneous sides of the rectangle while the others are set to 0. For instance, for $u_1(x,y)$ we have $$\begin{aligned} &\nabla^2 u_1 = 0 ~~~ \Omega = (0, L) \times (0, H) \\ &u_1(x,0) = f_1(x) \\ &u_1(x,H) = 0 \\ &u_1(0,y) = 0 \\ &u_1(L,y) = 0. \end{aligned}$$ Using separation of variables we assume a solution of the form $u_1(x,y) = h(x) \phi(y)$ which, substituted into the PDE, gives $$\begin{aligned} &\phi \frac{\text{d}^2 h}{\text{d} x^2} + h \frac{\text{d}^2 \phi}{\text{d} y^2} = 0 \\ &\frac{1}{h} \frac{\text{d}^2 h}{\text{d} x^2} = - \frac{1}{\phi} \frac{\text{d}^2 \phi}{\text{d} y^2} = -\lambda \Rightarrow \\ &h''(x) + \lambda h(x) = 0 ~~~~~~~ \phi''(y) - \lambda \phi(y) = 0. \end{aligned}$$ The homogeneous boundary conditions for the two functions are $$ h(0) = h(L) = 0 ~~~~ \phi(H) = 0, $$ while the non-homogeneous condition $u_1(x,0) = f_1(x)$ will be imposed later on the full series. First solving the easier homogeneous BVP for $h(x)$ we find $$ h(x) = A \cos (\sqrt{\lambda} x) + B \sin(\sqrt{\lambda} x) $$ which using the BCs leads us to $$\begin{aligned} &h(0) = 0 = A \\ &h(L) = 0 = B \sin(\sqrt{\lambda} L). \end{aligned}$$ For the latter we have two choices to satisfy the BC: we could take $B = 0$, leading to the trivial solution, or require that $$ \sqrt{\lambda} L = n \pi. $$ Using the latter condition leads to the eigenvalues $$ \lambda_n = \left(\frac{n \pi}{L}\right)^2, ~~~~ n = 1,2,\ldots $$ so that we have the solutions $$ h_n(x) = B_n \sin\left(\frac{n \pi x}{L}\right). $$
For the $\phi$ equation we have solutions of the form $$ \phi(y) = A e^{\sqrt{\lambda} y} + B e^{-\sqrt{\lambda} y} $$ or more familiarly $$ \phi(y) = A \cosh(\sqrt{\lambda} y) + B \sinh(\sqrt{\lambda} y). $$ To satisfy the homogeneous boundary condition we need to modify these solutions slightly so that they are shifted $$ \phi(y) = A \cosh \left(\sqrt{\lambda} (y - H)\right) + B \sinh \left(\sqrt{\lambda} (y - H)\right) $$ which allows us to satisfy the homogeneous BC by $$\begin{aligned} \phi(H) = 0 &= A \cosh \left(\sqrt{\lambda} (H - H)\right) + B \sinh \left(\sqrt{\lambda} (H - H)\right) \\ &=A \cosh (0) + B \sinh (0) \\ &=A \end{aligned}$$ and we are then left with $$ \phi(y) = B \sinh \left(\frac{n \pi}{L} (y - H)\right), $$ where we have used the eigenvalues $\sqrt{\lambda_n} = n \pi / L$ found above.
Combining the two solutions we then have $$ u_{1,n}(x,y) = B_n \sin\left(\frac{n \pi x}{L}\right) \sinh \left(\frac{n \pi}{L} (y - H)\right) $$ and by superposition we have $$ u_1(x,y) = \sum^\infty_{n=1} B_n \sin\left(\frac{n \pi x}{L}\right) \sinh \left(\frac{n \pi}{L} (y - H)\right). $$ Now, to satisfy the last boundary condition and find the coefficients $B_n$ we have $$ u_1(x,0) = f_1(x) = \sum^\infty_{n=1} -B_n \sin\left(\frac{n \pi x}{L}\right) \sinh \left(\frac{n \pi H} {L}\right) $$ and using the orthogonality of the $\sin$ functions we have $$\begin{aligned} \int^L_0 f_1(x) \sin\left(\frac{m \pi x}{L}\right) dx &= \sum^\infty_{n=1} -B_n \sinh \left(\frac{n \pi H}{L} \right ) \int^L_0 \sin\left(\frac{n \pi x}{L}\right) \sin\left(\frac{m \pi x}{L}\right) dx \\ \int^L_0 f_1(x) \sin\left(\frac{m \pi x}{L}\right) dx &= -B_m \frac{L}{2} \sinh \left(\frac{m \pi H}{L} \right ) \\ B_m &= \frac{-2}{L \sinh \left(\frac{m \pi H}{L} \right )} \int^L_0 f_1(x) \sin\left(\frac{m \pi x}{L}\right) dx. \end{aligned}$$ We have then found the solution to the $u_1$ component of the general problem above.
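As a sketch of evaluating this solution numerically (the choices of $f_1$, $L$, $H$, and the number of retained modes are purely illustrative), we can approximate the coefficients $B_n$ by quadrature and sum a truncated series:

```python
import numpy as np

# Evaluate the truncated series solution u_1 found above on a grid and check that
# the boundary condition at y = 0 is reproduced.

L, H = 1.0, 1.0
N = 50                                    # number of retained modes
x = np.linspace(0.0, L, 101)
y = np.linspace(0.0, H, 101)
X, Y = np.meshgrid(x, y)

f_1 = lambda x: x * (L - x)               # illustrative boundary data along y = 0

# Coefficients B_n from the projection onto sin(n pi x / L)
dx = x[1] - x[0]
B = np.empty(N)
for n in range(1, N + 1):
    integral = np.sum(f_1(x) * np.sin(n * np.pi * x / L)) * dx
    B[n - 1] = -2.0 / (L * np.sinh(n * np.pi * H / L)) * integral

# Truncated series for u_1(x, y)
u_1 = np.zeros(X.shape)
for n in range(1, N + 1):
    u_1 += B[n - 1] * np.sin(n * np.pi * X / L) * np.sinh(n * np.pi * (Y - H) / L)

# Sanity check: at y = 0 the series should reproduce f_1(x)
print(np.max(np.abs(u_1[0, :] - f_1(x))))
```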
Parabolic equations are similar to elliptic equations; however, they often also include time dependence (this is not always the case but it will be for our considerations). We can for instance take any elliptic linear operator as defined above and create a parabolic equation by writing $$ u_t - Lu = f. $$ The classic example of a parabolic equation is the heat (or diffusion) equation defined as $$ u_t = \nabla^2 u. $$ These PDEs require both an initial condition $u(x, 0)$ as well as boundary conditions on $\partial \Omega$.
Similar to elliptic problems, we will again use Fourier series to solve parabolic equations. Let us consider our model problem, the heat equation, in one spatial dimension: $$\begin{aligned} &u_t = \kappa u_{xx} ~~~ \Omega = (0, L) \times (0, T) \\ &u(x, 0) = f(x) \\ &u(0, t) = 0 ~~~ \text{and} \\ &u(L, t) = 0. \end{aligned}$$
Using separation of variables we assume $u(x,t) = G(t) \phi(x)$ and can then write the two ODEs $$\begin{aligned} &\frac{\text{d} G}{\text{d} t} = -\kappa \lambda G \\ &\phi''(x) + \lambda \phi(x) = 0. \end{aligned}$$ The time dependent ODE has solutions of the form $$ G(t) = C e^{-\lambda \kappa t}. $$ The BVP for $\phi$ has solutions of the form $$ \phi(x) = A \cos \left( \sqrt{\lambda} x \right) + B \sin \left( \sqrt{\lambda} x \right). $$ Using the boundary conditions we find that $A = 0$ and, for non-trivial solutions, that $$ \lambda = \left( \frac{n \pi}{L} \right )^2 $$ leading to solutions of the form $$ u_n(x,t) = B_n e^{-\left(\frac{n \pi}{L}\right)^2 \kappa t} \sin \left( \frac{n \pi x}{L} \right) $$ and by superposition we have $$ u(x,t) = \sum^\infty_{n=1} B_n e^{-\left(\frac{n \pi}{L}\right)^2 \kappa t} \sin \left( \frac{n \pi x}{L} \right). $$ Evaluating the initial condition we have $$ u(x,0) = f(x) = \sum^\infty_{n=1} B_n \sin \left( \frac{n \pi x}{L} \right) $$ which gives us the $B_n$ by again using the orthogonality of the $\sin$ functions $$\begin{aligned} \int^L_0 f(x) \sin \left( \frac{m \pi x}{L} \right) dx &= \sum^\infty_{n=1} B_n \int^L_0\sin \left( \frac{n \pi x}{L} \right) \sin \left( \frac{m \pi x}{L} \right) dx \\ \int^L_0 f(x) \sin \left( \frac{m \pi x}{L} \right) dx &= B_m \frac{L}{2} \\ B_m &= \frac{2}{L} \int^L_0 f(x) \sin \left( \frac{m \pi x}{L} \right) dx, \end{aligned}$$ thereby solving the PDE.
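As a sketch of how this series solution can be evaluated in practice (the initial condition, diffusivity, and truncation level below are illustrative choices), we can compute the coefficients $B_n$ by numerical quadrature and sum a truncated series:

```python
import numpy as np

# Evaluate the truncated Fourier sine series solution of the heat equation for a
# step-like initial condition.

L = 1.0
kappa = 0.1
N = 100                                   # retained modes
x = np.linspace(0.0, L, 201)
dx = x[1] - x[0]
f = lambda x: np.where(np.abs(x - 0.5 * L) < 0.25 * L, 1.0, 0.0)   # step initial data

# Sine-series coefficients of the initial condition
B = np.array([2.0 / L * np.sum(f(x) * np.sin(n * np.pi * x / L)) * dx
              for n in range(1, N + 1)])

def u(x, t):
    """Truncated series solution u(x, t)."""
    total = np.zeros(x.shape)
    for n in range(1, N + 1):
        total += (B[n - 1] * np.exp(-(n * np.pi / L)**2 * kappa * t)
                  * np.sin(n * np.pi * x / L))
    return total

print(u(x, 0.1).max())                    # the step smooths out and decays in time
```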
Hyperbolic equations often arise as conservation or balance laws. For instance the conservation of momentum (Newton's second law) is an example of a balance law. Similarly, energy conservation can be written as a hyperbolic equation. The classic hyperbolic equation is of course the wave equation $$ u_{tt} - c^2 u_{xx} = 0; $$ however, when considering numerical methods for hyperbolic equations we often decompose second-order PDEs into a system of first-order PDEs such as $$ u_t + A u_x = 0 $$ where now $A$ is a matrix. In this case the PDE is hyperbolic if the eigenvalues $\lambda$ of the matrix $A$ are real and the matrix is diagonalizable.
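As a small sketch of this criterion (the coefficient matrix below is that of linear acoustics with illustrative parameter values, not something derived in these notes), we can check hyperbolicity numerically by examining the eigenvalues and eigenvectors of $A$:

```python
import numpy as np

# Check hyperbolicity of u_t + A u_x = 0 by verifying that A has real eigenvalues
# and a complete set of eigenvectors.

K, rho = 1.0, 1.0                          # bulk modulus and density (illustrative)
A = np.array([[0.0, K],
              [1.0 / rho, 0.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
is_hyperbolic = (np.all(np.isreal(eigenvalues))
                 and np.linalg.matrix_rank(eigenvectors) == A.shape[0])
print(eigenvalues, is_hyperbolic)          # eigenvalues +/- sqrt(K / rho), True
```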
The most common approach to solving hyperbolic PDEs is to find the characteristics of the given problem and draw, in the $x$-$t$ plane, the places where something interesting might be happening. If the equation is non-linear this can lead to shocks (discontinuities) even when the initial condition is smooth. This will be problematic for us numerically, as we will see.
The basic approach to the method of characteristics employs the assumption that our solution is of the form $$ u(x,t) = u(x(t), t), $$ in other words that the solution can be described along curves $x(t)$ parameterized by the single variable $t$. Let us suppose that we are trying to find the solution to $$ u_t + c(x,t,u) u_x = 0. $$ If we take the full (or total) derivative of the solution along such a curve we find $$ \frac{\text{d} u}{\text{d} t} = \frac{\partial u}{\partial t} + \frac{\text{d} x}{\text{d} t} \frac{\partial u}{\partial x} $$ which, compared with the original equation, leads us to the conclusion that our original PDE can be rewritten as two ODEs $$\begin{aligned} &\frac{\text{d} u}{\text{d} t} = 0 ~~~\text{and}\\ &\frac{\text{d} x}{\text{d} t} = c(x,t,u). \end{aligned}$$ This leads us to the conclusion that $u(x(t),t)$ is constant along the curves $x(t)$ and in fact $$ u(x(t),t) = u(x_0, 0). $$ Solving the PDE then reduces to solving for the characteristic curves.
As a simple example let's solve the linear advection equation on an infinite domain $$\begin{aligned} &u_t + c u_x = 0, ~~~ \Omega = (-\infty, \infty) \times (0, T) \\ &u(x,0) = f(x) ~~~ \text{and} \\ &u_x(\pm \infty, t) \rightarrow 0. \end{aligned}$$ The characteristic equation is $$ \frac{\text{d} x}{\text{d} t} = c $$ whose solution is $x(t) = c t + x_0$. We can then draw the characteristics in the $x$-$t$ plane as a family of parallel lines
where the equations for the lines are $x(t) = c t + x_0$.
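As a quick illustration, a few of these characteristics can be plotted directly; the value of $c$ and the range of starting points $x_0$ below are arbitrary choices for the sketch.

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot a few characteristics x(t) = c t + x_0 of the advection equation in the x-t plane.

c = 1.0
t = np.linspace(0.0, 1.0, 100)
for x_0 in np.linspace(-2.0, 2.0, 11):
    plt.plot(x_0 + c * t, t, 'b')

plt.xlabel("$x$")
plt.ylabel("$t$")
plt.title("Characteristics of $u_t + c u_x = 0$")
plt.show()
```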
For nonlinear equations, i.e. ones where the speed may depend on the solution, $c(u)$, the initial condition determines the characteristic slopes, leading to two cases. Let's consider Burgers' equation $$ u_t + u u_x = 0 $$ with the same setup as the linear PDE above. If we for instance have the initial condition $$ u(x,0) = \left \{ \begin{aligned} 4 & & x<0 \\ 3 & & x>0 \end{aligned} \right . $$ then the characteristic curves will be $$ x(t) = \left \{ \begin{aligned} 4 t + x_0 & & x_0<0 \\ 3 t + x_0 & & x_0>0 \end{aligned} \right . $$ which leads to collision of the characteristic curves and therefore a shock.
We can find the equation for the location of the shock by using the Rankine-Hugoniot condition $$ \frac{\text{d} x_s}{\text{d} t} = \frac{f(u_+) - f(u_-)}{u_+ - u_-} $$ where $x_s(t)$ is the location of the shock, $f$ is the flux function defined from $u_t + f(u)_x = 0$, and $u_-$ and $u_+$ denote the values of the solution immediately to the left and right of the shock respectively.
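For the Burgers' equation example above the flux function is $f(u) = u^2 / 2$, so the shock travels at the average of the two states: $$ \frac{\text{d} x_s}{\text{d} t} = \frac{\frac{1}{2} u_+^2 - \frac{1}{2} u_-^2}{u_+ - u_-} = \frac{u_+ + u_-}{2} = \frac{3 + 4}{2} = \frac{7}{2}, $$ so that the shock starting at the origin follows $x_s(t) = \frac{7}{2} t$.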
If instead we had the initial condition $$ u(x,0) = \left \{ \begin{aligned} 3 & & x<0 \\ 4 & & x>0 \end{aligned} \right . $$ then the characteristic curves would be $$ x(t) = \left \{ \begin{aligned} 3 t + x_0 & & x_0<0 \\ 4 t + x_0 & & x_0>0 \end{aligned} \right . $$ leaving a wedge of the $x$-$t$ plane without any characteristics. Instead we assume a smoothly varying solution, called a rarefaction, to fill in the missing solution.
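For this example the rarefaction can be written down explicitly: inside the wedge $3t < x < 4t$ the self-similar solution $u = x/t$ satisfies Burgers' equation and connects continuously to the two constant states, giving $$ u(x,t) = \left \{ \begin{aligned} 3 & & x \leq 3t \\ x / t & & 3t < x < 4t \\ 4 & & x \geq 4t. \end{aligned} \right . $$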