This is a Jupyter notebook using Python. You can install Jupyter locally to edit and interact with this notebook.
When solving elliptic boundary value problems, as well as when using implicit methods for transient PDEs, we need to solve algebraic systems of equations. We will write such systems as $$ F(u) = 0 $$ where $u$ is a vector of state variables and $F(u)$ is a vector of residuals of the same length. We will primarily be interested in defect correction methods of the form \begin{gather} A \delta u = - F(u) \\ u \gets u + \gamma \delta u \end{gather} where $A$ is a matrix and $\gamma$ is a scalar parameter, often found using a line search.
The Jacobian of $F$ is $$ J(u) = \frac{\partial F}{\partial u}(u) = \begin{bmatrix} \frac{\partial F_0}{\partial u_0} & \frac{\partial F_0}{\partial u_1} & \dotsb \\ \frac{\partial F_1}{\partial u_0} & \frac{\partial F_1}{\partial u_1} & \\ \vdots & & \ddots \end{bmatrix}(u) . $$ The method can be derived by taking the Taylor expansion of $F(u)$ at $u$, $$ F(u + \delta u) = F(u) + \frac{\partial F}{\partial u}(u) (\delta u) + \frac{\partial^2 F}{\partial u^2}(u) (\delta u \otimes \delta u) / 2 + \dotsb $$ Note that each successive term involves a higher-rank tensor, making it computationally unwieldy. If we truncate the series after the linear term and set the result equal to zero, we have a linear equation for $\delta u$, $$ \frac{\partial F}{\partial u}(u) \delta u = - F(u) , $$ which will hopefully make $F(u + \delta u) \approx 0$. This is Newton's method.
In [28]:
%matplotlib inline
import numpy
from matplotlib import pyplot
pyplot.style.use('ggplot')
def fsolve_newton(F, J, u0, rtol=1e-10, maxit=50, verbose=False):
    """Solve F(u) = 0 by Newton's method using the analytic Jacobian J(u)."""
    u = u0.copy()
    Fu = F(u)
    norm0 = numpy.linalg.norm(Fu)
    # Track the error norm against the known solution (1,1) of the test problem below
    enorm_last = numpy.linalg.norm(u - numpy.array([1,1]))
    for i in range(maxit):
        du = -numpy.linalg.solve(J(u), Fu)  # Newton step: J(u) du = -F(u)
        u += du
        Fu = F(u)
        norm = numpy.linalg.norm(Fu)
        if verbose:
            enorm = numpy.linalg.norm(u - numpy.array([1,1]))
            # eratio approaching a constant indicates quadratic convergence
            print('Newton {:d} anorm {:6.2e} rnorm {:6.2e} eratio {:6.2f}'.
                  format(i+1, norm, norm/norm0, enorm/enorm_last**2))
            enorm_last = enorm
        if norm < rtol * norm0:
            break
    return u, i
def rostest(a,b):
    """Return the gradient F and Hessian J of the Rosenbrock-type function
    f(x,y) = (a-x)^2 + b*(y-x^2)^2, so that F(u) = 0 at the minimizer (a, a^2)."""
    def F(u):
        x = u[0]; y = u[1]
        return numpy.array([-2*(a-x) + 4*b*x**3 - 4*b*x*y,
                            2*b*(y-x**2)])
    def J(u):
        x = u[0]; y = u[1]
        return numpy.array([[2 + 12*b*x**2 - 4*b*y, -4*b*x],
                            [-4*b*x, 2*b]])
    return F, J

F, J = rostest(1,3)
fsolve_newton(F, J, numpy.array([0, 1.]), verbose=True)
Out[28]:
It can be error-prone and complicated to implement the Jacobian function J(u). In such cases, we can use the approximation $$ J(u) v \approx \frac{F(u + \epsilon v) - F(u)}{\epsilon} $$ where $\epsilon$ is some "small" number. With this approach we cannot access individual entries of $J$, but we can apply its action to an arbitrary vector $v$.
We know that this approximation is first order accurate in $\epsilon$, $$ \left\lVert J(u) v - \frac{F(u+\epsilon v) - F(u)}{\epsilon} \right\rVert \in O(\epsilon) . $$ But if $\epsilon$ is too small, we will lose accuracy due to rounding error. If $F$ has been scaled such that its norm is of order 1, then $\epsilon = \sqrt{\epsilon_{\text{machine}}}$ is a good default choice.
In [31]:
import scipy.sparse.linalg as splinalg

def fsolve_newtonkrylov(F, u0, epsilon=1e-8, rtol=1e-10, maxit=50, verbose=False):
    """Solve F(u) = 0 by Newton's method, using GMRES with a matrix-free
    finite-difference approximation of the Jacobian action."""
    u = u0.copy()
    Fu = F(u)
    norm0 = numpy.linalg.norm(Fu)
    for i in range(maxit):
        def Ju_fd(v):
            # Approximate J(u) v by a forward difference of the residual
            return (F(u + epsilon*v) - Fu) / epsilon
        Ju = splinalg.LinearOperator((len(Fu),len(u)), matvec=Ju_fd)
        du, info = splinalg.gmres(Ju, Fu)
        if info != 0:
            raise RuntimeError('GMRES failed to converge: {:d}'.format(info))
        u -= du  # Ju du = F(u), so the Newton update is u <- u - du
        Fu = F(u)
        norm = numpy.linalg.norm(Fu)
        if verbose:
            print('Newton {:d} anorm {:6.2e} rnorm {:6.2e}'.format(i+1, norm, norm/norm0))
        if norm < rtol * norm0:
            break
    return u, i

fsolve_newtonkrylov(F, numpy.array([0.,1]), rtol=1e-6, verbose=True)
Out[31]:
Many matrices in applications, particularly in the study of physical systems and graphs/networks, have entries that are mostly equal to zero. We can store such systems more efficiently by storing only the nonzero elements. We will discuss storage and optimized implementations later. Many of the methods for sparse systems apply more generally to any matrix $A$ whose action on a vector ($y \gets A x$) can be computed in significantly less than $O(n^2)$ operations, or that is "well-conditioned" such that an iterative method converges in significantly fewer than $n$ iterations.
PETSc, the Portable, Extensible Toolkit for Scientific Computation, is an open source software package for the parallel solution of algebraic and differential-algebraic equations. This includes linear algebra, for which PETSc offers a broad variety of implementations. For general information about PETSc, I refer to this primer.
NumPy does not provide sparse matrix support so we will use SciPy in this course.
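For example, here is a minimal sketch (the 1D Laplacian test matrix and its size are arbitrary choices for illustration) of assembling a sparse matrix with SciPy and performing a sparse direct solve:

import scipy.sparse as sp
import scipy.sparse.linalg as splinalg

n = 100
# 1D Laplacian: only the ~3n nonzeros are stored, rather than n^2 entries
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')
b = numpy.ones(n)
x = splinalg.spsolve(A, b)                   # sparse direct solve
print(A.nnz, numpy.linalg.norm(A @ x - b))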
The complexity of this solve is potentially dominant, so we should understand its cost in terms of the problem size. The standard method for a direct solve is $LU$ (or Cholesky) factorization. Given a $2\times 2$ block matrix, the algorithm proceeds as \begin{split} \begin{bmatrix} A & B \\ C & D \end{bmatrix} &= \begin{bmatrix} L_A & \\ C U_A^{-1} & 1 \end{bmatrix} \begin{bmatrix} U_A & L_A^{-1} B \\ & S \end{bmatrix} \end{split} where $L_A U_A = A$ and $S = D - C A^{-1} B$.
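A quick numerical check of this factorization, written as a sketch with small random blocks (multiplying the $L_A$ and $U_A$ factors back together gives the equivalent form with identity diagonal blocks in the left factor):

rng = numpy.random.default_rng(0)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))
A += 3 * numpy.eye(3)                        # keep the A block comfortably invertible
S = D - C @ numpy.linalg.solve(A, B)         # Schur complement S = D - C A^{-1} B
M = numpy.block([[A, B], [C, D]])
# Block factorization with L_A U_A recombined into A:
# M = [[I, 0], [C A^{-1}, I]] @ [[A, B], [0, S]]
lower = numpy.block([[numpy.eye(3), numpy.zeros((3, 3))],
                     [C @ numpy.linalg.inv(A), numpy.eye(3)]])
upper = numpy.block([[A, B], [numpy.zeros((3, 3)), S]])
print(numpy.allclose(M, lower @ upper))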
For a sparse operator, the complexity depends on the ordering of degrees of freedom.
For a structured grid, the "natural" ordering is the ordering of the original operator.
A sparse direct solve of this system will result in fill up to the bandwidth.
These plots can be produced in PETSc using -mat_view draw and -pc_type lu -pc_factor_mat_ordering_type natural -mat_factor_view draw (e.g., with -draw_pause 2 to wait 2 seconds for the viewer).
The Reverse Cuthill-McKee (rcm) ordering applies a breadth-first search to produce a low-bandwidth ordering. The nested dissection (nd) ordering recursively bisects the domain.
The asymptotic costs are different for these approaches. For a 2D problem with $N$ degrees of freedom, banded LU with the natural ordering costs $O(N^2)$ flops and $O(N^{3/2})$ storage, while nested dissection reduces this to $O(N^{3/2})$ flops and $O(N \log N)$ storage; in 3D, nested dissection costs $O(N^2)$ flops and $O(N^{4/3})$ storage.
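For example, SciPy exposes an RCM ordering through scipy.sparse.csgraph; the sketch below (the 2D Laplacian test matrix and the helper bandwidth are arbitrary choices) scrambles the natural ordering and shows that RCM recovers a low-bandwidth permutation:

import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    # Maximum distance of a nonzero entry from the diagonal
    coo = A.tocoo()
    return numpy.abs(coo.row - coo.col).max()

n = 10
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A2d = sp.kronsum(T, T, format='csr')          # 2D Laplacian on an n x n grid
rng = numpy.random.default_rng(0)
p = rng.permutation(n*n)                      # scramble the natural ordering
B = A2d[p][:, p].tocsr()
perm = reverse_cuthill_mckee(B, symmetric_mode=True)
print(bandwidth(B), bandwidth(B[perm][:, perm]))   # RCM restores a low-bandwidth ordering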
The simplest iterative method is Richardson's method, which solves $A x = b$ by the iteration $$ x_{k+1} = x_k + \omega (b - A x_k) $$ where $\omega > 0$ is a damping parameter and $x_0$ is an initial guess (possibly the zero vector). If $b = A x_*$, this iteration is equivalent to $$ x_{k+1} - x_* = (x_k - x_*) - \omega A (x_k - x_*) = (I - \omega A) (x_k - x_*) .$$ It is convenient for convergence analysis to identify the "error" $e_k = x_k - x_*$, in which this becomes $$ e_{k+1} = (I - \omega A) e_k $$ or $$ e_k = (I - \omega A)^k e_0 $$ in terms of the initial error. Evidently powers of the iteration matrix $I - \omega A$ tell the whole story. Suppose that the eigendecomposition $$ X \Lambda X^{-1} = I - \omega A $$ exists. Then $$ (I - \omega A)^k = (X \Lambda X^{-1})^k = X \Lambda^k X^{-1} $$ and the convergence (or divergence) rate depends only on the largest magnitude eigenvalue. This analysis isn't great for two reasons:

1. The eigendecomposition may not exist (the iteration matrix need not be diagonalizable).
2. Even when it exists, the matrix of eigenvectors $X$ may be very ill-conditioned, in which case the eigenvalues alone do not describe the transient behavior of $\lVert (I - \omega A)^k \rVert$.
We can repair these weaknesses by using the Schur decomposition $$ Q R Q^h = I - \omega A $$ where $R$ is right-triangular and $Q$ is unitary (i.e., orthogonal if real-valued; $Q^h$ is the Hermitian conjugate of $Q$). The Schur decomposition always exists and $Q$ has a condition number of 1.
Evidently we must find $\omega$ to minimize the largest-magnitude eigenvalue (the spectral radius) of $I - \omega A$. We can do this if $A$ is well conditioned, but not in general.
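Here is a minimal sketch of Richardson's method on a small SPD test matrix, using the extreme eigenvalues (assumed known here) to pick $\omega = 2/(\lambda_{\min} + \lambda_{\max})$, which minimizes the spectral radius of $I - \omega A$ in the SPD case:

n = 20
A = numpy.diag(2*numpy.ones(n)) - numpy.diag(numpy.ones(n-1), 1) - numpy.diag(numpy.ones(n-1), -1)
b = numpy.ones(n)
lam = numpy.linalg.eigvalsh(A)               # eigenvalues in ascending order
omega = 2 / (lam[0] + lam[-1])               # minimizes max |1 - omega*lambda| for SPD A
x = numpy.zeros(n)
for k in range(200):
    x += omega * (b - A @ x)                 # Richardson iteration
print(numpy.linalg.norm(A @ x - b))          # residual after 200 iterations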
Preconditioning is the act of creating an "affordable" operation "$P^{-1}$" such that $P^{-1} A$ (or $A P^{-1}$) is well-conditioned or otherwise has a "nice" spectrum. We then solve the system
$$ P^{-1} A x = P^{-1} b \quad \text{or}\quad A P^{-1} \underbrace{(P x)}_y = b $$ in which case the convergence rate depends on the spectrum of the iteration matrix $$ I - \omega P^{-1} A . $$
There are two complementary techniques necessary for efficient iterative methods:

1. Preconditioning, which transforms the system so that the iteration matrix has a "nice" spectrum.
2. Krylov acceleration, which uses the history of the iteration to construct better corrections than a stationary method like Richardson.
Although there is ongoing research in Krylov methods and they are immensely useful, I would say preconditioning is 90% of the game for practical applications, particularly as a research area.
All matrix iterations work with approximations in a Krylov subspace, which has the form
$$ K_n = \big[ b \big| Ab \big| A^2 b \big| \dotsm \big| A^{n-1} b \big] . $$ This matrix is horribly ill-conditioned and cannot stably be computed as written. Instead, we seek an orthogonal basis $Q_n$ that spans the same space as $K_n$. We could write this as a factorization
$$ K_n = Q_n R_n $$ where the first column is $q_0 = b / \lVert b \rVert$. The $R_n$ is unnecessary and hopelessly ill-conditioned, so a slightly different procedure is used.
The Arnoldi iteration applies orthogonal similarity transformations to reduce $A$ to Hessenberg form, starting from the normalized vector $q_0 = b / \lVert b \rVert$,
$$ A = Q H Q^h . $$ Let's multiply on the right by $Q$ and examine the first $n$ columns,
$$ A Q_n = Q_{n+1} H_n $$ where $H_n$ is an $(n+1) \times n$ Hessenberg matrix.
This representation is well-conditioned because $Q$ is orthogonal and
$$ \lVert H_n \rVert \le \lVert Q_{n+1}^h \rVert \lVert A \rVert \lVert Q_n \rVert \le \lVert A \rVert $$.
For a lower bound, we have
$$ \sigma_{\min}(A)^2 \le x^h A^h A x $$ for all $x$ of norm 1. It must also be true for any $x = Q_n y$ where $\lVert y\rVert = 1$, thus
$$ \sigma_{\min}(A)^2 \le y^h Q_n^h A^h A Q_n y = y^h H_n^h Q_{n+1}^h Q_{n+1} H_n y = y^h H_n^h H_n y . $$

GMRES (Generalized Minimum Residual) minimizes $$ \lVert A x - b \rVert $$ over the subspace $Q_n$. I.e., $x = Q_n y$ for some $y$. By the recurrence above, this is equivalent to $$ \lVert Q_{n+1} H_n y - b \rVert $$ which can be solved by minimizing $$ \lVert H_n y - Q_{n+1}^h b \rVert . $$ Since $q_0 = b/\lVert b \rVert$, the least squares problem is to minimize $$ \Big\lVert H_n y - \lVert b \rVert e_0 \Big\rVert . $$ The solution of this least squares problem is achieved by incrementally updating a $QR$ factorization of $H_n$.
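The sketch below implements the Arnoldi recurrence and the resulting GMRES least squares problem directly (the helper names arnoldi and gmres_lsq and the test matrix are illustrative only; a production code would update the $QR$ factorization of $H_n$ incrementally rather than calling a dense least squares solve):

def arnoldi(A, b, n):
    """Build Q_{n+1} and the (n+1) x n Hessenberg H_n with A Q_n = Q_{n+1} H_n."""
    Q = numpy.zeros((len(b), n+1))
    H = numpy.zeros((n+1, n))
    Q[:, 0] = b / numpy.linalg.norm(b)
    for j in range(n):
        v = A @ Q[:, j]
        for i in range(j+1):
            H[i, j] = Q[:, i] @ v        # modified Gram-Schmidt orthogonalization
            v = v - H[i, j] * Q[:, i]
        H[j+1, j] = numpy.linalg.norm(v)
        Q[:, j+1] = v / H[j+1, j]        # assumes no breakdown (H[j+1, j] != 0)
    return Q, H

def gmres_lsq(A, b, n):
    """Minimize ||H_n y - ||b|| e_0|| and return x = Q_n y."""
    Q, H = arnoldi(A, b, n)
    rhs = numpy.zeros(n+1)
    rhs[0] = numpy.linalg.norm(b)
    y = numpy.linalg.lstsq(H, rhs, rcond=None)[0]
    return Q[:, :n] @ y

A = numpy.diag(2.*numpy.ones(30)) - numpy.diag(numpy.ones(29), 1) - numpy.diag(numpy.ones(29), -1)
b = numpy.ones(30)
n = 10
Q, H = arnoldi(A, b, n)
print(numpy.allclose(A @ Q[:, :n], Q @ H))   # verify A Q_n = Q_{n+1} H_n
x = gmres_lsq(A, b, n)
print(numpy.linalg.norm(A @ x - b))          # GMRES residual after n iterations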
Notes

- The GMRES restart length (how many basis vectors are retained before the method restarts) can be set with -ksp_gmres_restart.
- -ksp_gmres_modifiedgramschmidt can be used when you suspect that classical Gram-Schmidt may be causing instability.

If $A$ is symmetric, then $H = Q^T A Q$ is also symmetric. Since $H$ is Hessenberg, this means $H$ is tridiagonal. Instead of storing $Q_n$, it is sufficient to store only the last two columns since the iteration satisfies a 3-term recurrence. The analog of GMRES for the symmetric case is called MINRES and is also useful for solving symmetric indefinite problems.
Instead of minimizing the $A^T A$ norm of the error, the Conjugate Gradient method minimizes the $A$ norm of the error. For $A$ to induce a norm, it must be symmetric positive definite. Jonathan Shewchuk's guide to CG, "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain", is an excellent resource.
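For example, a minimal use of SciPy's CG on an SPD test matrix (the matrix is an arbitrary choice for illustration):

import scipy.sparse as sp
import scipy.sparse.linalg as splinalg

n = 50
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')   # SPD tridiagonal test matrix
b = numpy.ones(n)
x, info = splinalg.cg(A, b)                  # info == 0 indicates convergence
print(info, numpy.linalg.norm(A @ x - b))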
We have discussed the Jacobi preconditioner $$ P_{\text{Jacobi}}^{-1} = D^{-1} $$ where $D$ is the diagonal of $A$. Gauss-Seidel is $$ P_{GS}^{-1} = (L+D)^{-1} $$ where $L$ is the (strictly) lower triangular part of $A$. The upper triangular part may be used instead, or a symmetric form obtained by a forward sweep followed by a backward sweep, $$ P_{SGS}^{-1} = (L+D)^{-1} + (D+U)^{-1} \Big( I - A (L+D)^{-1} \Big) . $$
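A sketch comparing these preconditioners inside the Richardson iteration $x \gets x + P^{-1}(b - A x)$ (the test matrix and iteration count are arbitrary choices):

import scipy.linalg as la

n = 20
A = numpy.diag(2*numpy.ones(n)) - numpy.diag(numpy.ones(n-1), 1) - numpy.diag(numpy.ones(n-1), -1)
b = numpy.ones(n)
LD = numpy.tril(A)                      # L + D (lower triangle including the diagonal)

def richardson(apply_Pinv, niter=200):
    # Preconditioned Richardson: x <- x + P^{-1} (b - A x)
    x = numpy.zeros(n)
    for _ in range(niter):
        x += apply_Pinv(b - A @ x)
    return numpy.linalg.norm(A @ x - b)

print('Jacobi      ', richardson(lambda r: r / numpy.diag(A)))
print('Gauss-Seidel', richardson(lambda r: la.solve_triangular(LD, r, lower=True)))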
Given a linear operator $A : V \to V$, suppose we have a collection of prolongation operators $P_i : V_i \to V$. The columns of $P_i$ are "basis functions" for the subspace $V_i$. The Galerkin operator $A_i = P_i^T A P_i$ is the action of the original operator $A$ in the subspace.
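As a small illustration (a sketch; the piecewise-constant aggregation prolongation is just one simple choice of $P_i$):

n, nc = 16, 4
A = numpy.diag(2*numpy.ones(n)) - numpy.diag(numpy.ones(n-1), 1) - numpy.diag(numpy.ones(n-1), -1)
# Prolongation by aggregation: each coarse "basis function" is constant on a block of 4 fine points
P = numpy.zeros((n, nc))
for j in range(nc):
    P[4*j:4*(j+1), j] = 1
Ac = P.T @ A @ P                             # Galerkin operator in the subspace
print(Ac)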
Define the subspace projection
$$ S_i = P_i A_i^{-1} P_i^T A . $$ These projections may be applied additively
$$ I - \sum_{i=0}^n S_i, $$ multiplicatively
$$ \prod_{i=0}^n (I - S_i), $$ or in some hybrid manner, such as
$$ (I - S_0) \Big( I - \sum_{i=1}^n S_i \Big) . $$ In each case above, the action is expressed in terms of the error iteration operator.
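A numerical sketch with two overlapping coordinate subspaces of a small SPD matrix, comparing the spectral radii of the additive and multiplicative error iteration operators (the additive combination typically needs damping or Krylov acceleration, so its spectral radius may reach 1):

n = 8
A = numpy.diag(2*numpy.ones(n)) - numpy.diag(numpy.ones(n-1), 1) - numpy.diag(numpy.ones(n-1), -1)
# Two overlapping subspaces spanned by subsets of the coordinate basis
P0 = numpy.eye(n)[:, :5]
P1 = numpy.eye(n)[:, 3:]

def projection(P):
    Ai = P.T @ A @ P
    return P @ numpy.linalg.solve(Ai, P.T @ A)     # S_i = P_i A_i^{-1} P_i^T A

S0, S1 = projection(P0), projection(P1)
I = numpy.eye(n)
additive = I - (S0 + S1)
multiplicative = (I - S1) @ (I - S0)
print(max(abs(numpy.linalg.eigvals(additive))),
      max(abs(numpy.linalg.eigvals(multiplicative))))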
The formal convergence is beyond the scope of this course, but the following estimates are useful. We let $h$ be the element diameter, $H$ be the subdomain diameter, and $\delta$ be the overlap, each normalized such that the global domain diameter is 1. We express the convergence in terms of the condition number $\kappa$ for the preconditioned operator.
In [ ]: