This is a Jupyter notebook using Python. You can install Jupyter locally to edit and interact with this notebook.
Many matrices in applications, particularly the study of physical systems and graphs/networks, have entries that are mostly equal to zero. We can more efficiently store such systems by storing only the nonzero elements. We will discuss storage and optimized implementations later. Many of the methods for sparse systems apply to solving systems with matrices $A$ that can be applied to a vector ($y \gets A x$) in significantly less than $O(n^2)$ complexity, or that are "well-conditioned" such that an iterative method converges in significantly less than $n$ iterations.
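As a concrete illustration (a minimal sketch using SciPy rather than PETSc, with an illustrative tridiagonal matrix), compressed sparse row (CSR) storage keeps only the nonzero entries, and applying $A$ to a vector costs $O(\mathrm{nnz})$ rather than $O(n^2)$:

```python
# Minimal sketch (SciPy, not PETSc): CSR storage of a tridiagonal matrix.
# The matrix and sizes here are illustrative choices.
import numpy as np
import scipy.sparse as sp

n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")

dense_bytes = n * n * 8  # a dense float64 matrix stores all n^2 entries
csr_bytes = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes
print(f"dense: {dense_bytes} bytes, CSR: {csr_bytes} bytes, nnz: {A.nnz}")

x = np.ones(n)
y = A @ x  # sparse matrix-vector product: O(nnz) work, not O(n^2)
```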
PETSc, the Portable Extensible Toolkit for Scientific computing, is an open source software package for the parallel solution of algebraic and differential-algebraic equations. This includes linear algebra, for which PETSc offers a broad variety of implementations. For general information about PETSc, I refer to this primer.
The simplest iterative method is Richardson's method, which solves $A x = b$ by the iteration $$ x_{k+1} = x_k + \omega (b - A x_k) $$ where $\omega > 0$ is a damping parameter and $x_0$ is an initial guess (possibly the zero vector). If $b = A x_*$, this iteration is equivalent to $$ x_{k+1} - x_* = (x_k - x_*) - \omega A (x_k - x_*) = (I - \omega A) (x_k - x_*) .$$ It is convenient for convergence analysis to identify the "error" $e_k = x_k - x_*$, in terms of which this becomes $$ e_{k+1} = (I - \omega A) e_k $$ or $$ e_k = (I - \omega A)^k e_0 $$ in terms of the initial error. Evidently powers of the iteration matrix $I - \omega A$ tell the whole story. Suppose that the eigendecomposition $$ X \Lambda X^{-1} = I - \omega A $$ exists. Then $$ (I - \omega A)^k = (X \Lambda X^{-1})^k = X \Lambda^k X^{-1} $$ and the convergence (or divergence) rate depends only on the largest magnitude eigenvalue. This analysis isn't great for two reasons:
* Not all matrices are diagonalizable, so the eigendecomposition may not exist.
* The eigenvector matrix $X$ may be very ill-conditioned, in which case the eigenvalues alone can badly misrepresent the behavior for finite $k$.
We can repair these weaknesses by using the Schur decomposition $$ Q R Q^h = I - \omega A $$ where $R$ is right-triangular and $Q$ is unitary (i.e., orthogonal if real-valued; $Q^h$ is the Hermitian conjugate of $Q$). The Schur decomposition always exists and $Q$ has a condition number of 1.
Evidently we must choose $\omega$ to minimize the largest magnitude eigenvalue of $I - \omega A$. We can do this if $A$ is well conditioned, but not in general.
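Below is a minimal NumPy sketch of Richardson's method on an illustrative SPD tridiagonal matrix, with $\omega = 1/\lambda_{\max}$ computed from the spectrum (affordable only for a toy problem). The matrix, tolerance, and iteration cap are arbitrary choices.

```python
# Sketch of Richardson iteration x_{k+1} = x_k + omega*(b - A x_k).
# Matrix, omega, and tolerance are illustrative choices.
import numpy as np

n = 10
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # SPD model problem
x_star = np.random.default_rng(0).standard_normal(n)
b = A @ x_star

omega = 1.0 / np.linalg.eigvalsh(A).max()  # converges for 0 < omega < 2/lambda_max
x = np.zeros(n)
for k in range(10000):
    r = b - A @ x
    if np.linalg.norm(r) <= 1e-8 * np.linalg.norm(b):
        break
    x += omega * r
print(f"{k} iterations, error {np.linalg.norm(x - x_star):.2e}")
```

The iteration count grows with the condition number of $A$, which motivates preconditioning.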
Preconditioning is the act of creating an "affordable" operation "$P^{-1}$" such that $P^{-1} A$ (or $A P^{-1}$) is well-conditioned or otherwise has a "nice" spectrum. We then solve the system
$$ P^{-1} A x = P^{-1} b \quad \text{or}\quad A P^{-1} \underbrace{(P x)}_y = b $$in which case the convergence rate depends on the spectrum of the iteration matrix $$ I - \omega P^{-1} A . $$
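To illustrate, here is a sketch of left-preconditioned Richardson with a diagonal (Jacobi) choice $P = D$; the badly scaled test matrix, damping parameters, and iteration cap are illustrative assumptions, not from the text.

```python
# Sketch of left-preconditioned Richardson: x += omega * Pinv(b - A x).
# The badly scaled test matrix and parameters are illustrative choices.
import numpy as np

n = 200
d = np.logspace(0, 4, n)                              # diagonal spanning 4 orders of magnitude
A = np.diag(d) + 0.1 * np.eye(n, k=1) + 0.1 * np.eye(n, k=-1)
b = A @ np.ones(n)

def richardson(apply_Pinv, omega, maxit=10000, tol=1e-8):
    x = np.zeros(n)
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return k
        x += omega * apply_Pinv(r)
    return maxit  # did not converge within maxit

print(richardson(lambda r: r, omega=1.0 / d.max()))   # unpreconditioned: hits the iteration cap
print(richardson(lambda r: r / d, omega=1.0))         # Jacobi-preconditioned: converges rapidly
```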
There are two complementary techniques necessary for efficient iterative methods:
* Krylov subspace methods (such as GMRES and CG), which accelerate a basic iteration by using the history of previous steps, and
* preconditioners, which transform the system so that the iteration converges quickly.
Although there is ongoing research in Krylov methods and they are immensely useful, I would say preconditioning is 90% of the game for practical applications, particularly as a research area.
All matrix iterations work with approximations in a Krylov subspace, which has the form
$$ K_n = \big[ b \big| Ab \big| A^2 b \big| \dotsm \big| A^{n-1} b \big] . $$This matrix is horribly ill-conditioned and cannot stably be computed as written. Instead, we seek an orthogonal basis $Q_n$ that spans the same space as $K_n$. We could write this as a factorization
$$ K_n = Q_n R_n $$where the first column $q_0 = b / \lVert b \rVert$. The $R_n$ is unnecessary and hopelessly ill-conditioned, so a slightly different procedure is used.
The Arnoldi iteration applies orthogonal similarity transformations to reduce $A$ to Hessenberg form, starting from the normalized vector $q_0 = b/\lVert b \rVert$,
$$ A = Q H Q^h . $$Let's multiply on the right by $Q$ and examine the first $n$ columns,
$$ A Q_n = Q_{n+1} H_n $$where $H_n$ is an $(n+1) \times n$ Hessenberg matrix.
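A minimal NumPy sketch of the Arnoldi iteration (modified Gram-Schmidt, no handling of breakdown; the random test matrix is an arbitrary choice) makes the recurrence concrete:

```python
# Sketch of the Arnoldi iteration: build an orthonormal basis Q of the Krylov space
# and the (n+1) x n Hessenberg matrix H satisfying A @ Q[:, :n] == Q @ H.
import numpy as np

def arnoldi(A, b, n):
    m = len(b)
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(n):
        v = A @ Q[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt against previous columns
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]     # assumes no breakdown (H[j+1, j] != 0)
    return Q, H

# Quick check of the recurrence A Q_n = Q_{n+1} H_n on a random matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30))
b = rng.standard_normal(30)
Q, H = arnoldi(A, b, 10)
print(np.linalg.norm(A @ Q[:, :10] - Q @ H))   # ~ machine precision
```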
This representation is well-conditioned because $Q$ is orthogonal and
$$ \lVert H_n \rVert \le \lVert Q_{n+1}^h \rVert \lVert A \rVert \lVert Q_n \rVert \le \lVert A \rVert $$.
For a lower bound, we have
$$ \sigma_{\min}(A)^2 \le x^h A^h A x $$for all $x$ of norm 1. It must also be true for any $x = Q_n y$ where $\lVert y\rVert = 1$, thus
$$ \sigma_{\min}(A)^2 \le y^h Q_n^h A^h A Q_n y = y^h H_n^h Q_{n+1}^h Q_{n+1} H_n y = y^h H_n^h H_n y . $$GMRES (Generalized Minimum Residual) minimizes $$ \lVert A x - b \rVert $$ over the subspace spanned by $Q_n$, i.e., $x = Q_n y$ for some $y$. By the recurrence above, this is equivalent to minimizing $$ \lVert Q_{n+1} H_n y - b \rVert , $$ which can be solved by minimizing $$ \lVert H_n y - Q_{n+1}^h b \rVert . $$ Since $q_0 = b/\lVert b \rVert$, the least squares problem is to minimize $$ \Big\lVert H_n y - \lVert b \rVert e_0 \Big\rVert . $$ The solution of this least squares problem is obtained by incrementally updating a $QR$ factorization of $H_n$.
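The sketch below assembles those pieces: it runs Arnoldi to a fixed dimension and then solves the small least squares problem with `numpy.linalg.lstsq` instead of the incremental $QR$ update that a production GMRES (e.g., `scipy.sparse.linalg.gmres` or PETSc's GMRES) would use. The test matrix and dimensions are illustrative.

```python
# Sketch of GMRES: Arnoldi to dimension n, then minimize || H_n y - ||b|| e_0 ||.
# A production implementation updates a QR factorization of H_n incrementally.
import numpy as np

def gmres_sketch(A, b, n):
    m = len(b)
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(n):                        # Arnoldi with modified Gram-Schmidt
        v = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    rhs = np.zeros(n + 1)
    rhs[0] = np.linalg.norm(b)                # the vector ||b|| e_0
    y, *_ = np.linalg.lstsq(H, rhs, rcond=None)
    return Q[:, :n] @ y

rng = np.random.default_rng(0)
A = np.eye(40) + 0.05 * rng.standard_normal((40, 40))  # eigenvalues clustered near 1
b = rng.standard_normal(40)
x = gmres_sketch(A, b, 20)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # small relative residual
```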
Notes

* `-ksp_gmres_restart` controls the number of Krylov vectors retained before GMRES restarts; restarting bounds memory and orthogonalization cost at the expense of convergence.
* `-ksp_gmres_modifiedgramschmidt` can be used when you suspect that classical Gram-Schmidt may be causing instability.

If $A$ is symmetric, then $H = Q^T A Q$ is also symmetric. Since $H$ is Hessenberg, this means $H$ is tridiagonal. Instead of storing $Q_n$, it is sufficient to store only the last two columns, since the iteration satisfies a 3-term recurrence. The analog of GMRES for the symmetric case is called MINRES and is also useful for solving symmetric indefinite problems.
Instead of minimizing the $A^T A$ norm of the error, the Conjugate Gradient method minimizes the $A$ norm of the error. For $A$ to induce a norm, it must be symmetric positive definite. Jonathan Shewchuk's guide to CG is an excellent resource.
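Here is a minimal sketch of CG on an illustrative SPD model problem (size and tolerance are arbitrary choices; `scipy.sparse.linalg.cg` provides a production version):

```python
# Sketch of the Conjugate Gradient method for symmetric positive definite A.
import numpy as np

def cg(A, b, tol=1e-10, maxit=1000):
    x = np.zeros_like(b)
    r = b - A @ x                     # residual
    p = r.copy()                      # search direction
    rr = r @ r
    for k in range(maxit):
        Ap = A @ p
        alpha = rr / (p @ Ap)         # step length minimizing the A-norm of the error along p
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p     # new direction, A-conjugate to the previous ones
        rr = rr_new
    return x, k + 1

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # SPD model problem (illustrative)
b = np.ones(n)
x, its = cg(A, b)
print(its, np.linalg.norm(A @ x - b))
```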
We have discussed the Jacobi preconditioner $$ P_{\text{Jacobi}}^{-1} = D^{-1} $$ where $D$ is the diagonal of $A$. Gauss-Seidel is $$ P_{GS}^{-1} = (L+D)^{-1} $$ where $L$ is the (strictly) lower triangular part of $A$. The strictly upper triangular part $U$ may be used instead, or the forward and backward sweeps may be combined into symmetric Gauss-Seidel, $$ P_{SGS}^{-1} = (D+U)^{-1} D (L+D)^{-1} , $$ which corresponds to a forward sweep with $(L+D)^{-1}$ followed by a backward sweep with $(D+U)^{-1}$.
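A sketch applying these preconditioners inside a stationary iteration on an illustrative model problem; the Gauss-Seidel action $(L+D)^{-1} r$ is just a forward triangular solve, and on this problem Gauss-Seidel needs roughly half as many iterations as Jacobi.

```python
# Sketch: applying Jacobi and Gauss-Seidel preconditioners inside x += Pinv(b - A x).
# The model problem, tolerance, and iteration cap are illustrative choices.
import numpy as np
from scipy.linalg import solve_triangular

n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
L_plus_D = np.tril(A)  # strictly lower triangle plus diagonal

def jacobi(r):
    return r / np.diag(A)                             # D^{-1} r

def gauss_seidel(r):
    return solve_triangular(L_plus_D, r, lower=True)  # (L+D)^{-1} r by forward substitution

def stationary(apply_Pinv, maxit=20000, tol=1e-8):
    x = np.zeros(n)
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return k
        x += apply_Pinv(r)                            # omega = 1
    return maxit

print(stationary(jacobi), stationary(gauss_seidel))
```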
Given a linear operator $A : V \to V$, suppose we have a collection of prolongation operators $P_i : V_i \to V$. The columns of $P_i$ are "basis functions" for the subspace $V_i$. The Galerkin operator $A_i = P_i^T A P_i$ is the action of the original operator $A$ in the subspace.
Define the subspace projection
$$ S_i = P_i A_i^{-1} P_i^T A . $$These projections may be applied additively
$$ I - \sum_{i=0}^n S_i, $$multiplicatively
$$ \prod_{i=0}^n (I - S_i), $$or in some hybrid manner, such as
$$ (I - S_0) (I - \sum_{i=1}^n S_i) . $$In each case above, the action is expressed in terms of the error iteration operator.
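A small sketch with two overlapping subdomains of a 1D model problem (the Boolean injection prolongations and index sets are illustrative choices) shows how the additive and multiplicative error operators are assembled; note that the undamped additive operator is typically used as a preconditioner for a Krylov method rather than as a stationary iteration.

```python
# Sketch of additive and multiplicative subspace correction on a 1D model problem.
# P0, P1 are Boolean injection prolongations onto two overlapping index sets (illustrative).
import numpy as np

n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def prolongation(indices):
    P = np.zeros((n, len(indices)))
    P[indices, np.arange(len(indices))] = 1.0
    return P

P0 = prolongation(np.arange(0, 25))       # subdomain 1
P1 = prolongation(np.arange(15, 40))      # subdomain 2, overlapping indices 15..24

def S(P):
    Ai = P.T @ A @ P                      # Galerkin operator A_i = P_i^T A P_i
    return P @ np.linalg.solve(Ai, P.T @ A)

I = np.eye(n)
E_add = I - (S(P0) + S(P1))               # additive error operator
E_mult = (I - S(P1)) @ (I - S(P0))        # multiplicative error operator

for name, E in [("additive", E_add), ("multiplicative", E_mult)]:
    print(name, "spectral radius", np.max(np.abs(np.linalg.eigvals(E))))
```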
The formal convergence is beyond the scope of this course, but the following estimates are useful. We let $h$ be the element diameter, $H$ be the subdomain diameter, and $\delta$ be the overlap, each normalized such that the global domain diameter is 1. We express the convergence in terms of the condition number $\kappa$ for the preconditioned operator.
In the above, we have assumed that each of the subdomain problems $A_i = P_i^T A P_i$ is solved exactly. The complexity of this solve is potentially dominant, so we should understand its cost in terms of the problem size. The standard method for a direct solve is $LU$ (or Cholesky) factorization. For a sparse operator, the complexity depends on the ordering of degrees of freedom.
For a structured grid, the "natural" ordering is the ordering of the original operator.
A sparse direct solve of this system will result in fill up to the bandwidth.
These plots can be produced in PETSc using `-mat_view draw` and `-pc_type lu -pc_factor_mat_ordering_type natural -mat_factor_view draw` (e.g., with `-draw_pause 2` to wait 2 seconds for the viewer).
The Reverse Cuthill-McKee (`rcm`) ordering applies a breadth-first search to produce a low-bandwidth ordering.
The nested dissection (`nd`) ordering recursively bisects the domain.
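The effect of ordering can be explored without PETSc as well. The sketch below uses an illustrative 2D 5-point Laplacian; SciPy's `reverse_cuthill_mckee` and SuperLU's column orderings (via `splu`) stand in for PETSc's `rcm` and fill-reducing orderings.

```python
# Sketch: ordering affects bandwidth and the fill in sparse LU factors.
# Uses an illustrative 2D 5-point Laplacian on an m x m grid.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import splu

m = 30
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
A = (sp.kron(sp.eye(m), T) + sp.kron(T, sp.eye(m))).tocsr()   # n = m^2 unknowns

def bandwidth(M):
    i, j = M.nonzero()
    return int(np.max(np.abs(i - j)))

scramble = np.random.default_rng(0).permutation(A.shape[0])   # destroy the natural ordering
A_scrambled = A[scramble, :][:, scramble]
rcm = reverse_cuthill_mckee(A_scrambled, symmetric_mode=True)
A_rcm = A_scrambled[rcm, :][:, rcm]
print("bandwidth: natural", bandwidth(A),
      "scrambled", bandwidth(A_scrambled), "rcm", bandwidth(A_rcm))

for spec in ["NATURAL", "COLAMD"]:                            # COLAMD is a fill-reducing ordering
    lu = splu(A.tocsc(), permc_spec=spec)
    print(spec, "fill nnz(L)+nnz(U):", lu.L.nnz + lu.U.nnz)
```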
The asymptotic costs of these approaches differ: for a 2D grid with $n$ unknowns, a banded ordering (natural or RCM) has bandwidth $O(\sqrt n)$, giving $O(n^{3/2})$ fill and $O(n^2)$ factorization work, while nested dissection yields $O(n \log n)$ fill and $O(n^{3/2})$ work (in 3D, $O(n^{4/3})$ fill and $O(n^2)$ work).