Lecture 12. Domain decomposition

Literature

This lecture is based on the book by Yu. Vassilevski & M. Olshanskii, Краткий курс по многосеточным методам и методам декомпозиции области (A Short Course on Multigrid Methods and Domain Decomposition Methods).

A reminder

Recall the complexity of solving an $N\times N$ linear system arising from the Poisson equation:

  • Sparse LU $\mathcal{O}(N^{3/2})$
  • Direct FFT solver $\mathcal{O}(N \log N)$ (works only in rectangular domains)
  • Iterative methods $\mathcal{O}(Nk)$, where $k$ is the number of iterations
    • $k=\mathcal{O}(N^2)$ - Jacobi or Gauss-Seidel
    • $k = \mathcal{O}(N)$ - Krylov subspace methods (CG, GMRES)
  • Multigrid method $\mathcal{O}(N)$. Optimal complexity, but with a large constant in $\mathcal{O}(\cdot)$

Domain decomposition: motivation

  • The subdomains can be of a simpler shape
  • In 3D the complexity of solving the Poisson equation is $CN^2$. So, if we split the domain into 2 parts, the complexity is $$ C(N/2)^2 + C(N/2)^2 = CN^2/2 < CN^2 $$
  • Some domain decomposition methods can be parallelized, since the equation can be solved on multiple small subdomains simultaneously
  • Some physical problems are described by different PDEs in adjacent domains, which communicate with each other via the interface

Domain decomposition: idea

Given a domain $\Omega$, we represent it as a union of two subdomains $\Omega_1$ and $\Omega_2$ with a common interface $\Gamma$.

Then, the problem

$$ \Delta u = f, \quad u|_{\partial \Omega} = g $$

is equivalent to solving two equations

1) $ \Delta u_1 = f_1, \quad u_1|_{\partial \Omega_1 \setminus \Gamma} = g|_{ \partial \Omega_1 \setminus \Gamma} $

2) $ \Delta u_2 = f_2, \quad u_2|_{\partial \Omega_2 \setminus \Gamma} = g|_{ \partial \Omega_2 \setminus \Gamma} $

with additional constraints

$$u_1 \mid_{\Gamma} = u_2 \mid_\Gamma,$$

and also continuity of the normal derivative (the flux) across the interface $\Gamma$:

$$ \left.\frac{\partial u_1}{\partial n} \right|_{\Gamma} = \left.\frac{\partial u_2}{\partial n} \right|_{\Gamma}.$$

This can be formally written as an equation on the interface with the unknown $\lambda = u \mid_{\Gamma}$: $$ S \lambda = \chi, $$ which is called the Poincare-Steklov equation.

The "simplest" iterative method: differential form

The simplest iterative method for domain decomposition is the so-called Neumann-Dirichlet iterative method.

We successively solve the subproblems: $$-\Delta u^{(k+1)}_1 = f, \quad \mbox{in } \Omega_1, \quad u^{(k+1)}_1 = 0 \quad \mbox{on } \partial \Omega_1 \setminus \Gamma, \quad u^{(k+1)}_1 = \lambda^k \quad \mbox{on } \Gamma.$$

$$ -\Delta u^{(k+1)}_2 = f, \quad \mbox{in } \Omega_2, \quad u^{(k+1)}_2 = 0 \quad \mbox{on } \partial \Omega_2 \setminus \Gamma, \quad \frac{\partial u^{(k+1)}_2}{\partial n_2} = -\frac{\partial u^{(k+1)}_1}{\partial n_1} \quad \mbox{on } \Gamma. $$

And then $$ \lambda^{k+1} = \theta u_2^{(k+1)}\mid_{\Gamma} + (1-\theta) \lambda^k. $$

Algebraic formulation

Domain decomposition methods are best understood when treated algebraically.

In the matrix form, the system has the form

$$ \begin{bmatrix} A_{11} & A_{1 \Gamma} & 0 \\ A_{\Gamma 1} & A_{\Gamma} & A_{\Gamma 2} \\ 0 & A_{2 \Gamma} & A_{22} \end{bmatrix}\begin{bmatrix} u_1 \\ u_{\Gamma} \\ u_2 \end{bmatrix} = \begin{bmatrix} f_1 \\ f_{\Gamma} \\ f_2 \end{bmatrix}. $$

In FEM, the interface matrix $A_{\Gamma}$ is assembled from the contributions of the two subdomains:

$$A_{\Gamma} = A^{(1)}_{\Gamma} + A^{(2)}_{\Gamma}.$$
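To make the block structure concrete, here is a minimal sketch (not from the lecture; the helper name `poisson2d_blocks` is ours) for the standard 5-point finite-difference Laplacian on the unit square. The unknowns are reordered into the interior of $\Omega_1$, the interface $\Gamma$ (a single grid column), and the interior of $\Omega_2$:

```python
import numpy as np
import scipy.sparse as sp

def poisson2d_blocks(n, m):
    """5-point Laplacian on an n x n interior grid, split by the vertical grid
    column i = m into Omega_1 (i < m), Gamma (i = m) and Omega_2 (i > m)."""
    h = 1.0 / (n + 1)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) / h**2
    A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsr()
    x = np.arange(n * n) % n                   # x-index of each unknown (x fastest)
    i1, ig, i2 = np.where(x < m)[0], np.where(x == m)[0], np.where(x > m)[0]
    blocks = {a + b: A[ia][:, ib] for a, ia in zip("1g2", (i1, ig, i2))
                                  for b, ib in zip("1g2", (i1, ig, i2))}
    return A, (i1, ig, i2), blocks

A, (i1, ig, i2), B = poisson2d_blocks(n=15, m=7)

# After reordering, the matrix has exactly the 3x3 block form above; the corner
# blocks are zero: Omega_1 and Omega_2 are coupled only through Gamma.
A_perm = sp.bmat([[B["11"], B["1g"], None],
                  [B["g1"], B["gg"], B["g2"]],
                  [None,    B["2g"], B["22"]]])
print(abs(B["12"]).sum(), abs(B["21"]).sum())   # both are exactly 0
```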

Algebraic Poincare-Steklov equation

If we eliminate $u_1$ and $u_2$ using standard block Gaussian elimination, we obtain the following equation on the interface:

$$\left(A_{\Gamma} - A_{\Gamma 1} A_{11}^{-1} A_{1 \Gamma} - A_{\Gamma 2} A_{22}^{-1} A_{2 \Gamma}\right) u_{\Gamma} = f_{\Gamma} - A_{\Gamma 1} A^{-1}_{11} f_1 - A_{\Gamma 2} A^{-1}_{22} f_2. $$

The Schur complement in the continuous form is the Poincare-Steklov operator.
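Continuing the sketch above (reusing the `poisson2d_blocks` helper), we can form the Schur complement explicitly, since the interface carries only one grid column of unknowns, and check that the reduced equation reproduces the interface part of the global solution:

```python
import numpy as np
import scipy.sparse.linalg as spla

A, (i1, ig, i2), B = poisson2d_blocks(n=15, m=7)
f = np.ones(A.shape[0])
f1, fg, f2 = f[i1], f[ig], f[i2]

lu1 = spla.splu(B["11"].tocsc())               # Dirichlet solver in Omega_1
lu2 = spla.splu(B["22"].tocsc())               # Dirichlet solver in Omega_2
Agg, A1g, Ag1 = B["gg"].toarray(), B["1g"].toarray(), B["g1"].toarray()
A2g, Ag2 = B["2g"].toarray(), B["g2"].toarray()

S   = Agg - Ag1 @ lu1.solve(A1g) - Ag2 @ lu2.solve(A2g)
chi = fg  - Ag1 @ lu1.solve(f1)  - Ag2 @ lu2.solve(f2)

u = spla.spsolve(A.tocsc(), f)                 # reference: solve the full system
print(np.linalg.norm(np.linalg.solve(S, chi) - u[ig]))   # ~1e-12
```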

Rewriting the PS equation

We can write it down as

$$S \lambda = \chi, \quad S = S_1 + S_2, \quad \chi = \chi_1 + \chi_2, \quad S_i = A^{(i)}_{\Gamma} - A_{\Gamma i} A_{ii}^{-1} A_{i \Gamma}, \quad \chi_i = f^{(i)}_{\Gamma} - A_{\Gamma i} A^{-1}_{ii} f_i.$$

The Neumann-Dirichlet iteration is the simple iteration (Richardson method) applied to the Schur complement equation, preconditioned by $S_2$:

$$\lambda^{k+1} = \lambda^k + \theta S^{-1}_2 (- S \lambda^k + \chi).$$

In order to apply $S^{-1}_2$ to a vector $\chi_{\Gamma}$, it is sufficient to solve the linear system

$$ \begin{bmatrix} A^{(2)}_{\Gamma} & A_{\Gamma 2} \\ A_{2 \Gamma} & A_{22} \end{bmatrix} \begin{bmatrix} \eta_{\Gamma} \\ \eta_2 \end{bmatrix} = \begin{bmatrix} \chi_{\Gamma} \\ 0 \end{bmatrix}. $$

And that is a Laplace equation in $\Omega_2$ with a Neumann boundary condition on $\Gamma$!
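A minimal sketch of the resulting Neumann-Dirichlet iteration on the same example (reusing the `poisson2d_blocks` helper). For the symmetric 5-point stencil we assume that both subdomains contribute equally to the interface rows, i.e. we take $A^{(i)}_{\Gamma} = \frac{1}{2}A_{\Gamma}$ (this mimics the FEM assembly); the interface is placed off-center so that $S_1 \ne S_2$:

```python
import numpy as np
import scipy.sparse.linalg as spla

A, (i1, ig, i2), B = poisson2d_blocks(n=15, m=5)    # helper from the sketch above
f = np.ones(A.shape[0]); f1, fg, f2 = f[i1], f[ig], f[i2]
lu1, lu2 = spla.splu(B["11"].tocsc()), spla.splu(B["22"].tocsc())
Agg = B["gg"].toarray()

# Equal splitting of the interface block (assumption): A_Gamma^(i) = A_Gamma / 2.
S1 = Agg / 2 - B["g1"].toarray() @ lu1.solve(B["1g"].toarray())
S2 = Agg / 2 - B["g2"].toarray() @ lu2.solve(B["2g"].toarray())
chi = fg - B["g1"].toarray() @ lu1.solve(f1) - B["g2"].toarray() @ lu2.solve(f2)

# Richardson iteration preconditioned by S2^{-1}: each step hides one Neumann-type
# solve in Omega_2 and Dirichlet solves inside the applications of S1 and S2.
lam, theta = np.zeros_like(chi), 0.5
for k in range(5):
    lam += theta * np.linalg.solve(S2, chi - (S1 + S2) @ lam)
    print(k, np.linalg.norm(chi - (S1 + S2) @ lam))   # residual drops geometrically
```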

Computational summary

We solve

$$S u_{\Gamma} = (S_1 + S_2) u_{\Gamma} = \chi$$

by an iterative method.

Multiplication by $S_1$ or $S_2$ requires the solution of linear systems with $A_{11}$ and $A_{22}$ (Dirichlet solvers).

Applying $S_1^{-1}$ and $S_2^{-1}$ requires the solution of Neumann problems.

Multiple domain case

In the case of multiple subdomains, a similar construction can be written down, especially for the case of strips.

The Schur complement will be the sum of $S_i$, where $S_i$ is the interface matrix of the form

$$S_i = N_i (A^{(i)}_{\Gamma} - A_{\Gamma i} A_{ii}^{-1} A_{i \Gamma}) N^{\top}_i.$$

Other ways to precondition the interface equation: non-symmetric Neumann-Dirichlet

Consider the matrix $B_u$ of the form

$$B_u = \begin{bmatrix} A_{11} & A_{1 \Gamma} & 0 \\ A_{\Gamma 1} & A^{(1)}_{\Gamma} & 0 \\ 0 & A_{2 \Gamma} & A_{22} \end{bmatrix} $$

This is a non-symmetric matrix; however, on the subspace

$$U = \{ x = (x_1, x_{\Gamma}, x_2): \; A_{2 \Gamma} x_{\Gamma} + A_{22} x_2 = 0 \}$$

it is symmetric positive definite and spectrally equivalent to $A$.

Therefore, if at each iteration step the error belongs to the subspace $U$ (which is the case), then $B_u$ can be an efficient preconditioner.
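A quick numerical check of this claim, continuing the example above (again with the assumed splitting $A^{(1)}_{\Gamma} = \frac{1}{2}A_{\Gamma}$): we parametrize the subspace $U$ by $(x_1, x_{\Gamma})$, setting $x_2 = -A_{22}^{-1}A_{2\Gamma}x_{\Gamma}$, and verify that the restriction of $B_u$ to $U$ is symmetric positive definite:

```python
import numpy as np
import scipy.sparse.linalg as spla

A, (i1, ig, i2), B = poisson2d_blocks(n=15, m=5)    # helper from the sketch above
n1, ng, n2 = len(i1), len(ig), len(i2)
A22, A2g = B["22"].tocsc(), B["2g"].toarray()

Bu = np.block([[B["11"].toarray(),  B["1g"].toarray(),     np.zeros((n1, n2))],
               [B["g1"].toarray(),  B["gg"].toarray() / 2, np.zeros((ng, n2))],
               [np.zeros((n2, n1)), A2g,                   A22.toarray()]])

# Columns of Z span U = {x : A_{2 Gamma} x_Gamma + A_{22} x_2 = 0}.
Z = np.zeros((n1 + ng + n2, n1 + ng))
Z[:n1, :n1] = np.eye(n1)
Z[n1:n1 + ng, n1:] = np.eye(ng)
Z[n1 + ng:, n1:] = -spla.splu(A22).solve(A2g)

Bu_U = Z.T @ Bu @ Z
print(np.linalg.norm(Bu_U - Bu_U.T))        # ~0 (round-off): symmetric on U
print(np.linalg.eigvalsh(Bu_U).min() > 0)   # True: positive definite on U
```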

Symmetric Neumann-Dirichlet

The symmetric variant has the form

$$B_s = \begin{bmatrix} A_{11} & A_{1 \Gamma} & 0 \\ A_{\Gamma 1} & A^{(1)}_{\Gamma}+ A_{\Gamma 2} A^{-1}_{22} A_{2 \Gamma} & A_{\Gamma 2} \\ 0 & A_{2 \Gamma} & A_{22} \end{bmatrix} $$

Solving a linear system with the matrix $B_s$ requires the solution of linear systems with the matrices $A_{22}$, $B_1$ (the Neumann problem in $\Omega_1$), and $A_{22}$ again.

Moreover, if $A = A^{\top}$, then $B_s$ is spectrally equivalent to the matrix $A$, i.e.,

$$C_{21} A \leq B_s \leq A$$

for a constant $C_{21} > 0$.

The main disadvantage of $B_s$ is that the subdomain problems have to be solved exactly.

Neumann-Neumann iterative method

For solving a linear system of the form

$$S u_{\Gamma} = \chi, \quad S = S_1 + S_2,$$

we can use $\frac{1}{2} S^{-1}_1 + \frac{1}{2} S^{-1}_2$ as a preconditioner.

Recall that $S^{-1}_1$ and $S^{-1}_2$ can be applied by solving Neumann boundary value problems.
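A small numerical illustration on the same 2D example (reusing `poisson2d_blocks` and the assumed splitting $A^{(i)}_{\Gamma} = \frac{1}{2}A_{\Gamma}$): the eigenvalues of the preconditioned Schur complement are tightly clustered, while $S$ itself is noticeably ill-conditioned:

```python
import numpy as np
import scipy.sparse.linalg as spla

A, (i1, ig, i2), B = poisson2d_blocks(n=15, m=5)    # helper from the sketch above
lu1, lu2 = spla.splu(B["11"].tocsc()), spla.splu(B["22"].tocsc())
Agg = B["gg"].toarray()
S1 = Agg / 2 - B["g1"].toarray() @ lu1.solve(B["1g"].toarray())
S2 = Agg / 2 - B["g2"].toarray() @ lu2.solve(B["2g"].toarray())
S = S1 + S2

# Neumann-Neumann preconditioner: each application is one Neumann solve per subdomain.
Bnn = 0.5 * np.linalg.inv(S1) + 0.5 * np.linalg.inv(S2)

ev = np.sort(np.linalg.eigvals(Bnn @ S).real)
print("cond(S) =", np.linalg.cond(S))
print("eigs of Bnn S in [%.3f, %.3f]" % (ev[0], ev[-1]))   # clustered near 2
```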

Multiple domain case

Neumann-Neumann method can be generalized to the multiple-domain case.

The preconditioner has the form

$$B_s = \sum_{i=1}^m N_i D_i S^{-1}_i D_i N^{\top}_i,$$

where $D_i$ is a diagonal matrix whose entries are the reciprocals of the number of subdomains sharing the corresponding node, and $N_i$ is the local assembly matrix

(so that $\sum_{i=1}^m N_i D_i N^{\top}_i = I$, i.e., the weights form a partition of unity).

The convergence estimate by Mandel gives

$$\mathrm{cond}(B_s S) \leq C\left(1 + \log \frac{d}{h} \right)\max \left\{\frac{C}{d^2}; \log \frac{d}{h} \right\},$$

where $d$ is the characteristic subdomain size and $h$ is the mesh size.

In practice, the dependence on the number of subdomains is weaker, and it can be removed by solving an additional coarse problem on the grid formed by the subdomain subdivision (a coarse-grid correction).

Overlapping domain decomposition (Schwarz method)

A much more robust (and scalable) variant can be obtained by splitting the domain into overlapping patches:

$$\Omega = \bigcup^m_{i=1} \Omega_i,$$

in such a way that for every $\Omega_i$ there exists $j \ne i$ such that

$$\Omega_i \cap \Omega_j \ne \emptyset.$$

Schwarz method

We successively solve

$$-\Delta u^{n + \frac{i}{m}} = f \quad \mbox{in } \Omega_i, \qquad u^{n + \frac{i}{m}} = u^{n + \frac{i-1}{m}} \quad \mbox{on } \overline{\Omega} \setminus \Omega_i, \qquad i = 1, \ldots, m.$$
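A minimal 1D sketch of this alternating process (not from the lecture): $-u'' = 1$ on $(0,1)$ with homogeneous Dirichlet conditions, two overlapping subdomains, and the Dirichlet data on each artificial boundary taken from the current iterate:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, h = 99, 1.0 / 100
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr") / h**2
f = np.ones(n)
u_exact = spla.spsolve(A, f)

idx1 = np.arange(0, 60)        # Omega_1: grid nodes 1..60
idx2 = np.arange(40, n)        # Omega_2: grid nodes 41..99 (overlap of 20 nodes)

u = np.zeros(n)
for it in range(8):
    for idx in (idx1, idx2):
        # Local Dirichlet solve with boundary data from the current iterate,
        # written in residual-correction form.
        A_loc = A[idx][:, idx]
        u[idx] += spla.spsolve(A_loc.tocsc(), (f - A @ u)[idx])
    print(it, np.linalg.norm(u - u_exact))   # geometric convergence, rate set by the overlap
```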

Multiplicative Schwarz method

Let $R_i$ be the restriction matrix onto the indices corresponding to the subdomain $\Omega_i$ (remember the subspace correction algorithm).

Then,

$$A_i = R_i A R^{\top}_i,$$

and one step can be written as

$$A_i z_i = R_i (f - Au), \quad u := u + R^{\top}_i z_i, \quad i = 1, \ldots, m.$$

Then the transition matrix has the form

$$e^{n+1} = (I - P_m)\ldots (I - P_1) e^n,$$

where $$P_i = R^{\top}_i A^{-1}_i R_i A.$$ The matrix $P_i$ is a projector (let us show that!).
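A quick numerical check of this fact (a sketch; the claim is that $P_i$ is a projector, and in fact self-adjoint in the $A$-inner product):

```python
import numpy as np
import scipy.sparse as sp

n = 50
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)).toarray()
idx = np.arange(10, 35)                       # indices of one (overlapping) subdomain
R = np.eye(n)[idx]                            # restriction matrix R_i

P = R.T @ np.linalg.solve(R @ A @ R.T, R @ A) # P_i = R_i^T A_i^{-1} R_i A
print(np.linalg.norm(P @ P - P))              # ~1e-14: P_i^2 = P_i
print(np.linalg.norm(A @ P - P.T @ A))        # ~1e-13: P_i is A-self-adjoint
```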

Additive Schwarz method

In the additive Schwarz method the multiplicative error transition matrix is replaced by its additive (first-order) approximation:

$$(I - P_m) \ldots (I - P_1) \approx I - P_1 - \ldots - P_m.$$

The main benefit is that it is embarrassingly parallel: the subproblems are solved independently, and then the information is exchanged through the subdomain boundaries.
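A sketch of a one-level additive Schwarz preconditioner used inside CG (1D Laplacian with hypothetical subdomain sizes; the local solves are independent and could run in parallel):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
f = np.ones(n)

size, overlap = 50, 10
subdomains = [np.arange(max(s - overlap, 0), min(s + size + overlap, n))
              for s in range(0, n, size)]
local_lu = [spla.splu(A[idx][:, idx].tocsc()) for idx in subdomains]

def additive_schwarz(r):
    """Apply sum_i R_i^T A_i^{-1} R_i to the residual r."""
    z = np.zeros_like(r)
    for idx, lu in zip(subdomains, local_lu):
        z[idx] += lu.solve(r[idx])
    return z

M = spla.LinearOperator(A.shape, matvec=additive_schwarz)
for prec in (None, M):
    its = []
    x, info = spla.cg(A, f, M=prec, callback=lambda xk: its.append(1))
    print("iterations:", len(its))    # far fewer iterations with the preconditioner
```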

Additive Schwarz method & Block Jacobi

The additive Schwarz method in the non-overlapping case is just the Block Jacobi method:

$$ u^{n+1} = u^n + R^\top_1 A_{11}^{-1} R_1(f-Au^n) + R^\top_2 A_{22}^{-1} R_2(f-Au^n) = u^n + \begin{bmatrix}A_{11} & 0 \\ 0 & A_{22}\end{bmatrix}^{-1} (f - A u^n) $$
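A small numerical check of this identity (a sketch with a random SPD matrix split into two non-overlapping blocks):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 4, 6
n = n1 + n2
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # a random SPD matrix
f, u = rng.standard_normal(n), rng.standard_normal(n)

R1, R2 = np.eye(n)[:n1], np.eye(n)[n1:]        # non-overlapping restrictions
A11, A22 = R1 @ A @ R1.T, R2 @ A @ R2.T
r = f - A @ u

lhs = u + R1.T @ np.linalg.solve(A11, R1 @ r) + R2.T @ np.linalg.solve(A22, R2 @ r)
rhs = u + np.linalg.solve(np.block([[A11, np.zeros((n1, n2))],
                                    [np.zeros((n2, n1)), A22]]), r)
print(np.linalg.norm(lhs - rhs))               # ~1e-15: the two expressions coincide
```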

Multiplicative Schwarz & Block Gauss-Seidel

One iteration of the multiplicative Schwarz method is equivalent to the Block Gauss-Seidel method: $$ u^{n+1} = u^n + \begin{bmatrix}A_{11} & 0 \\ A_{21} & A_{22}\end{bmatrix}^{-1} (f - A u^n) $$

Next lecture

  • Start the advanced topics (brief overview of wavelets, sparse grids, etc.)