The simplest integral equation we talked about reads
$$\int_{\partial \Omega} \frac{q(y)}{\Vert x - y \Vert} dy = f(x), \quad x \in \partial \Omega,$$where $\Omega$ is a certain domain in 3D.
The general form of a first-kind integral equation is
$$ \int_{\partial \Omega} q(y) G(x, y) dy = f(x), \quad x \in \partial \Omega, $$where $G(x, y)$ is a Green function of the PDE.
Suppose we want to find a function $G(x, y)$ such that
$$\Delta_x G(x, y) = \delta (x - y), \; G(x, y) \mid_{x \in \partial \Omega} = 0. $$If we find such a function, the solution of
$$\Delta u = f,$$is given by the explicit integral
$$u(x) = \int_{\Omega} G(x, y) f(y) dy.$$In a few cases, you can write down such a Green function, typically based on the method of reflections.
For Dirichlet boundary conditions, we can use not only a charge located at $y$, but also additional charges placed outside $\Omega$ so that the required boundary condition is satisfied.
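As a standard example (not derived in the lecture), for the Laplacian in the half-space $\{x_3 > 0\}$ with a Dirichlet condition on the plane $\{x_3 = 0\}$ a single mirror charge is enough:
$$ G(x, y) = -\frac{1}{4\pi \Vert x - y \Vert} + \frac{1}{4\pi \Vert x - y^* \Vert}, \qquad y^* = (y_1, y_2, -y_3), $$since $\Vert x - y \Vert = \Vert x - y^* \Vert$ for $x$ on the boundary plane, so $G$ vanishes there, while the mirror charge sits outside the domain.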
Now to discretization of the equation $$\int_{\partial \Omega} \frac{q(y)}{\Vert x - y \Vert} dy = f(x), \quad x \in \partial \Omega.$$
Steps of discretization:
Note that the basis functions can even be piecewise constant, in contrast to PDEs, where for the Finite Element Method they generally have to be at least piecewise linear (they can be piecewise constant for Discontinuous Galerkin, but that is more of an exception).
Suppose that $\mathrm{supp}(\phi_i) = T_i$ (i.e., local support). Then,
$$ \sum_{i=1}^N q_i \int_{T_i} \frac{\phi_i(y)}{\Vert x - y \Vert} dy = f(x).$$This has to hold for all $x$, i.e. it gives an infinite number of equations. There are two ways of dealing with this situation:
The elements $T_i$ are elements of a mesh $\mathcal{T}$. Usually they are triangles, but other configurations can be used.
The Galerkin method takes the equation $$ \sum_{i=1}^N q_i \int_{T_i} \frac{\phi_i(y)}{\Vert x - y \Vert} dy = f(x),$$ multiplies it by $\phi_j(x)$ and integrates over $T_j$. Then we get
$$ \sum_{i=1}^N q_i \int_{T_j} \int_{T_i} \frac{\phi_j(x) \phi_i(y)}{\Vert x - y \Vert} dx dy = \int_{T_j} f(x) \phi_j(x) dx. $$This is a square system of linear equations of the form
$$
A q = f,
$$
where
$$
A = [a_{ji}], \; a_{ji} = \int_{T_j} \int_{T_i} \frac{\phi_j(x) \phi_i(y)}{\Vert x - y \Vert} dx dy, \quad
f_j = \int_{T_j} f(x) \phi_j(x) dx
$$
Everything is fine: the matrix is symmetric (it can also be shown to be positive definite; a short argument is sketched below), but the evaluation of each element requires a 4D integral. That is the main computational problem, so engineers often use the much simpler collocation method.
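A sketch of the argument (using the fact that the 3D Fourier transform of $1/\Vert x \Vert$ is $4\pi/\Vert s \Vert^2$): for $q(x) = \sum_i q_i \phi_i(x)$,
$$ q^{\top} A q = \int \int \frac{q(x) q(y)}{\Vert x - y \Vert} dx\, dy = \frac{1}{(2\pi)^3} \int \frac{4\pi}{\Vert s \Vert^2} |\widehat{q}(s)|^2 ds \geq 0, $$with equality only for $q = 0$, hence $A$ is positive definite.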
The collocation method consists of requiring the equation $$ \sum_{i=1}^N q_i \int_{T_i} \frac{\phi_i(y)}{\Vert x - y \Vert} dy = f(x),$$ to hold at some collocation points $x_j$.
Consider the piecewise-constant case, i.e. $\phi_i(y) = 1$ on $T_i$ and $0$ otherwise. Then the most natural choice of collocation points is the centers of the triangles, and we again get a square linear system
$$A q = f,$$where
$$ A = [a_{ji}], \; a_{ji} = \int_{T_i} \frac{dy}{\Vert x_j - y \Vert}, \quad f_j = f(x_j), $$i.e. $a_{ji}$ is the potential of a uniformly charged triangle $T_i$ evaluated at the point $x_j$.
Note that this matrix is non-symmetric, so you cannot use solvers that exploit symmetry, but the integrals are now only over 2D domains.
Also, if the number of collocation points is larger than the number of triangles $T_i$, then we have to solve a least squares problem, because the linear system becomes rectangular and overdetermined.
The Nystrom method is just another simplification of the collocation method (the collocation method itself can be viewed as applying a quadrature rule to the Galerkin method).
We just approximate
$$ a_{ji} = \int_{T_i} \frac{dy}{\Vert x_j - y \Vert} \approx \frac{S_i}{\Vert x_j - y_i \Vert}, $$where $S_i$ is the area of $T_i$ and $y_i$ is its center. If the source triangle $T_i$ is sufficiently far from the receiver $x_j$, this can indeed be a very good approximation.
If $x_j$ lies inside or close to the triangle $T_i$ (e.g., for the diagonal entries), the approximation can be really bad.
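A minimal sketch of assembling this Nystrom matrix in NumPy (an illustration only; the input format, an array of triangle vertex coordinates, is a hypothetical choice not fixed by the lecture):

import numpy as np

def nystrom_matrix(tri_vertices):
    # tri_vertices: array of shape (N, 3, 3) with the three 3D vertices of each
    # of the N surface triangles (hypothetical input format for this sketch)
    centers = tri_vertices.mean(axis=1)                      # centroids y_i (= collocation points x_j)
    e1 = tri_vertices[:, 1] - tri_vertices[:, 0]
    e2 = tri_vertices[:, 2] - tri_vertices[:, 0]
    areas = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)   # triangle areas S_i
    dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)   # the diagonal needs a proper singular quadrature instead
    return areas[None, :] / dist     # a_{ji} ~ S_i / ||x_j - y_i||

The diagonal (and near-diagonal) entries are exactly where the singular quadratures discussed next are needed.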
Typically, a locally corrected Nystrom is used: much more advanced quadratures are used in the close zone, where the singularity of the integrals appears. The concept is as follows:
Far-close splitting is the key to fast methods!
The main art in Galerkin/collocation methods is the computation of singular integrals, which cannot be handled by standard Gauss quadratures in the close zone.
There are several ways:
We can actually compute the anti-derivative of the integrand $\frac{1}{\sqrt{x^2 + y^2}}$ by taking successive integrals:
\begin{equation} \begin{split} F(x, y) = \int \int \frac{1}{\sqrt{x^2 + y^2}} dx dy = \\ -y + y \log(x + \sqrt{x^2 + y^2}) + x \log(y + \sqrt{x^2 + y^2}), \end{split} \end{equation}and the integral over a rectangle $[x_0, x_1] \times [y_0, y_1]$ is then just $$ I(x_0, y_0, x_1, y_1) = F(x_0, y_0) + F(x_1, y_1) - F(x_0, y_1) - F(x_1, y_0), $$
(generalization of the Newton-Leibniz formula)
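A quick numerical check of this formula (a sketch; the rectangle $[1, 2] \times [0.5, 1.5]$ and the use of scipy.integrate.dblquad as a reference are arbitrary choices):

import numpy as np
from scipy.integrate import dblquad

def F(x, y):
    # anti-derivative of 1/sqrt(x^2 + y^2) from the formula above
    r = np.sqrt(x**2 + y**2)
    return -y + y * np.log(x + r) + x * np.log(y + r)

def I(x0, y0, x1, y1):
    # Newton-Leibniz-type combination of the corner values
    return F(x0, y0) + F(x1, y1) - F(x0, y1) - F(x1, y0)

x0, x1, y0, y1 = 1.0, 2.0, 0.5, 1.5
ref, _ = dblquad(lambda y, x: 1.0 / np.sqrt(x**2 + y**2), x0, x1, lambda x: y0, lambda x: y1)
print(I(x0, y0, x1, y1), ref)   # the two values agree to quadrature accuracy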
You can be a little bit smarter: approximate $1/r$ by something that can be well-integrated.
By a sum of separable functions!
$$\frac{1}{\Vert x - y\Vert } \approx \sum_{\alpha=1}^r u_{\alpha}(x) u_{\alpha}(y)$$We have used the symmetry.
Such a representation is obviously useful, since in the Galerkin formula we can replace the 4D integrals by products of 2D integrals,
$$ \int \int \phi_i(x) \phi_j(y) u_{\alpha}(x) u_{\alpha}(y) dx dy = \int \phi_i(x) u_{\alpha}(x) dx \int \phi_j(y) u_{\alpha}(y) dy. $$But how to compute these functions $u_{\alpha}$?
We have reduced the problem to that of creating a good quadrature rule for the integral
$$\int^{\infty}_0 \frac{e^{-px}}{\sqrt{p}} dp = \sqrt{\frac{\pi}{x}},$$so a quadrature with nodes $p_k$ and weights $w_k$, applied at $x = \Vert x - y \Vert^2$, approximates $1/\Vert x - y \Vert$ by a sum of Gaussians. You have to do the substitution $p = e^t$ (why do we do so?), which gives the new integral
$$ \int^{\infty}_{-\infty} e^{-e^{t} x + t/2} dt, $$which is then approximated by the trapezoidal rule with nodes $t_k = a_t + k h_t$, $h_t = (b_t - a_t)/K$.
The integrand decays doubly exponentially as $t \to +\infty$ (and exponentially as $t \to -\infty$), and its Fourier transform also decays exponentially fast.
Thus, the trapezoidal rule converges exponentially in this setting.
A short demo...
In [15]:
import numpy as np
import math
import matplotlib.pyplot as plt
%matplotlib inline
# Trapezoidal rule in t for (1/sqrt(pi)) * int exp(-exp(t)*x + t/2) dt ~ 1/sqrt(x)
a = -30
b = 30
n_list = [10, 100, 150]
plt.figure(figsize=(10, 8))
for n in n_list:
    t = np.linspace(a, b, n)
    h = t[1] - t[0]
    # trapezoidal weights in t, times the factor exp(t/2)/sqrt(pi) from the substitution p = exp(t)
    w = np.ones(n) * h
    w[0] = h / 2
    w[n - 1] = h / 2
    w = w * np.exp(0.5 * t) / np.sqrt(math.pi)
    ps = np.exp(t)
    # compare the resulting sum of exponentials with 1/sqrt(x) on (0, 1]
    x = np.linspace(0.01, 1)
    fun = 1.0 / np.sqrt(x)
    appr = 0 * x
    for i in range(n):
        appr = appr + w[i] * np.exp(-x * ps[i])
    plt.plot(x, np.abs(appr - fun), label="$n$ = {}".format(n))
plt.yscale('log')
plt.ylabel('Error', fontsize=24)
plt.xlabel("$x$", fontsize=24)
plt.legend(fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.tight_layout()
Exponential sums work well for integration over rectangles, but the most typical case is integration over triangles.
Consider the standard triangle $$ T = \{ (x, y) \; | \; 0 \leq x \leq 1, \; 0 \leq y \leq x\}. $$
Consider the integral $$\int_{T} h dS,$$
which can be written as
$$ \int^{1}_{0} dx \int^x_0 h(x, y)dy. $$
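A minimal sketch of a quadrature built from this iterated form (assumption: the substitution $y = xu$ maps $T$ to the unit square with Jacobian $x$, and a tensor-product Gauss-Legendre rule is used on the square):

import numpy as np

# int_T h dS = int_0^1 int_0^1 h(x, x*u) * x du dx after the substitution y = x*u
nodes, weights = np.polynomial.legendre.leggauss(10)
t = 0.5 * (nodes + 1.0)               # Gauss-Legendre nodes mapped to [0, 1]
w = 0.5 * weights                     # corresponding weights
X, U = np.meshgrid(t, t, indexing='ij')
WX, WU = np.meshgrid(w, w, indexing='ij')
xq, yq = X.ravel(), (X * U).ravel()   # 2D nodes, all inside the triangle
wq = (WX * WU * X).ravel()            # weights include the Jacobian x

h = lambda x, y: np.exp(x + y)        # a smooth test integrand
print(np.sum(wq * h(xq, yq)))         # exact value is (e - 1)**2 / 2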
Suppose now that we have to compute $$f(x) = \int G(x-y) \rho (y) dy, $$ where $G(x-y)$ is a spherically symmetric Green function of some differential operator, that is $$G(r) = G(|r|).$$
Idea: use the Fourier transform $$\widehat{(f \ast g)} = {\hat f} {\hat g}$$
This suggests that we can compute $f(x)$ as $$f(x) = \Big(\frac{1}{2\pi}\Big)^3 \int e^{i s \cdot x} \widehat{\rho}(s) \widehat{G}(s) ds$$
However, there is a problem: $\widehat{G}(s)$ is typically very singular, e.g. for the Helmholtz differential operator $\Delta + k^2$ $$\widehat{G}(s) = \frac{1}{k^2 - \Vert s \Vert^2},$$ so there is a whole sphere of singular points. How can we avoid it?
We can use the following simple observation. Suppose that $\rho(x)$ is $0$ outside of the cube $[0, 1]^3$. Then, in this cube we have the following identity:
$$ \int G(x-y) \rho (y) dy \equiv \int G_L(x-y) \rho (y) dy, $$with $$ G_L(r) = G(r) \chi_{[0, L]}(|r|), \quad L = \sqrt{3} $$
where $\chi_D(x)$ is the characteristic function of a set $D$; the identity holds because in the cube $[0, 1]^3$ the distance between any two points does not exceed $\sqrt{3}$.
So we can use $G_L$ instead of $G$: $G_L$ has compact support and its Fourier transform is smooth!
E.g. for Helmholtz differential operator:
In [16]:
import numpy as np
import math
import matplotlib.pyplot as plt
%matplotlib inline
def G_L(x, k, L):
    # radial Fourier transform of the truncated Helmholtz kernel G_L
    # (smooth: the singularities at x = +-k are removable)
    return (-1 + np.exp(1.0j * L * k) * (np.cos(L * x) - 1.0j * k/x * np.sin(L * x)))/(k**2 - x**2)

x = np.linspace(-20., 20., 1000)
plt.plot(x, np.real(G_L(x, 4.0, np.sqrt(3))), label='Re $G_L$')
plt.plot(x, np.imag(G_L(x, 4.0, np.sqrt(3))), label='Im $G_L$')
plt.legend()
plt.legend()
Then we have a linear $n \times m$ system for the weights, which is underdetermined (there are $m$ unknown weights and only $n$ equations, one per function $f_i$).
Specifically,
$$\sum_{j=1}^m f_i(x_j) \sqrt{w_j} v_j = b_i, \quad i = 1, \ldots, n.$$Since the quadrature formula integrates the products $f_i f_k$ exactly, the matrix has singular values $0$ or $1$ (prove!).
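A possible argument (a sketch, assuming the $f_i$ are orthonormal and the fine rule $(x_j, w_j)$ integrates the products $f_i f_k$ exactly): the matrix $M = [f_i(x_j)\sqrt{w_j}]$ satisfies
$$ (M M^{\top})_{ik} = \sum_{j=1}^m f_i(x_j) f_k(x_j) w_j = \int f_i(x) f_k(x) dx = \delta_{ik}, $$so all nonzero singular values of $M$ equal $1$.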
We can also set most of the solution coefficients to zero, leaving only $n$ non-zero entries.
If we denote their positions by $i_1, \ldots, i_n$, the final quadrature reads
$$\int^1_0 f(x) dx \approx \sum_{k=1}^n f(x_{i_k}) z_{k} \sqrt{w_{i_k}}.$$
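A minimal sketch of this node selection (an illustration with hypothetical choices: the $f_i$ are monomials on $[0, 1]$, the fine rule is a midpoint rule, and the $n$ retained nodes are picked by a column-pivoted QR; the lecture does not prescribe this particular selection):

import numpy as np
from scipy.linalg import qr, solve

n, m = 10, 200
x = (np.arange(m) + 0.5) / m                 # fine midpoint rule on [0, 1]
w = np.full(m, 1.0 / m)
# M[i, j] = f_i(x_j) * sqrt(w_j) with monomials f_i(x) = x^i
M = np.vander(x, n, increasing=True).T * np.sqrt(w)
b = 1.0 / (np.arange(n) + 1)                 # exact integrals of x^i over [0, 1]

_, _, piv = qr(M, pivoting=True)             # pick n "important" columns (nodes)
idx = np.sort(piv[:n])
z = solve(M[:, idx], b)                      # weights z_k of the reduced rule

# test the reduced rule sum_k f(x_{i_k}) z_k sqrt(w_{i_k}) on f(x) = exp(x)
print(np.sum(np.exp(x[idx]) * z * np.sqrt(w[idx])), np.e - 1)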