The vector Laplacian is
$$ \nabla^2 \vec{u} = \nabla \cdot \nabla \vec{u} $$
A vector identity gives the vector Laplacian as
$$ \nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u} $$
We will check whether this identity holds for shenfun, using both cylindrical and spherical coordinates.
For reference, the componentwise form of the vector Laplacian in these coordinate systems can be found in most vector-calculus references.
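Before turning to shenfun, the identity can be sanity-checked in plain Cartesian coordinates with sympy.vector, where the vector Laplacian is simply the componentwise Laplacian. The particular vector field below is an arbitrary choice for illustration.

```python
import sympy as sp
from sympy.vector import CoordSys3D, Vector, divergence, gradient, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# An arbitrary smooth vector field (hypothetical choice for illustration)
u = x**2*y*N.i + sp.sin(x)*z*N.j + y*sp.exp(z)*N.k

def vector_laplacian(v):
    # Componentwise Laplacian -- valid in Cartesian coordinates only
    out = Vector.zero
    for b in (N.i, N.j, N.k):
        c = v.dot(b)
        out += (sp.diff(c, x, 2) + sp.diff(c, y, 2) + sp.diff(c, z, 2))*b
    return out

# Right-hand side of the identity
rhs = gradient(divergence(u)) - curl(curl(u))

# Verify the identity component by component
ok = all(sp.simplify((vector_laplacian(u) - rhs).dot(b)) == 0
         for b in (N.i, N.j, N.k))
print(ok)  # True
```

In curvilinear coordinates the componentwise shortcut no longer works, which is where shenfun's symbolic machinery comes in.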
Cylindrical coordinates are mapped to Cartesian coordinates through
$$ \begin{align*} x &= r \cos \theta \\ y &= r \sin \theta \\ z &= z \end{align*} $$
and we use the domain $(r, \theta, z) \in [0, 1] \times [0, 2 \pi] \times [0, 2 \pi]$.
Spherical coordinates are mapped as
$$ \begin{align*} x &= r \sin(\theta) \cos(\phi)\\ y &= r \sin(\theta) \sin(\phi)\\ z &= r \cos(\theta) \end{align*} $$
for the domain $(r, \theta, \phi) \in [0, 1] \times [0, \pi] \times [0, 2 \pi]$.
This is all we need to know for using these coordinate systems with shenfun.
In [1]:
from shenfun import *
from IPython.display import Math
import sympy as sp
r, theta, z = psi = sp.symbols('x,y,z', real=True, positive=True)
rv = (r*sp.cos(theta), r*sp.sin(theta), z)
N = 10
F0 = FunctionSpace(N, 'F', dtype='d')
F1 = FunctionSpace(N, 'F', dtype='D')
L = FunctionSpace(N, 'L', domain=(0, 1))
T = TensorProductSpace(comm, (L, F1, F0), coordinates=(psi, rv))
V = VectorTensorProductSpace(T)
u = TrialFunction(V)
In [2]:
du = div(u)
Math(du.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'}))
Out[2]:
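For comparison, the divergence in general curvilinear coordinates can be written compactly as $\nabla \cdot \vec{u} = \frac{1}{\sqrt{g}} \frac{\partial (\sqrt{g}\, u^i)}{\partial \xi^i}$, with $\sqrt{g} = r$ for the cylindrical mapping. A small sympy sketch, using hypothetical component names for the contravariant components, evaluates this formula directly:

```python
import sympy as sp

r, theta, z = sp.symbols('r,theta,z', real=True, positive=True)

# Hypothetical contravariant components u^r, u^theta, u^z
ur, ut, uz = (sp.Function(n)(r, theta, z) for n in ('ur', 'ut', 'uz'))

sg = r  # sqrt(g) = r for the cylindrical mapping
div_u = sp.simplify((sp.diff(sg*ur, r)
                     + sp.diff(sg*ut, theta)
                     + sp.diff(sg*uz, z))/sg)
print(div_u)
```

Expanding the derivative of the first term gives the familiar $\partial u^r/\partial r + u^r/r + \partial u^{\theta}/\partial \theta + \partial u^z/\partial z$.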
In [3]:
du.tosympy(basis=(r*sp.cos(theta), sp.sin(theta), z), psi=psi)
Out[3]:
The vector Laplacian can now be found as
In [4]:
du = div(grad(u))
We can look at du using the following
In [5]:
Math((du).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'}))
Out[5]:
Note that the basis vectors $\mathbf{b}_i$ are not unit vectors (i.e., not of length 1). For this reason the equation does not look exactly like the componentwise form usually tabulated in references, which assumes unit basis vectors. The basis vectors are
In [6]:
Math(T.coors.latex_basis_vectors(covariant=True, symbol_names={r: 'r', theta: '\\theta', z: 'z'}))
Out[6]:
Notice that $|\mathbf{b}_{\theta}|=r$. Shenfun uses non-normalized covariant basis vectors to describe all vectors and higher-order tensors. The vector components are contravariant and therefore use a superscript, $u^{\theta}$, and not a subscript, $u_{\theta}$. Note that for orthogonal coordinates the normalized contravariant and covariant basis vectors coincide, so this distinction is not strictly necessary here; it only matters for non-orthogonal coordinate systems. Shenfun can handle both orthogonal and non-orthogonal coordinates, but requires that the equations to be solved are separable.
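The statement $|\mathbf{b}_{\theta}|=r$ can be checked directly from the mapping, since the covariant basis vectors are $\mathbf{b}_i = \partial \vec{r}/\partial \xi^i$. A short sympy sketch for the cylindrical mapping:

```python
import sympy as sp

r, theta, z = sp.symbols('r,theta,z', real=True, positive=True)

# Position vector of the cylindrical mapping
rv = sp.Matrix([r*sp.cos(theta), r*sp.sin(theta), z])

# Covariant basis vectors b_i = d(rv)/d(xi^i) and their lengths
b = [rv.diff(s) for s in (r, theta, z)]
lengths = [sp.simplify(bi.norm()) for bi in b]
print(lengths)  # [1, r, 1]
```

Only $\mathbf{b}_{\theta}$ deviates from unit length, which is exactly the scale factor that distinguishes shenfun's output from the textbook form.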
Now check the vector identity
$$ \nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u} $$
In [7]:
dv = grad(div(u)) - curl(curl(u))
dv.simplify()
Math((dv).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'}))
Out[7]:
We see that the order of the terms is different, but the vector is actually identical to the previous one (du). To show that they are equal, we can subtract one from the other and simplify.
In [8]:
dw = du-dv
dw.simplify()
Math(dw.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', z: 'z'}))
Out[8]:
If you are not convinced, we can assemble some matrices and check that du and dv behave the same way.
In [9]:
v = TestFunction(V)
A0 = inner(v, du)
A1 = inner(v, dv)
A0 and A1 now contain lists of tensor product matrices, because the vector identities contain many different terms (as we have seen above). To check that A0 and A1 are identical, we test the matrix-vector product with a random vector. Since we are working with vectors, we use a BlockMatrix for the combined tensor product matrices.
In [10]:
u_hat = Function(V)
u_hat[:] = np.random.random(u_hat.shape) + np.random.random(u_hat.shape)*1j
a0 = BlockMatrix(A0)
a1 = BlockMatrix(A1)
b0 = Function(V)
b1 = Function(V)
b0 = a0.matvec(u_hat, b0)
b1 = a1.matvec(u_hat, b1)
print('Error ', np.linalg.norm(b0-b1))
In [11]:
r, theta, phi = psi = sp.symbols('x,y,z', real=True, positive=True)
rv = (r*sp.sin(theta)*sp.cos(phi), r*sp.sin(theta)*sp.sin(phi), r*sp.cos(theta))
N = 6
F = FunctionSpace(N, 'F', dtype='d')
L0 = FunctionSpace(N, 'L', domain=(0, 1))
L1 = FunctionSpace(N, 'L', domain=(0, np.pi))
T = TensorProductSpace(comm, (L0, L1, F), coordinates=(psi, rv, sp.Q.positive(sp.sin(theta))))
V = VectorTensorProductSpace(T)
u = TrialFunction(V)
du = div(grad(u))
dv = grad(div(u)) - curl(curl(u))
dv.simplify()
dw = du-dv
dw.simplify()
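The extra argument sp.Q.positive(sp.sin(theta)) passed to TensorProductSpace is an assumption that, presumably, helps sympy simplify terms involving $\sqrt{g}$, since the Jacobian determinant of the spherical mapping is $r^2 \sin \theta$ and is only positive when $\sin \theta \geq 0$. A quick sympy check of that determinant:

```python
import sympy as sp

r, theta, phi = sp.symbols('r,theta,phi', real=True, positive=True)

# Position vector of the spherical mapping
rv = sp.Matrix([r*sp.sin(theta)*sp.cos(phi),
                r*sp.sin(theta)*sp.sin(phi),
                r*sp.cos(theta)])

# Jacobian determinant of the mapping, equal to sqrt(g)
sg = sp.simplify(rv.jacobian([r, theta, phi]).det())
print(sg)
```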
In [12]:
Math(dw.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'}))
Out[12]:
This confirms that the vector identity $\nabla^2 \vec{u} = \nabla \nabla \cdot \vec{u} - \nabla \times \nabla \times \vec{u}$ also holds in shenfun for spherical coordinates.