Hyperbolic PDEs

Most formulations of the Einstein equations for the spacetime metric (with $c=1$) look roughly like wave equations

$$ \frac{\partial^2 \phi}{\partial t^2} = \nabla^2 \phi. $$

We will focus on the simple $1+1$d case

$$ \frac{\partial^2 \phi}{\partial t^2} = \frac{\partial^2 \phi}{\partial x^2}. $$

For numerical evolution we either write this as first order in time,

$$ \frac{\partial}{\partial t} \begin{pmatrix} \phi \\ \phi_t \end{pmatrix} = \begin{pmatrix} \phi_t \\ 0 \end{pmatrix} + \frac{\partial^2}{\partial x^2} \begin{pmatrix} 0 \\ \phi \end{pmatrix}, $$

or as first order in time and space

$$ \frac{\partial}{\partial t} \begin{pmatrix} \phi \\ \phi_t \\ \phi_x \end{pmatrix} = \begin{pmatrix} \phi_t \\ 0 \\ 0 \end{pmatrix} + \frac{\partial}{\partial x} \begin{pmatrix} 0 \\ \phi_x \\ \phi_t \end{pmatrix}. $$

We will focus on the fully first-order form, written as

$$ \partial_t {\bf u} = {\bf s} + \partial_x {\bf f}({\bf u}). $$

Method of Lines

We have already used our finite difference approximations to replace partial derivatives (usually in space) with discrete approximations. We could use finite difference approximations directly here, replacing both time and space derivatives. However, an alternative approach is standard in Numerical Relativity.

First we put down a grid in space $\{ x_i \}$ for which we have values $\{ {\bf u}_i \}$. All these values can be thought of as one large vector ${\bf U}$. Now, on this grid we can use our finite difference approximation to replace the partial derivatives in space, which we write in discrete operator form as

$$ \partial_x {\bf f}({\bf u}) \to L({\bf U}). $$

The operator takes the discrete values $\{ {\bf u}_i \}$ and combines them, using the finite differencing formulas, to approximate the partial derivative required.

This means we have converted the original partial differential equation to the ordinary differential equation

$$ \frac{d}{d t} {\bf U} = {\bf s}({\bf U}) + L({\bf U}). $$

We can then solve this ODE as an initial value problem, exactly as we did for the horizon finder case.
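The whole MoL pipeline can be seen in miniature on the simpler advection equation $\partial_t u = -\partial_x u$. This is an illustrative sketch only, separate from the wave-equation code below; the grid and timestep choices here are arbitrary:

```python
import numpy

# MoL sketch: the advection equation du/dt = -du/dx on a periodic grid
# becomes one large ODE system dU/dt = L(U), stepped here with the
# midpoint (RK2) method.

Npoints = 200
dx = 2.0 / Npoints
x = numpy.linspace(-1.0, 1.0, Npoints, endpoint=False)

def L(U):
    # Second-order central difference; numpy.roll gives periodic wraparound.
    return -(numpy.roll(U, -1) - numpy.roll(U, 1)) / (2.0 * dx)

U0 = numpy.exp(-20.0 * x**2)
U = U0.copy()
dt = 0.25 * dx
for n in range(int(round(2.0 / dt))):
    Umid = U + 0.5 * dt * L(U)
    U = U + dt * L(Umid)

# After t = 2 the pulse has advected once around the periodic domain,
# so U should be close to U0, up to discretization error.
```

The only PDE-specific ingredient is the operator `L`; everything after that is a standard ODE initial value problem.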

(Dis)advantages

The Method of Lines (MoL) allows minimally coupled physical systems, such as GRMHD, to be split into separate pieces. Each piece can be tested independently, treated with a wider variety of numerical methods, and have its stability checked more straightforwardly.

However, methods built on MoL typically cannot exploit all the physical information in the system, and may require smaller timesteps, be less efficient, or be less accurate. Before worrying about this too much, check whether your time (spent implementing a more efficient method) is worth less than the computer's (spent doing the extra computation).

Implementation


In [ ]:
import numpy
from matplotlib import pyplot
%matplotlib notebook

We start by implementing the right-hand-side of the evolution: the source term, and the term corresponding to the partial derivative in space:


In [ ]:
def RHS(U, dx):
    """
    RHS term.
    
    Parameters
    ----------
    
    U : array
        contains [phi, phi_t, phi_x] at each point
    dx : double
        grid spacing
        
    Returns
    -------
    
    dUdt : array
        contains the required time derivatives
    """
    
    phi = U[0, :]
    phi_t = U[1, :]
    phi_x = U[2, :]
    
    dUdt = numpy.zeros_like(U)
    
    dUdt[0, :] = phi_t
    dUdt[1, 1:-1] = 1.0 / (2.0*dx)*(phi_x[2:] - phi_x[:-2])
    dUdt[2, 1:-1] = 1.0 / (2.0*dx)*(phi_t[2:] - phi_t[:-2])
    
    return dUdt

We see that this doesn't give us the update term at the edges of the domain. We'll enforce that the domain is periodic as a simple boundary condition. Usually this would be an outgoing wave type boundary condition, but anything that fixes the update term at the boundary is fine.


In [ ]:
def apply_boundaries(dUdt):
    """
    Periodic boundaries
    """
    
    dUdt[:, 0] = dUdt[:, -2]
    dUdt[:, -1] = dUdt[:, 1]
    
    return dUdt
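The outgoing-wave alternative mentioned above could be sketched as follows. This is a hypothetical alternative, not used in this notebook: at each boundary we advect every variable out of the domain with unit speed, using one-sided differences.

```python
import numpy

def apply_outgoing_boundaries(dUdt, U, dx):
    """
    Sketch of a simple Sommerfeld (outgoing-wave) condition, as an
    alternative to the periodic condition actually used here.
    """
    # Left boundary: outgoing waves move in the -x direction, so impose
    # du/dt = +du/dx, with a one-sided (forward) difference.
    dUdt[:, 0] = (U[:, 1] - U[:, 0]) / dx
    # Right boundary: outgoing waves move in the +x direction, so impose
    # du/dt = -du/dx, with a one-sided (backward) difference.
    dUdt[:, -1] = -(U[:, -1] - U[:, -2]) / dx
    return dUdt
```

Note the changed signature: unlike the periodic version, this condition needs the state `U` and the grid spacing, not just the update term.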

Then we fix the grid. To work with the periodic domain we need to stagger the grid away from the boundaries. We'll fix the domain to be $x \in [-1, 1]$:


In [ ]:
def grid(Npoints):
    """
    Npoints is the number of interior points
    """
    
    dx = 2.0 / Npoints
    return dx, numpy.linspace(-1.0-dx/2.0, 1.0+dx/2.0, Npoints+2)

We take, with some changes of notation, the RK2 method from earlier. This function takes a single timestep.


In [ ]:
def RK2_step(U, RHS, apply_boundaries, dt, dx):
    """
    RK2 method
    """

    rhs = RHS(U, dx)
    rhs = apply_boundaries(rhs)
    Up = U + dt * rhs
    rhs_p = RHS(Up, dx)
    rhs_p = apply_boundaries(rhs_p)
    Unew = 0.5 * (U + Up + dt * rhs_p)
        
    return Unew

There are only two things left to fix. One is the timestep, which for now we set to $\Delta t = \Delta x / 4$. The other is the initial data, which we choose to be a time-symmetric Gaussian,

$$ \phi(0, x) = \exp \left( -20 x^2 \right), \qquad \partial_t \phi (0, x) = 0, $$

which implies

$$ \partial_x \phi(0, x) = -40 x \exp \left( -20 x^2 \right). $$

In [ ]:
def initial_data(x):
    """
    Set the initial data. x are the coordinates. U (phi, phi_t, phi_x) are the variables.
    """
    
    U = numpy.zeros((3, len(x)))
    U[0, :] = numpy.exp(-20.0 * x**2)
    U[2, :] = -40.0*x*numpy.exp(-20.0 * x**2)
    
    return U

Now we can evolve:


In [ ]:
Npoints = 50
dx, x = grid(Npoints)
dt = dx / 4
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(1.0 / dt)
for n in range(Nsteps):
    U = RK2_step(U, RHS, apply_boundaries, dt, dx)

In [ ]:
pyplot.figure()
pyplot.plot(x, U0[0, :], 'b--', label="Initial data")
pyplot.plot(x, U[0, :], 'k-', label=r"$t=1$")
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\phi$")
pyplot.xlim(-1, 1)
pyplot.legend()
pyplot.show()

We can see the expected behaviour: the initial data splits into two pulses that propagate in opposite directions. With periodic boundary conditions, we can evolve to $t=2$ and we should get the initial data back again:


In [ ]:
Npoints = 50
dx, x = grid(Npoints)
dt = dx / 4
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0 / dt)
for n in range(Nsteps):
    U = RK2_step(U, RHS, apply_boundaries, dt, dx)
    
pyplot.figure()
pyplot.plot(x, U0[0, :], 'b--', label="Initial data")
pyplot.plot(x, U[0, :], 'k-', label=r"$t=2$")
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\phi$")
pyplot.xlim(-1, 1)
pyplot.legend()
pyplot.show()

Animations

These are only to be done if there's enough time. Ensure that ffmpeg is installed first.


In [ ]:
from matplotlib import animation
import matplotlib
matplotlib.rcParams['animation.html'] = 'html5'

In [ ]:
Npoints = 50
dx, x = grid(Npoints)
dt = dx / 4
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0 / dt)
Uframes = numpy.zeros((Nsteps+1,3,Npoints+2))
Uframes[0,:,:] = U0
for n in range(Nsteps):
    U = RK2_step(U, RHS, apply_boundaries, dt, dx)
    Uframes[n+1,:,:] = U
    
fig = pyplot.figure(figsize=(8,5))
ax = pyplot.axes(xlim=(-1,1),ylim=(-0.2, 1.2))
line, = ax.plot([], [])
pyplot.close()

def init():
    ax.set_xlabel(r"$x$")
    ax.set_ylabel(r"$\phi$")
    return (line,)

def update(i):
    line.set_data(x, Uframes[i, 0, :])
    return (line,)

animation.FuncAnimation(fig, update, init_func=init, frames=Nsteps, interval=100, blit=True)

Convergence

We can now check convergence by taking the norm of the difference between the initial data and the solution at $t=2$:


In [ ]:
def error_norms(U, U_initial):
    """
    Error norms (1, 2, infinity)
    """
    
    N = len(U)
    error_1 = numpy.sum(numpy.abs(U - U_initial))/N
    error_2 = numpy.sqrt(numpy.sum((U - U_initial)**2)/N)
    error_inf = numpy.max(numpy.abs(U - U_initial))
    
    return error_1, error_2, error_inf

In [ ]:
Npoints_all = 50 * 2**(numpy.arange(0, 6))

dxs = numpy.zeros((len(Npoints_all,)))
wave_errors = numpy.zeros((3, len(Npoints_all)))

for i, Npoints in enumerate(Npoints_all):
    dx, x = grid(Npoints)
    dt = dx / 4
    U0 = initial_data(x)
    U = initial_data(x)
    Nsteps = int(2.0 / dt)
    for n in range(Nsteps):
        U = RK2_step(U, RHS, apply_boundaries, dt, dx)
    
    dxs[i] = dx
    wave_errors[:, i] = error_norms(U[0, :], U0[0, :])

In [ ]:
pyplot.figure()
pyplot.loglog(dxs, wave_errors[0, :], 'bx', label=r"${\cal E}_1$")
pyplot.loglog(dxs, wave_errors[1, :], 'go', label=r"${\cal E}_2$")
pyplot.loglog(dxs, wave_errors[2, :], 'r+', label=r"${\cal E}_{\infty}$")
pyplot.loglog(dxs, wave_errors[1, 0]*(dxs/dxs[0])**4, 'k-', label=r"$p=4$")
pyplot.xlabel(r"$\Delta x$")
pyplot.ylabel("Error norm")
pyplot.legend(loc="lower right")
pyplot.show()

This fourth order convergence is an artefact of the initial data and boundary conditions, which are perfectly symmetric. If we change the initial data to make it asymmetric, we'll get something much closer to second order:
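The convergence order can also be measured directly from the error ratios between successive resolutions, rather than read off a plot. A small helper (our own convention, checked here on manufactured second-order data):

```python
import numpy

def measured_orders(dxs, errors):
    """
    Estimate the convergence order between successive resolutions:
    p_i = log(E_i / E_{i+1}) / log(dx_i / dx_{i+1}).
    """
    dxs = numpy.asarray(dxs, dtype=float)
    errors = numpy.asarray(errors, dtype=float)
    return numpy.log(errors[:-1] / errors[1:]) / numpy.log(dxs[:-1] / dxs[1:])

# Sanity check on manufactured second-order data, E = C * dx**2:
dx_check = 0.1 / 2**numpy.arange(4)
orders = measured_orders(dx_check, 3.0 * dx_check**2)
# orders should be approximately [2, 2, 2]
```

Applied to the `dxs` and `wave_errors` arrays computed above, each row should approach the expected order as the resolution increases.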


In [ ]:
def initial_data_asymmetric(x):
    """
    Set the initial data. x are the coordinates. U (phi, phi_t, phi_x) are the variables.
    """
    
    U = numpy.zeros((3, len(x)))
    U[0, :] = numpy.sin(numpy.pi*x)*(1-x)**2*(1+x)**3
    U[2, :] = numpy.pi*numpy.cos(numpy.pi*x)*(1-x)**2*(1+x)**3 + numpy.sin(numpy.pi*x)*(2.0*(1-x)*(1+x)**3 + 3.0*(1-x)**2*(1+x)**2)
    
    return U

In [ ]:
Npoints = 50
dx, x = grid(Npoints)
dt = dx / 4
U0 = initial_data_asymmetric(x)
U = initial_data_asymmetric(x)
Nsteps = int(2.0 / dt)
Uframes = numpy.zeros((Nsteps+1,3,Npoints+2))
Uframes[0,:,:] = U0
for n in range(Nsteps):
    U = RK2_step(U, RHS, apply_boundaries, dt, dx)
    Uframes[n+1,:,:] = U
    
fig = pyplot.figure(figsize=(8,5))
ax = pyplot.axes(xlim=(-1,1),ylim=(-3, 3))
line, = ax.plot([], [])
pyplot.close()

def init():
    ax.set_xlabel(r"$x$")
    ax.set_ylabel(r"$\phi$")
    return (line,)

def update(i):
    line.set_data(x, Uframes[i, 0, :])
    return (line,)

animation.FuncAnimation(fig, update, init_func=init, frames=Nsteps, interval=100, blit=True)

In [ ]:
Npoints_all = 50 * 2**(numpy.arange(0, 6))

dxs = numpy.zeros((len(Npoints_all,)))
wave_errors = numpy.zeros((3, len(Npoints_all)))

for i, Npoints in enumerate(Npoints_all):
    dx, x = grid(Npoints)
    dt = dx / 4
    U0 = initial_data_asymmetric(x)
    U = initial_data_asymmetric(x)
    Nsteps = int(2.0 / dt)
    for n in range(Nsteps):
        U = RK2_step(U, RHS, apply_boundaries, dt, dx)
    
    dxs[i] = dx
    wave_errors[:, i] = error_norms(U[0, :], U0[0, :])

In [ ]:
pyplot.figure()
pyplot.loglog(dxs, wave_errors[0, :], 'bx', label=r"${\cal E}_1$")
pyplot.loglog(dxs, wave_errors[1, :], 'go', label=r"${\cal E}_2$")
pyplot.loglog(dxs, wave_errors[2, :], 'r+', label=r"${\cal E}_{\infty}$")
pyplot.loglog(dxs, wave_errors[1, 0]*(dxs/dxs[0])**2, 'k-', label=r"$p=2$")
pyplot.xlabel(r"$\Delta x$")
pyplot.ylabel("Error norm")
pyplot.legend(loc="lower right")
pyplot.show()

Courant limits

We restricted the timestep to $\Delta t = \sigma \Delta x$, where the Courant number $\sigma$ was set to $1/4$. Since the number of timesteps we take is inversely proportional to the Courant number, we want to make $\sigma$ as large as possible.

Let's try the evolution with Courant number $\sigma=1$:


In [ ]:
Npoints = 50
dx, x = grid(Npoints)
dt = dx
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0/dt)
for n in range(Nsteps):
    U = RK2_step(U, RHS, apply_boundaries, dt, dx)
    
pyplot.figure()
pyplot.plot(x, U0[0, :], 'b--', label="Initial data")
pyplot.plot(x, U[0, :], 'k-', label=r"$t=2$")
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\phi$")
pyplot.xlim(-1, 1)
pyplot.legend()
pyplot.show()

The result doesn't look too bad, but the numerical approximation is actually larger in amplitude than the correct solution. What happens as we increase resolution?


In [ ]:
Npoints = 200
dx, x = grid(Npoints)
dt = dx
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0/dt)
for n in range(Nsteps):
    U = RK2_step(U, RHS, apply_boundaries, dt, dx)
    
pyplot.figure()
pyplot.plot(x, U0[0, :], 'b--', label="Initial data")
pyplot.plot(x, U[0, :], 'k-', label=r"$t=2$")
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\phi$")
pyplot.xlim(-1, 1)
pyplot.legend()
pyplot.show()

The bulk of the solution looks good, but there are small oscillations at the edges of the pulses. Increasing the resolution a bit further:


In [ ]:
Npoints = 400
dx, x = grid(Npoints)
dt = dx
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0/dt)
for n in range(Nsteps):
    U = RK2_step(U, RHS, apply_boundaries, dt, dx)
    
pyplot.figure()
pyplot.plot(x, U0[0, :], 'b--', label="Initial data")
pyplot.plot(x, U[0, :], 'k-', label=r"$t=2$")
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$\phi$")
pyplot.xlim(-1, 1)
pyplot.legend()
pyplot.show()

The result has blown up. We won't be seeing any convergence in this case.

For hyperbolic PDEs there is a Courant limit: a maximum timestep consistent with stability. It depends on both the physics and the numerical method chosen. A typical bound is

$$ \sigma < \frac{1}{\sqrt{D} \, \lambda_{\text{max}}}, $$

where $D$ is the number of spatial dimensions and $\lambda_{\text{max}}$ is the maximum speed of information propagation (i.e., here the speed of light).
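In practice it is convenient to wrap this bound in a small helper that returns a safely reduced timestep. The function name and `safety` factor below are our own convention:

```python
import numpy

def stable_timestep(dx, D=1, lam_max=1.0, safety=0.9):
    """
    Timestep respecting the Courant-style bound
        dt < dx / (sqrt(D) * lam_max),
    backed off from the marginal value by a safety factor.
    """
    return safety * dx / (numpy.sqrt(D) * lam_max)

# In 1d with unit wave speed the marginal bound is dt < dx:
dt = stable_timestep(0.04)
```

Note that the bound above is only a rule of thumb; the precise stability limit depends on the specific spatial discretization and time integrator.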

Animations


In [ ]:
Npoints = 50
dx, x = grid(Npoints)
dt = dx
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0 / dt)
Uframes = numpy.zeros((Nsteps+1,3,Npoints+2))
Uframes[0,:,:] = U0
for n in range(Nsteps):
    U = RK2_step(U, RHS, apply_boundaries, dt, dx)
    Uframes[n+1,:,:] = U
    
fig = pyplot.figure(figsize=(8,5))
ax = pyplot.axes(xlim=(-1,1),ylim=(-0.2, 1.2))
line, = ax.plot([], [])
pyplot.close()

def init():
    ax.set_xlabel(r"$x$")
    ax.set_ylabel(r"$\phi$")
    return (line,)

def update(i):
    line.set_data(x, Uframes[i, 0, :])
    return (line,)

animation.FuncAnimation(fig, update, init_func=init, frames=Nsteps, interval=100, blit=True)

In [ ]:
Npoints = 200
dx, x = grid(Npoints)
dt = dx
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0 / dt)
Uframes = numpy.zeros((Nsteps+1,3,Npoints+2))
Uframes[0,:,:] = U0
for n in range(Nsteps):
    U = RK2_step(U, RHS, apply_boundaries, dt, dx)
    Uframes[n+1,:,:] = U
    
fig = pyplot.figure(figsize=(8,5))
ax = pyplot.axes(xlim=(-1,1),ylim=(-0.2, 1.2))
line, = ax.plot([], [])
pyplot.close()

def init():
    ax.set_xlabel(r"$x$")
    ax.set_ylabel(r"$\phi$")
    return (line,)

def update(i):
    line.set_data(x, Uframes[i, 0, :])
    return (line,)

animation.FuncAnimation(fig, update, init_func=init, frames=Nsteps, interval=100, blit=True)

In [ ]:
Npoints = 400
dx, x = grid(Npoints)
dt = dx
U0 = initial_data(x)
U = initial_data(x)
Nsteps = int(2.0 / dt)
Uframes = numpy.zeros((Nsteps+1,3,Npoints+2))
Uframes[0,:,:] = U0
for n in range(Nsteps):
    U = RK2_step(U, RHS, apply_boundaries, dt, dx)
    Uframes[n+1,:,:] = U
    
fig = pyplot.figure(figsize=(8,5))
ax = pyplot.axes(xlim=(-1,1),ylim=(-0.2, 1.2))
line, = ax.plot([], [])
pyplot.close()

def init():
    ax.set_xlabel(r"$x$")
    ax.set_ylabel(r"$\phi$")
    return (line,)

def update(i):
    line.set_data(x, Uframes[i, 0, :])
    return (line,)

animation.FuncAnimation(fig, update, init_func=init, frames=Nsteps, interval=100, blit=True)

Extension exercises

Second order in space

Implement a MoL solution to the wave equation using the second order in space form

$$ \frac{\partial}{\partial t} \begin{pmatrix} \phi \\ \phi_t \end{pmatrix} = \begin{pmatrix} \phi_t \\ 0 \end{pmatrix} + \frac{\partial^2}{\partial x^2} \begin{pmatrix} 0 \\ \phi \end{pmatrix}. $$

This is more closely related to the BSSNOK formulation. Check convergence on the cases above. Compare the accuracy and efficiency of the approaches.
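As a starting point for this exercise (a sketch, not a complete solution), the right-hand side for the second-order-in-space form with state ${\bf U} = (\phi, \phi_t)$ might look like:

```python
import numpy

def RHS_second_order(U, dx):
    """
    Sketch of the RHS for the second-order-in-space form, U = [phi, phi_t],
    using the standard second-order central approximation of d^2 phi / dx^2
    on interior points. Boundary handling is left to the reader.
    """
    phi = U[0, :]
    phi_t = U[1, :]
    dUdt = numpy.zeros_like(U)
    dUdt[0, :] = phi_t
    dUdt[1, 1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    return dUdt
```

This drops straight into the same `RK2_step` evolution loop, with a two-component state array instead of three.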

Fourth order differencing

Implement fourth order spatial differencing. You will need to change the boundary conditions - think how this should be done. Compare the results, and check the convergence rate. Should you expect fourth order convergence?
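The interior stencil for the fourth-order first derivative could be sketched as follows (boundary treatment deliberately left open, as the exercise asks):

```python
import numpy

def fourth_order_dx(f, dx):
    """
    Fourth-order central first derivative on interior points:
    df/dx ~ (f[i-2] - 8 f[i-1] + 8 f[i+1] - f[i+2]) / (12 dx).
    The two points at each end are left untouched; filling them is
    part of the exercise.
    """
    dfdx = numpy.zeros_like(f)
    dfdx[2:-2] = (f[:-4] - 8.0 * f[1:-3] + 8.0 * f[3:-1] - f[4:]) / (12.0 * dx)
    return dfdx
```

The stencil is exact for polynomials up to degree four, which is a convenient way to test it.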

Third order in time

If you find that the method for time integration is a limitation, try implementing the third order Runge-Kutta method

\begin{align} {\bf U}^{(p_1)} &= {\bf U}^{(n)} + \Delta t \, {\bf F} \left( {\bf U}^{(n)}, t^n \right), \\ {\bf U}^{(p_2)} &= \frac{1}{4} \left( 3 {\bf U}^{(n)} + {\bf U}^{(p_1)} + \Delta t \, {\bf F} \left( {\bf U}^{(p_1)}, t^{n+1} \right) \right), \\ {\bf U}^{(n+1)} &= \frac{1}{3} \left( {\bf U}^{(n)} + 2 {\bf U}^{(p_2)} + 2 \Delta t \, {\bf F} \left( {\bf U}^{(p_2)}, t^{n+1} \right) \right). \end{align}

Compare with both second and fourth order central spatial differencing.