In [1]:
from IPython.core.display import HTML
def css_styling():
    with open("./styles/custom.css", "r") as f:
        styles = f.read()
    return HTML(styles)
css_styling()


Out[1]:

Approximate solutions to the Riemann Problem

Solutions in practice

Solutions to the Riemann problem are mainly used in two contexts:

  1. As reference solutions against which a numerical method is benchmarked, or
  2. As part of a numerical method, such as a high resolution shock capturing method, where the flux between two numerical cells is required.

In the first case, accuracy is paramount and the complete solution (all wave speeds, and all intermediate states) is required. In the second case only one thing is required: the flux ${\bf f}^*$ between the cells, which is the flux on the characteristic line $\xi = x / t = 0$.

In this second case, the numerical method will have to repeatedly solve the Riemann problem. In a general problem, the solution may be needed tens of times per cell, per timestep, leading to millions (or more!) solutions in a simulation. The speed of the solution is then extremely important, and approximate solutions are often used.

Roe-type solutions

The most obvious simplification is to reduce the nonlinear problem

\begin{equation} \partial_t {\bf q} + \partial_x {\bf f}({\bf q}) = {\bf 0} \end{equation}

to the linear problem

\begin{equation} \partial_t {\bf q} + A \partial_x {\bf q} = {\bf 0}, \end{equation}

where $A$ is a constant matrix that approximates the Jacobian $\partial {\bf f} / \partial {\bf q}$. We can then solve the linear problem exactly (e.g. by diagonalising the matrix and solving the resulting uncoupled advection equations), to find

\begin{align} {\bf q}(x, t) & = {\bf q}_l + \sum_{p: \lambda^{(p)} < \tfrac{x}{t}} \left\{ {\bf l}^{(p)} \cdot \left( {\bf q}_r - {\bf q}_l \right) \right\} {\bf r}^{(p)}, \\ & = {\bf q}_r - \sum_{p: \lambda^{(p)} > \tfrac{x}{t}} \left\{ {\bf l}^{(p)} \cdot \left( {\bf q}_r - {\bf q}_l \right) \right\} {\bf r}^{(p)}, \\ & = \frac{1}{2} \left( {\bf q}_l + {\bf q}_r \right) + \frac{1}{2} \sum_{p: \lambda^{(p)} < \tfrac{x}{t}} \left\{ {\bf l}^{(p)} \cdot \left( {\bf q}_r - {\bf q}_l \right) \right\} {\bf r}^{(p)} - \frac{1}{2} \sum_{p: \lambda^{(p)} > \tfrac{x}{t}} \left\{ {\bf l}^{(p)} \cdot \left( {\bf q}_r - {\bf q}_l \right) \right\} {\bf r}^{(p)}. \end{align}

where $\lambda^{(p)}, {\bf r}^{(p)},$ and ${\bf l}^{(p)}$ are the eigenvalues and the (right and left respectively) eigenvectors of $A$, ordered such that $\lambda^{(1)} \le \dots \le \lambda^{(N)}$ as usual. All three solutions are equivalent; the last is typically used.

Given this complete solution, it is easily evaluated along $x = 0$, and the flux calculated from the result.
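As a concrete sketch, the linear solution can be evaluated at any $\xi = x/t$ by diagonalising $A$ (here in Python with NumPy; the function name and the linear acoustics example are illustrative choices, not from the text):

```python
import numpy as np

def linear_riemann_state(A, q_l, q_r, xi=0.0):
    """Exact solution of the linear Riemann problem, evaluated at x/t = xi.

    Assumes A is diagonalisable with real eigenvalues. The left
    eigenvectors are taken as the rows of the inverse of the
    right-eigenvector matrix, so l^(p) . r^(q) = delta_{pq}.
    """
    lam, R = np.linalg.eig(A)   # eigenvalues and right eigenvectors (columns)
    L = np.linalg.inv(R)        # rows are the left eigenvectors
    alpha = L @ (q_r - q_l)     # wave strengths l^(p) . (q_r - q_l)
    # start from q_l and add every wave travelling slower than xi
    q = q_l.astype(float)
    for p in range(len(lam)):
        if lam[p] < xi:
            q = q + alpha[p] * R[:, p]
    return q

# Linear acoustics: A = [[0, K], [1/rho, 0]], sound speed c = sqrt(K/rho) = 2
A = np.array([[0.0, 4.0], [1.0, 0.0]])
q_l = np.array([1.0, 0.0])
q_r = np.array([0.0, 0.0])
q_star = linear_riemann_state(A, q_l, q_r)   # state on x/t = 0
```

The wave strengths $\alpha^{(p)} = {\bf l}^{(p)} \cdot ({\bf q}_r - {\bf q}_l)$ are insensitive to how NumPy normalises the eigenvectors, since any rescaling of ${\bf r}^{(p)}$ is cancelled by the corresponding rescaling of ${\bf l}^{(p)}$.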

An even greater shortcut follows from noting that we are approximating ${\bf f} = A {\bf q}$, so the flux can be written directly in the standard form

\begin{equation} {\bf f}^* = \frac{1}{2} \left( {\bf f}_l + {\bf f}_r \right) - \frac{1}{2} \sum_{p} \left| \lambda^{(p)} \right| \left\{ {\bf l}^{(p)} \cdot \left( {\bf q}_r - {\bf q}_l \right) \right\} {\bf r}^{(p)}, \end{equation}

where now we are summing over all eigenvalues and eigenvectors. It should be noted that ${\bf f}^* \ne {\bf f}({\bf q}^*)$ in general, as the calculation of ${\bf f}^*$ relied on an approximation to the flux.
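A minimal sketch of this flux formula, using the standard Roe form ${\bf f}^* = \tfrac{1}{2}({\bf f}_l + {\bf f}_r) - \tfrac{1}{2}\sum_p |\lambda^{(p)}| \, \alpha^{(p)} {\bf r}^{(p)}$ (the function name and test matrix are illustrative assumptions):

```python
import numpy as np

def roe_type_flux(A, q_l, q_r, f_l, f_r):
    """Roe-type intercell flux for the linearised problem q_t + A q_x = 0.

    Computes f* = (f_l + f_r)/2
                  - (1/2) sum_p |lambda^(p)| (l^(p).(q_r - q_l)) r^(p),
    assuming A is diagonalisable with real eigenvalues.
    """
    lam, R = np.linalg.eig(A)
    L = np.linalg.inv(R)
    alpha = L @ (q_r - q_l)                  # wave strengths
    dissipation = R @ (np.abs(lam) * alpha)  # sum_p |lam_p| alpha_p r_p
    return 0.5 * (f_l + f_r) - 0.5 * dissipation
```

The second term acts as upwind dissipation: each wave contributes in proportion to the magnitude of its speed, regardless of its direction.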

In order to complete this specification of the solver, we only need to say how $A$ is defined. Roe gave the suggestion that

\begin{equation} A = A({\bf q}_{\textrm{Roe}}) = \left. \frac{\partial {\bf f}}{\partial {\bf q}} \right|_{{\bf q}_{\textrm{Roe}}}, \end{equation}

where the Roe average ${\bf q}_{\textrm{Roe}}$ satisfies

  1. $A({\bf q}_{\textrm{Roe}}) \left( {\bf q}_r - {\bf q}_l \right) = {\bf f}_r - {\bf f}_l$,
  2. $A({\bf q}_{\textrm{Roe}})$ is diagonalizable with real eigenvalues, and
  3. $A({\bf q}_{\textrm{Roe}}) \to \partial {\bf f} / \partial {\bf q}$ smoothly as ${\bf q}_{\textrm{Roe}} \to {\bf q}$.

It is possible to construct the Roe average for many systems (such as the Euler equations and the relativistic Euler equations). However, a simple arithmetic average is often nearly as good, in the sense that the algorithm fails only slightly more often than it does with the full Roe average!
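For a scalar example, the Roe average can be computed in closed form and property 1 checked directly. For Burgers' equation, $f(q) = q^2/2$, the Jacobian is $\partial f / \partial q = q$, and the Roe condition $q_{\textrm{Roe}} (q_r - q_l) = \tfrac{1}{2}(q_r^2 - q_l^2)$ gives exactly the arithmetic mean (Burgers' equation is used here purely as an illustration):

```python
def burgers_roe_average(q_l, q_r):
    """Roe average for Burgers' equation f(q) = q**2 / 2.

    Here the Jacobian is just f'(q) = q, and the Roe condition
    A(q_Roe) (q_r - q_l) = f_r - f_l is satisfied exactly by the
    arithmetic mean q_Roe = (q_l + q_r) / 2.
    """
    return 0.5 * (q_l + q_r)

f = lambda q: 0.5 * q**2
q_l, q_r = 2.0, -1.0
A = burgers_roe_average(q_l, q_r)
# property 1: the average Jacobian reproduces the exact flux difference
assert abs(A * (q_r - q_l) - (f(q_r) - f(q_l))) < 1e-12
```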

The problem with Roe-type solvers is that they approximate all waves as discontinuities. This leads to inaccuracies near rarefactions, which can be catastrophically bad when the rarefaction fan crosses $\xi = 0$ (a sonic rarefaction). It is possible to detect when these problems will occur (e.g. by checking whether $\lambda^{(p)}$ changes sign between the left and right states) and to modify the approximation there; this is often known as an entropy fix. More systematic and complex methods that extend the Roe approach whilst avoiding this problem include the Marquina solver.
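One common variant of such an entropy fix, due to Harten, can be sketched as follows (a minimal illustration; the function name and the particular smoothing are assumptions, not taken from the text):

```python
def harten_entropy_fix(lam, lam_l, lam_r):
    """Harten-type entropy fix (one common variant).

    lam   : characteristic speed at the averaged state,
    lam_l : the same characteristic speed evaluated at the left state,
    lam_r : ... and at the right state.

    If the speed changes sign across the interface (lam_l < 0 < lam_r,
    a possible sonic rarefaction), |lam| in the Roe dissipation term is
    replaced by a smoothed, strictly positive value.
    """
    delta = max(0.0, lam - lam_l, lam_r - lam)  # width of the fan
    if abs(lam) >= delta:
        return abs(lam)           # no fix needed away from lam = 0
    # quadratic smoothing of |lam| inside the fan
    return (lam * lam + delta * delta) / (2.0 * delta)
```

Away from a sonic point the fix returns $|\lambda^{(p)}|$ unchanged, so the solver behaves exactly as before; only transonic rarefactions receive extra dissipation.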

HLL-type solutions

An alternative class of methods simplifies the wave structure even further, by reducing the number of waves. HLL (for Harten, Lax and van Leer) type solutions assume that

  1. there are two waves, both discontinuities, separating a constant central state in the solution, and
  2. the waves propagate at the (known) speeds $\xi_{(\pm)}$.

From these assumptions, and the Rankine-Hugoniot conditions, we have the two equations

\begin{align} \xi_{(-)} \left[ {\bf q}_m - {\bf q}_l \right] & = {\bf f}_m - {\bf f}_l, \\ \xi_{(+)} \left[ {\bf q}_r - {\bf q}_m \right] & = {\bf f}_r - {\bf f}_m. \end{align}

These are immediately solved to give

\begin{align} {\bf q}_m & = \frac{\xi_{(+)} {\bf q}_r - \xi_{(-)} {\bf q}_l - {\bf f}_r + {\bf f}_l}{\xi_{(+)} - \xi_{(-)}}, \\ {\bf f}_m & = \frac{\hat{\xi}_{(+)} {\bf f}_l - \hat{\xi}_{(-)} {\bf f}_r + \hat{\xi}_{(+)} \hat{\xi}_{(-)} \left( {\bf q}_r - {\bf q}_l \right)}{\hat{\xi}_{(+)} - \hat{\xi}_{(-)}}, \end{align}

where

\begin{equation} \hat{\xi}_{(-)} = \min(0, \xi_{(-)}), \qquad \hat{\xi}_{(+)} = \max(0, \xi_{(+)}). \end{equation}

Again it should be noted that, in general, ${\bf f}_m \ne {\bf f}({\bf q}_m)$.
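The middle-flux formula translates directly into code (a minimal sketch; the function name is an assumption, and the degenerate case where both clipped speeds vanish is handled by falling back to the left flux, since then no wave crosses the interface):

```python
def hll_flux(q_l, q_r, f_l, f_r, xi_m, xi_p):
    """HLL intercell flux, given estimates xi_m <= xi_p of the slowest
    and fastest wave speeds. Clipping the speeds against zero selects
    the correct flux on the interface x/t = 0 automatically.
    """
    xm = min(0.0, xi_m)   # hat{xi}_(-)
    xp = max(0.0, xi_p)   # hat{xi}_(+)
    if xp == xm:          # both clipped speeds are zero: nothing crosses
        return f_l
    return (xp * f_l - xm * f_r + xp * xm * (q_r - q_l)) / (xp - xm)
```

For instance, for linear advection $q_t + a q_x = 0$ with $a > 0$ and $\xi_{(\pm)} = a$, the clipping gives $\hat{\xi}_{(-)} = 0$, $\hat{\xi}_{(+)} = a$, and the formula reduces to the upwind flux ${\bf f}^* = {\bf f}_l$.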

We still need some way to compute the wave speeds $\xi_{(\pm)}$. The simplest choice is to make them as large as is compatible with stability. This means (via the CFL condition) setting

\begin{equation} -\xi_{(-)} = \xi_{(+)} = \frac{\Delta x}{\Delta t} \end{equation}

which implies (since the wave speeds have opposite signs, the central state is guaranteed to contain the origin) that

\begin{equation} {\bf f}^* = \frac{1}{2} \left( {\bf f}_l + {\bf f}_r + \frac{\Delta x}{\Delta t} \left[ {\bf q}_l - {\bf q}_r \right] \right). \end{equation}

This is the Lax-Friedrichs flux, as used in HyperPython. We can also easily see how the local Lax-Friedrichs method, used in lesson 3 of HyperPython, comes about: simply choose

\begin{equation} -\xi_{(-)} = \xi_{(+)} = \alpha = \max \left( \left| \lambda \left( {\bf q}_l \right) \right|, \left| \lambda \left( {\bf q}_r \right) \right| \right) \end{equation}

to get

\begin{equation} {\bf f}^* = \frac{1}{2} \left( {\bf f}_l + {\bf f}_r + \alpha \left[ {\bf q}_l - {\bf q}_r \right] \right). \end{equation}
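For a scalar law this flux is a one-liner. As a sketch, for Burgers' equation $f(q) = q^2/2$, where the characteristic speed is $\lambda(q) = q$ (the function name is an assumption, and Burgers' equation is used purely as an illustration):

```python
def llf_flux_burgers(q_l, q_r):
    """Local Lax-Friedrichs (Rusanov) flux for Burgers' equation,
    f(q) = q**2 / 2, with characteristic speed lambda(q) = q.

    alpha is the largest local wave speed, so the dissipation
    alpha * (q_l - q_r) is just enough to stabilise the scheme
    without the excess smearing of the global Lax-Friedrichs flux.
    """
    f = lambda q: 0.5 * q * q
    alpha = max(abs(q_l), abs(q_r))
    return 0.5 * (f(q_l) + f(q_r) + alpha * (q_l - q_r))
```

When $q_l = q_r = q$ the dissipation term vanishes and the flux reduces to the exact flux $f(q)$, as any consistent numerical flux must.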

HLL type methods are straightforward to use but typically do not capture linear waves, such as the contact wave in the Euler equations, well. Extending the HLL method by including more waves is possible (see the HLLC method in Toro's book as an example), but rapidly increases the complexity of the solver.