In Part II of this book we present a number of approximate Riemann solvers. We have already seen that for many important hyperbolic systems it is possible to work out the exact Riemann solution for arbitrary left and right states. However, for complicated nonlinear systems, such as the Euler equations (see Euler.ipynb), this exact solution can only be determined by solving a nonlinear system of algebraic equations for the intermediate states and the waves that connect them. This can be done to arbitrary precision, but only at some computational expense. The cost of exactly solving a single Riemann problem may seem insignificant, but it can become prohibitively expensive when the Riemann solver is used as a building block in a finite volume method. In this case a Riemann problem must be solved at every cell edge at every time step.
For example, if we consider a very coarse grid in one space dimension with only 100 cells and take 100 time steps, then 10,000 Riemann problems must be solved. In solving practical problems in two or three space dimensions it is not unusual to require the solution of billions or trillions of Riemann problems. In this context it can be very important to develop efficient approximate Riemann solvers that quickly produce a sufficiently good approximation to the true Riemann solution.
The following points have helped to guide the development of approximate Riemann solvers:
If the solution is smooth over much of the domain, then the jump in states between neighboring cells will be very small (on the order of $\Delta x$, the cell size) for most of the Riemann problems encountered in the numerical solution. Even if the hyperbolic system being studied is nonlinear, for such data the equations can be approximated by a linearization, and we have seen that linear Riemann problems can be solved more easily than nonlinear ones. Rather than solving a nonlinear system of equations by some iterative method, one need only solve a linear system (provided the eigenvalues and eigenvectors of the Jacobian matrix are known analytically, as they often are for practical problems). In many cases the solution of this linear system can also be worked out analytically and is easy to implement, so no numerical linear algebra is required.
Despite this smoothness over much of the domain, interesting problems often contain isolated discontinuities, such as shock waves, that are important to model accurately. So some Riemann problems arising in a finite volume method may have large jumps between the left and right states. Hence a robust approximate Riemann solver must also handle these cases without introducing too much error.
But even in the case of large jumps in the data, it may not be necessary or worthwhile to solve the Riemann problem exactly. The information produced by the Riemann solver goes into a numerical method that updates the approximate solution in each grid cell, and the exact structure of the Riemann solution is lost in the process.
Each chapter in this part of the book illustrates some common approximate Riemann solvers in the context of one of the nonlinear systems studied in Part I. We focus on two popular approaches to devising approximate Riemann solvers (though these are certainly not the only approaches): linearized solvers and two-wave solvers.
We consider a single Riemann problem with left state $q_l$ and right state $q_r$. Then the Riemann solution gives a resolution of the jump $\Delta q = (q_r - q_l)$ into a set of propagating waves. In both of the approaches described below, the approximate Riemann solution consists entirely of traveling discontinuities. Following (LeVeque, 2002), we refer to these traveling discontinuities as waves, and denote them by ${\cal W}^p \in {\mathbb R}^m$, where the index $p$ denotes the characteristic family and typically ranges from $1$ to $m$ for a system of $m$ equations, although in an approximate solver the number of waves may be smaller. For each wave, the approximate solver must also give a wave speed $s^p \in{\mathbb R}$.
The net effect of each wave from the Riemann solution is given by the product of the wave and its speed. In Godunov's method, the simplest finite volume method based on Riemann solvers, the rate of change of the cell average to the left of a given interface is given by
$${\cal A}^-\Delta Q = \sum_{p=1}^m (s^p)^- {\cal W}^p$$
and that to the right is given by
$${\cal A}^+\Delta Q = \sum_{p=1}^m (s^p)^+ {\cal W}^p$$
where $(x)^- = \min(x,0)$ and $(x)^+ = \max(x,0)$. We refer to ${\cal A}^-\Delta Q, ~{\cal A}^+\Delta Q \in {\mathbb R}^m$ as fluctuations.
The purpose of an approximate Riemann solver, in this context, is to determine the waves ${\cal W}^p$ and speeds $s^p$; these can then be used to update the numerical solution via the fluctuations.
Note that ${\cal A}^-\Delta Q$, for example, should be read as a single symbol; it denotes the $m$-vector defined above. The notation is motivated by the case of a constant-coefficient linear hyperbolic system of equations with flux function $f(q) = Aq$, for which $A = R\Lambda R^{-1}$ with real eigenvalues $\lambda_p$. In this case one can define a splitting $A = A^- + A^+$, in which $A^+ = R\Lambda^+ R^{-1}$ and $A^- = R\Lambda^- R^{-1}$, where the diagonal matrix $\Lambda^-$ has diagonal elements $\min(\lambda_p, 0)$ while $\Lambda^+$ has diagonal elements $\max(\lambda_p, 0)$. The wave speeds are then $s^p = \lambda_p$ and the fluctuations reduce to $${\cal A}^-\Delta Q = A^-(q_r - q_l), \qquad\text{and}\qquad {\cal A}^+\Delta Q = A^+(q_r - q_l).$$
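To make the linear case concrete, here is a minimal NumPy sketch of this matrix splitting. The function name `linear_fluctuations` and the acoustics-like example matrix are ours, chosen purely for illustration:

```python
import numpy as np

def linear_fluctuations(A, q_l, q_r):
    """Fluctuations A^- dq and A^+ dq for the linear flux f(q) = A q."""
    lam, R = np.linalg.eig(A)                   # A = R Lambda R^{-1}
    Rinv = np.linalg.inv(R)
    A_minus = R @ np.diag(np.minimum(lam, 0.)) @ Rinv
    A_plus  = R @ np.diag(np.maximum(lam, 0.)) @ Rinv
    dq = q_r - q_l
    return A_minus @ dq, A_plus @ dq

# Acoustics-like system with unit sound speed: eigenvalues are -1 and +1.
A = np.array([[0., 1.],
              [1., 0.]])
amdq, apdq = linear_fluctuations(A, np.array([1., 0.]), np.array([0., 0.]))
```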
A common, but different, viewpoint is that the Riemann solver must produce an approximation of the flux at $x=0$, and Godunov's method is defined by differencing the fluxes at the left and right edges of each grid cell in each time step. For a conservation law $q_t + f(q)_x = 0$, we can define the interface flux based on the fluctuations via $$F = f(q_l) + {\cal A}^-\Delta Q \qquad\text{or} \qquad F = f(q_r) - {\cal A}^+\Delta Q.$$ These two expressions are equal provided that \begin{align} \label{adqdf} {\cal A}^-\Delta Q + {\cal A}^+\Delta Q = f(q_r) - f(q_l), \end{align} which is a natural requirement to impose on the fluctuations and is satisfied, for example, in the case of the linear problem described above.
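Continuing the sketch above, one can check numerically that the two flux expressions agree for the linear problem, since there the fluctuations sum to $A(q_r - q_l) = f(q_r) - f(q_l)$:

```python
q_l = np.array([1., 0.])
q_r = np.array([0., 0.])
amdq, apdq = linear_fluctuations(A, q_l, q_r)
F_left  = A @ q_l + amdq      # F = f(q_l) + A^- dq
F_right = A @ q_r - apdq      # F = f(q_r) - A^+ dq
assert np.allclose(F_left, F_right)
```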
An advantage of working with fluctuations, waves, and speeds rather than interface fluxes is that these quantities often make sense also for non-conservative hyperbolic systems, such as the variable coefficient linear problem $q_t + A(x)q_x = 0$. A Riemann problem is defined by prescribing matrices $A_l$ and $A_r$ along with the initial data $q_l$ and $q_r$, for example by using the material properties in the grid cells to the left and right of the interface for acoustics through a heterogeneous material. The waves are then naturally defined using the eigenvectors of $A_l$ corresponding to negative eigenvalues for the left-going waves, and using eigenvectors of $A_r$ corresponding to positive eigenvalues for the right-going waves. See (LeVeque, 2002) for more details.
Consider a nonlinear system $q_t + f(q)_x = 0$. If $q_l$ and $q_r$ are close to each other, as is often the case over smooth regions of a more general solution, then the nonlinear system can be approximated by a linear problem of the form $q_t + \hat A q_x = 0$. The coefficient matrix $\hat A$ should be some approximation to $f'(q_l) \approx f'(q_r)$ when $\|q_l-q_r\|$ is small. The idea of a general linearized Riemann solver is to define a matrix $\hat A(q_l, q_r)$ that has this property but also makes sense as an approximation when $\|q_l-q_r\|$ is not small. For many nonlinear systems there is a Roe linearization, a particular choice of $\hat A(q_l, q_r)$ that works very well, based on ideas introduced originally by Philip Roe (Roe, 1981). For systems such as the shallow water equations or the Euler equations, there are closed-form expressions for the eigenvalues and eigenvectors of $\hat A$ and the solution of the linearized Riemann problem, leading to efficient solvers.
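The structure common to all linearized solvers can be sketched in a few lines. The averaging that produces $\hat A$ (a Roe average, say) is system-specific and is left to the caller here, so this is an outline rather than a complete solver, and the function name is ours:

```python
import numpy as np

def linearized_solver(A_hat, q_l, q_r):
    """Waves, speeds, and fluctuations for q_t + A_hat q_x = 0."""
    s, R = np.linalg.eig(A_hat)            # speeds s^p are the eigenvalues
    alpha = np.linalg.solve(R, q_r - q_l)  # coefficients of dq in eigenbasis
    waves = R * alpha                      # column p is W^p = alpha^p r^p
    amdq = waves @ np.minimum(s, 0.)       # sum over p of (s^p)^- W^p
    apdq = waves @ np.maximum(s, 0.)       # sum over p of (s^p)^+ W^p
    return waves, s, amdq, apdq
```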
Since the Riemann solution affects the overall numerical solution only through how it modifies the two neighboring cell averages, it seems reasonable to consider approximations in which only a single wave propagates in each direction. The solution then has a single intermediate state $q_m$ such that ${\cal W}^1 = q_m - q_l$ and ${\cal W}^2 = q_r-q_m$. There are $m+2$ values to be determined: the middle state $q_m$ and the speeds $s^1, s^2$. In order for the method to be conservative, it must satisfy (\ref{adqdf}), and hence the $m$ conditions
\begin{align}
f(q_r) - f(q_l) = s^1 {\cal W}^1 + s^2 {\cal W}^2.
\end{align}
This can be solved for the middle state to find
\begin{align} \label{AS:middle_state}
q_m = \frac{f(q_r) - f(q_l) - s^2 q_r + s^1 q_l}{s^1 - s^2}.
\end{align}
It remains only to specify the wave speeds, and it is in this specification that the various two-wave solvers differ. In the following sections we briefly discuss the choice of wave speed for a scalar problem; the choice for systems will be elaborated in subsequent chapters.
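Since formula (\ref{AS:middle_state}) is shared by all two-wave solvers, a generic implementation can take the two speeds as inputs. The following sketch (with illustrative names, assuming $s^1 \ne s^2$) is reused in the examples below:

```python
def two_wave_solve(f, q_l, q_r, s1, s2):
    """Middle state and waves for a two-wave solver with speeds s1 < s2."""
    q_m = (f(q_r) - f(q_l) - s2*q_r + s1*q_l) / (s1 - s2)
    wave1 = q_m - q_l                      # left-going wave, speed s1
    wave2 = q_r - q_m                      # right-going wave, speed s2
    return q_m, wave1, wave2
```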
In addition to the names and references provided below, let us mention that this class of solvers is an ingredient in the so-called central schemes. Because two-wave solvers are so simple, the resulting central schemes are often referred to as "Riemann-solver-free".
The simplest such solver is the Lax-Friedrichs method, in which the two waves are assumed to travel at equal speed in opposite directions: $$-s^1 = s^2 = a,$$ where $a\ge 0$. Then (\ref{AS:middle_state}) becomes $$q_m = \frac{q_r + q_l}{2} - \frac{f(q_r) - f(q_l)}{2a}.$$ In the original Lax-Friedrichs method, the wave speed $a$ is taken to be the same in every Riemann problem over the entire grid; in the local Lax-Friedrichs (LLF) method, a different speed $a$ may be chosen for each Riemann problem.
For stability, the wave speed should be chosen at least as large as the fastest wave speed appearing in the true Riemann solution. However, choosing a wave speed that is too large leads to excess numerical diffusion. For the LLF method (originally due to Rusanov), the wave speed is chosen as
$$a(q_l, q_r) = \max |f'(q)|,$$
where the maximum is taken over all values of $q$ between $q_r$ and $q_l$. This ensures stability, but may introduce substantial damping of slower waves.
A less dissipative solver can be obtained by allowing the left- and right-going waves to have different speeds.
This approach was developed in (Harten, Lax & van Leer, 1983). The solution is then determined by (\ref{AS:middle_state}). In the original HLL solver, the suggestion was again to use speeds that bound the possible speeds occurring in the true solution. For a scalar problem, this translates to
\begin{align*}
s^1 & = \min(f'(q)), \\
s^2 & = \max(f'(q)),
\end{align*}
where again the minimum and maximum are taken over all values of $q$ between $q_l$ and $q_r$. Many refinements of this choice have been proposed in the context of systems of equations; these will be discussed in later chapters.
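Under the same convexity assumption as before, these bounds reduce to endpoint evaluations, and the middle state again follows from the `two_wave_solve` sketch (assuming $s^1 \ne s^2$):

```python
def hll_solve(f, fprime, q_l, q_r):
    """HLL middle state and speeds for a scalar problem with convex flux."""
    s1 = min(fprime(q_l), fprime(q_r))    # lower bound on the true speeds
    s2 = max(fprime(q_l), fprime(q_r))    # upper bound on the true speeds
    q_m, wave1, wave2 = two_wave_solve(f, q_l, q_r, s1, s2)
    return q_m, s1, s2

# Burgers' equation again:
q_m, s1, s2 = hll_solve(lambda q: 0.5*q**2, lambda q: q, q_l=2., q_r=-1.)
```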