In [1]:
from IPython.core.display import HTML
css_file = '../ipython_notebook_styles/ngcmstyle.css'
HTML(open(css_file, "r").read())
Out[1]:
Directly relevant background material can be found in, e.g., David Ketcheson's HyperPython notebooks.
In [2]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Check that you are happy that the solution to the advection equation
$$ \begin{equation} \partial_t q + \partial_x (v q) = \partial_t q + \partial_x f(q) = 0 \end{equation} $$in the absence of boundaries, and for constant advection speed $v$, is given by $q(x, t) = q(x - vt, 0)$.
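As a quick sanity check, we can verify numerically that a translated profile satisfies the advection equation. This sketch uses an arbitrary smooth Gaussian (my choice, not the exercise data) and finite differences:

```python
import numpy as np

# For a smooth profile, q(x, t) = q0(x - v t) should satisfy
# d_t q + v d_x q = 0. The profile and the sample point are arbitrary.
q0_test = lambda x: np.exp(-20.0 * x**2)
v = 1.0
q = lambda x, t: q0_test(x - v * t)

x, t, h = 0.3, 0.2, 1e-5  # sample point and finite-difference step
dqdt = (q(x, t + h) - q(x, t - h)) / (2.0 * h)
dqdx = (q(x + h, t) - q(x - h, t)) / (2.0 * h)
print(dqdt + v * dqdx)  # ~0 up to finite-difference error
```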
Fix the domain to be $x \in [-1, 1]$. Set the initial data $q(x, 0) = q_0(x)$ where
$$ \begin{equation} q_0(x) = \begin{cases} \tfrac{1}{6} \left(G(x, z - \delta) + G(x, z + \delta) + 4 G(x, z) \right), & -0.8 \le x \le -0.6, \\ 1, & -0.4 \le x \le -0.2, \\ 1 - | 10 (x - 0.1) |, & 0 \le x \le 0.2, \\ \tfrac{1}{6} \left(F(x, a - \delta) + F(x, a + \delta) + 4 F(x, a) \right), & 0.4 \le x \le 0.6, \\ 0, & \text{otherwise}, \end{cases} \end{equation} $$where
$$ \begin{align} a &= 0.5, \\ z &= -0.7, \\ \delta &= 0.005, \\ G(x, s) & = \exp \left[ -\frac{\log(2)}{36 \delta^2} \left( x - s \right)^2 \right], \\ F(x, s) & = \sqrt{\max \left[ 1 - 100 (x - s)^2, 0 \right]}. \end{align} $$Plot the initial condition.
In [3]:
a = 0.5
z = -0.7
delta = 0.005
In [4]:
def G(x, s):
    return np.exp(-np.log(2.0) / (36.0 * delta**2) * (x - s)**2)

def F(x, s):
    return np.sqrt(np.maximum(1.0 - 100.0 * (x - s)**2, 0.0))
In [5]:
def q0(x):
    q = np.zeros_like(x)
    for i, xx in enumerate(x):
        if (-0.8 <= xx) and (xx <= -0.6):
            q[i] = (G(xx, z - delta) + G(xx, z + delta) + 4.0 * G(xx, z)) / 6.0
        elif (-0.4 <= xx) and (xx <= -0.2):
            q[i] = 1.0
        elif (0.0 <= xx) and (xx <= 0.2):
            q[i] = 1.0 - np.abs(10.0 * (xx - 0.1))
        elif (0.4 <= xx) and (xx <= 0.6):
            q[i] = (F(xx, a - delta) + F(xx, a + delta) + 4.0 * F(xx, a)) / 6.0
        else:
            q[i] = 0.0
    return q
In [6]:
x = np.linspace(-1, 1, 1000)
plt.plot(x, q0(x))
plt.xlabel(r"$x$")
plt.ylim(-0.05, 1.05);
Optional, but very useful: using JSAnimation, the IPython widgets, or similar, produce an animation of the solution in time, assuming periodic boundary conditions. Use $v = 1$.
In [7]:
from JSAnimation.IPython_display import display_animation
from matplotlib import animation
In [8]:
fig = plt.figure()
ax = plt.axes(xlim=(-1,1), ylim=(-0.05,1.05))
line = ax.plot([], [])[0]
t_end = 2.0 # Domain has length two and v = 1, so this is one full period.
v = 1.0
nsteps = 100
def animate(i):
    # Find the current time.
    t = t_end * i / (nsteps - 1)
    xhat = x - v * t  # The effective spatial location at the new time.
    xhat[xhat < -1.0] += 2.0  # Shift for the periodic boundary: this assumes v > 0, less than one period!
    line.set_data(x, q0(xhat))
    return (line,)
animation.FuncAnimation(fig, animate, frames=nsteps, interval=100)
Out[8]:
Write an FTBS algorithm to solve the advection equation. Remember, FTBS has
$$ \begin{equation} q^{n+1}_{i} = q^n_i - \frac{\Delta t}{\Delta x} \left( f^n_i - f^n_{i-1} \right). \end{equation} $$Remember the CFL condition: with $\Delta t = \lambda \Delta x$, stability requires $\lambda \le 1 / v$ (and FTBS is only stable for $v > 0$).
In [ ]:
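A minimal FTBS sketch, assuming periodic boundaries and $v > 0$. The function name, resolution, and the Gaussian test pulse are my own choices; the `q0` defined above can be passed in unchanged:

```python
import numpy as np

def ftbs(q_init, v=1.0, t_end=2.0, nx=400, cfl=0.9):
    """FTBS for the advection equation on [-1, 1], periodic, v > 0."""
    x = np.linspace(-1, 1, nx, endpoint=False)
    dx = x[1] - x[0]
    nt = int(np.ceil(t_end * v / (cfl * dx)))  # number of steps
    dt = t_end / nt                            # so nt * dt == t_end
    q = q_init(x).astype(float)
    for _ in range(nt):
        f = v * q
        # np.roll handles the periodic boundary: roll(f, 1)[i] = f[i-1].
        q = q - dt / dx * (f - np.roll(f, 1))
    return x, q

# A Gaussian pulse (a stand-in for the q0 above) should return to its
# starting position after one period, t_end = 2 = domain length / v.
gauss = lambda x: np.exp(-100.0 * (x + 0.5)**2)
x, q = ftbs(gauss)
```

Since FTBS is only first order, expect visible smearing of the profile even after a single period.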
The exact solution to the Riemann problem for the advection equation is
$$ \begin{equation} q^* = \begin{cases} q^{(L)} & v > 0, \\ q^{(R)} & v < 0 \end{cases} \end{equation} $$along the line $x / t = 0$ (i.e., along $x = 0$). Thus, when $v > 0$, the inter-cell flux is given by
$$ \begin{equation} f_{i + 1/2}(q^{(L)}_{i+1/2}, q^{(R)}_{i+1/2}) = v q^{(L)}_{i+1/2}. \end{equation} $$For Godunov's method, we have $q^{(R)}_{i-1/2} = \hat{q}_i = q^{(L)}_{i+1/2}$.
Convince yourself of this by drawing a picture of the cell.
Implement Godunov's method for the problem above.
Remember that Godunov's method uses
$$ \begin{equation} \hat{q}^{n+1}_i = \hat{q}^n_i + \frac{\Delta t}{\Delta x} \left[ f_{i-1/2} \left( \hat{q}^n_{i-1}, \hat{q}^n_i \right) - f_{i+1/2} \left( \hat{q}^n_{i}, \hat{q}^n_{i+1} \right) \right]. \end{equation} $$Repeat the convergence analysis.
In [ ]:
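One possible implementation of Godunov's method, written with an explicit inter-cell Riemann flux; for constant $v > 0$ the upwind flux makes it coincide with FTBS, which is a useful cross-check. The crude convergence loop at the end uses a smooth Gaussian (my own choice, not the exercise data) so a clean first-order rate is visible:

```python
import numpy as np

def riemann_flux(qL, qR, v=1.0):
    # Godunov (upwind) flux for linear advection: take the state the
    # characteristic carries into the interface.
    return v * np.where(v > 0, qL, qR)

def godunov(q_init, v=1.0, t_end=2.0, nx=400, cfl=0.9):
    """First-order Godunov for the advection equation on [-1, 1], periodic."""
    x = np.linspace(-1, 1, nx, endpoint=False)
    dx = x[1] - x[0]
    nt = int(np.ceil(t_end * abs(v) / (cfl * dx)))
    dt = t_end / nt
    q = q_init(x).astype(float)
    for _ in range(nt):
        # f[i] is the flux through interface i-1/2, between cells i-1 and i.
        f = riemann_flux(np.roll(q, 1), q, v)
        q = q - dt / dx * (np.roll(f, -1) - f)
    return x, q

# Crude convergence check against the exact solution q(x, t) = q0(x - v t):
# after one period (t_end = 2, v = 1) the pulse should return unchanged.
gauss = lambda x: np.exp(-100.0 * (x + 0.5)**2)
errors = []
for nx in (100, 200, 400):
    x, q = godunov(gauss, nx=nx)
    errors.append(np.sum(np.abs(q - gauss(x))) * (x[1] - x[0]))
# First order: each doubling of nx should roughly halve the L1 error.
```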
To go beyond first-order accuracy we need two things.
MUSCL-type methods approximate the data as piecewise linear. So
$$ \begin{equation} q(x) = \hat{q}_i + \sigma_i (x - x_i), \end{equation} $$where $\sigma_i$ approximates the slope of the data within cell $i$. Then
$$ \begin{align} q^{(L)}_{i+1/2} & = \hat{q}_i + \sigma_i \frac{\Delta x}{2}, \\ q^{(R)}_{i-1/2} & = \hat{q}_i - \sigma_i \frac{\Delta x}{2}. \end{align} $$We also need a higher-order integrator in time, such as a Runge-Kutta method. For example, RK2 would give
$$ \begin{align} \hat{q}^*_i & = \hat{q}^n_i - \frac{\Delta t}{\Delta x} \left( f^n_{i+1/2} - f^n_{i-1/2} \right) \\ \hat{q}^{n+1}_i & = \frac{1}{2} \hat{q}^n_i + \frac{1}{2}\left( \hat{q}^*_i - \frac{\Delta t}{\Delta x} \left( f^*_{i+1/2} - f^*_{i-1/2} \right) \right). \end{align} $$Use central differencing, $\sigma_i = \left( \hat{q}_{i+1} - \hat{q}_{i-1} \right) / (2 \Delta x)$, to approximate the slope.
Implement this unlimited scheme and repeat the above calculation for a single resolution. What do you see?
In [ ]:
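A sketch of the unlimited MUSCL/RK2 scheme, assuming $v > 0$ and periodic boundaries. The step-function data, the resolution, and the reduced CFL number of 0.4 (a cautious choice, since the unlimited scheme is only weakly dissipative) are my own:

```python
import numpy as np

def muscl_unlimited(q_init, v=1.0, t_end=2.0, nx=400, cfl=0.4):
    """MUSCL with central-difference slopes and RK2, periodic on [-1, 1]."""
    x = np.linspace(-1, 1, nx, endpoint=False)
    dx = x[1] - x[0]
    nt = int(np.ceil(t_end * abs(v) / (cfl * dx)))
    dt = t_end / nt

    def rhs(q):
        # Central slope: sigma_i = (q_{i+1} - q_{i-1}) / (2 dx).
        sigma = (np.roll(q, -1) - np.roll(q, 1)) / (2.0 * dx)
        qL = q + sigma * dx / 2.0   # left state at interface i+1/2
        f = v * qL                  # upwind flux at i+1/2, assuming v > 0
        return -(f - np.roll(f, 1)) / dx

    q = q_init(x).astype(float)
    for _ in range(nt):
        qstar = q + dt * rhs(q)                        # predictor step
        q = 0.5 * q + 0.5 * (qstar + dt * rhs(qstar))  # RK2 average
    return x, q

# A step profile exposes the problem clearly.
step = lambda x: np.where(np.abs(x + 0.5) < 0.15, 1.0, 0.0)
x, q = muscl_unlimited(step)
```

The overshoots above 1 and undershoots below 0 in `q` are exactly the oscillations this exercise asks about.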
Of course, the problem is the Gibbs oscillations near the discontinuities. Instead, the approximation to the slope should be limited. Try the minmod slope:
Let $\Delta q_{i-1/2} = \hat{q}_i - \hat{q}_{i-1}$. Then
$$ \begin{align} \sigma_i & = \text{minmod}(\Delta q_{i-1/2}, \Delta q_{i+1/2})/\Delta x \\ & = \begin{cases} \min(\Delta q_{i-1/2}, \Delta q_{i+1/2})/\Delta x & \text{ if } \Delta q_{i-1/2}, \Delta q_{i+1/2} > 0, \\ \max(\Delta q_{i-1/2}, \Delta q_{i+1/2})/\Delta x & \text{ if } \Delta q_{i-1/2}, \Delta q_{i+1/2} < 0, \\ 0 & \text{ if } \Delta q_{i-1/2} \cdot \Delta q_{i+1/2} \le 0. \end{cases} \end{align} $$Implement this limited scheme and repeat the above test, including the convergence analysis. Is the convergence rate first order, second order, or something in between? Is this what you expect?
In [18]:
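One possible implementation of the minmod-limited scheme, again assuming $v > 0$, periodic boundaries, and my own choices of names and parameters:

```python
import numpy as np

def minmod(a, b):
    # Zero when the two slopes disagree in sign, otherwise the one of
    # smaller magnitude.
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_minmod(q_init, v=1.0, t_end=2.0, nx=400, cfl=0.4):
    """MUSCL with minmod-limited slopes and RK2, periodic on [-1, 1]."""
    x = np.linspace(-1, 1, nx, endpoint=False)
    dx = x[1] - x[0]
    nt = int(np.ceil(t_end * abs(v) / (cfl * dx)))
    dt = t_end / nt

    def rhs(q):
        sigma = minmod(q - np.roll(q, 1), np.roll(q, -1) - q) / dx
        qL = q + sigma * dx / 2.0   # left state at interface i+1/2
        f = v * qL                  # upwind flux, assuming v > 0
        return -(f - np.roll(f, 1)) / dx

    q = q_init(x).astype(float)
    for _ in range(nt):
        qstar = q + dt * rhs(q)
        q = 0.5 * q + 0.5 * (qstar + dt * rhs(qstar))
    return x, q

step = lambda x: np.where(np.abs(x + 0.5) < 0.15, 1.0, 0.0)
x, q = muscl_minmod(step)
# The limiter removes the over/undershoots, at the price of clipping extrema.
```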
For Burgers' equation
$$ \begin{equation} \partial_t q + \partial_x f(q) = \partial_t q + \partial_x \left( \tfrac{1}{2} q^2 \right) = 0 \end{equation} $$the flux is $f = q^2 / 2$, making the PDE nonlinear. There is an exact solution, but it is rarely useful.
Check that you are happy that the exact solution to the Riemann problem along $x = 0$ is
$$ \begin{equation} q^* = \begin{cases} q^{(L)} & 0 < q^{(L)} < q^{(R)}, \\ 0 & q^{(L)} < 0 < q^{(R)}, \\ q^{(R)} & q^{(L)} < q^{(R)} < 0, \\ q^{(L)} & q^{(L)} > q^{(R)} \quad \text{and} \quad q^{(L)} + q^{(R)} > 0,\\ q^{(R)} & q^{(L)} > q^{(R)} \quad \text{and} \quad q^{(L)} + q^{(R)} < 0. \end{cases} \end{equation} $$Implement Godunov's method applied to Burgers' equation. Using the domain $x \in [-1, 1]$ and evolving to $t = 0.5$, try solving with the initial data
$$ \begin{align} q(x, 0) & = \sin(2 \pi x) , \\ q(x, 0) &= \begin{cases} 1 & |x| < \tfrac{1}{3}, \\ -1 & \text{otherwise}. \end{cases} \end{align} $$
In [18]:
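A sketch of Godunov's method for Burgers' equation using the exact Riemann solution above. Periodic boundaries, the function names, the resolution, and the CFL number are my own choices; because the wave speed now depends on $q$, the time step is recomputed each step:

```python
import numpy as np

def riemann_burgers(qL, qR):
    # The exact Riemann state q* along x/t = 0 (the five cases above).
    return np.where(qL > qR,
                    # Shock with speed s = (qL + qR) / 2: upwind on s.
                    np.where(qL + qR > 0.0, qL, qR),
                    # Rarefaction: qL if it moves right, qR if it moves
                    # left, otherwise the transonic state q = 0.
                    np.where(qL > 0.0, qL, np.where(qR < 0.0, qR, 0.0)))

def godunov_burgers(q_init, t_end=0.5, nx=400, cfl=0.9):
    """First-order Godunov for Burgers' equation, periodic on [-1, 1]."""
    x = np.linspace(-1, 1, nx, endpoint=False)
    dx = x[1] - x[0]
    q = q_init(x).astype(float)
    t = 0.0
    while t < t_end - 1e-12:
        # Nonlinear problem: the max wave speed changes, so recompute dt.
        dt = min(cfl * dx / max(np.max(np.abs(q)), 1e-12), t_end - t)
        qstar = riemann_burgers(np.roll(q, 1), q)  # q* at interface i-1/2
        f = 0.5 * qstar**2
        q = q - dt / dx * (np.roll(f, -1) - f)
        t += dt
    return x, q

x, q = godunov_burgers(lambda x: np.sin(2.0 * np.pi * x))
```

By $t = 0.5$ the sine data has steepened into shocks (shocks form at $t = 1/(2\pi)$ here), which the Riemann solver captures without oscillation.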
Repeat the exercise with the MUSCL scheme.
In [18]:
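Combining the minmod-limited MUSCL reconstruction with the Burgers' Riemann solver gives one possible second-order sketch, under the same assumptions as before (periodic boundaries, my own parameter choices):

```python
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def riemann_burgers(qL, qR):
    return np.where(qL > qR,
                    np.where(qL + qR > 0.0, qL, qR),
                    np.where(qL > 0.0, qL, np.where(qR < 0.0, qR, 0.0)))

def muscl_burgers(q_init, t_end=0.5, nx=400, cfl=0.4):
    """Minmod-limited MUSCL + RK2 for Burgers' equation, periodic on [-1, 1]."""
    x = np.linspace(-1, 1, nx, endpoint=False)
    dx = x[1] - x[0]

    def rhs(q):
        sigma = minmod(q - np.roll(q, 1), np.roll(q, -1) - q) / dx
        qL = q + sigma * dx / 2.0               # left state at i+1/2
        qR = np.roll(q - sigma * dx / 2.0, -1)  # right state at i+1/2
        f = 0.5 * riemann_burgers(qL, qR)**2    # f[i] = flux at i+1/2
        return -(f - np.roll(f, 1)) / dx

    q = q_init(x).astype(float)
    t = 0.0
    while t < t_end - 1e-12:
        dt = min(cfl * dx / max(np.max(np.abs(q)), 1e-12), t_end - t)
        qstar = q + dt * rhs(q)
        q = 0.5 * q + 0.5 * (qstar + dt * rhs(qstar))
        t += dt
    return x, q

# The square-wave data: a stationary shock at x = 1/3 and a rarefaction
# spreading from x = -1/3.
square = lambda x: np.where(np.abs(x) < 1.0 / 3.0, 1.0, -1.0)
x, q = muscl_burgers(square)
```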