In the groundwater flow notebook we considered the differential equation governing groundwater flow, which relates the rate of change of $h$, the hydraulic head (a measure of pressure), to spatial gradients of the same quantity.
$$ S \frac{\partial h}{\partial t} + H = K(x) \frac{\partial^2 h}{\partial x^2} + \frac{\partial h}{\partial x}\frac{\partial K(x)}{\partial x} $$

We ignored $H$ and solved a number of examples.
You probably noticed that we used an unstable method to step the pressure, $h$, forward in time. Given all the trouble we went to exploring better ways to do this, don't you think it would be a good idea to try out the second-order Runge-Kutta update?

This is not very complicated to do, so I would like you to try it for the case where $K$ is constant.
You have most of the code you need. The gradx function will still be useful ... in fact, you really only have to replace the ## timestepping loop.
At the moment it looks like this:
\begin{equation} h(t+\Delta t) \approx h(t) + \Delta t \left( \frac{\Delta \left( \frac{\kappa \Delta h}{\Delta x} \right)}{ \Delta x} \right) \end{equation}

and you need to code up this:

\begin{equation} h^*(t+\Delta t /2 ) \approx h(t) + \frac{\Delta t}{2} \left( \frac{\Delta \left( \frac{\kappa \Delta h}{\Delta x} \right)}{ \Delta x} \right) \end{equation}

and

\begin{equation} h(t+\Delta t) \approx h(t) + \Delta t \left( \frac{\Delta \left( \frac{\kappa \Delta h^*}{\Delta x} \right)}{ \Delta x} \right) \end{equation}
In [ ]:
## Your code here
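One possible sketch of the midpoint (RK2) update for constant $\kappa$ is below. The grid, timestep, initial condition, and fixed-value boundary handling are all assumptions for illustration, and gradx is re-implemented here as a centred difference so the cell is self-contained; your own gradx from the earlier notebook should drop straight in.

```python
import numpy as np

def gradx(f, dx):
    """Centred finite-difference gradient (one-sided at the ends)."""
    return np.gradient(f, dx)

nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
kappa = 1.0
dt = 0.4 * dx**2 / kappa            # safely below the explicit stability limit (assumed)
steps = 200

h = np.exp(-((x - 0.5) ** 2) / 0.01)   # initial condition (assumed)

for step in range(steps):
    # half step: predictor h* at t + dt/2
    rhs = gradx(kappa * gradx(h, dx), dx)
    h_star = h + 0.5 * dt * rhs
    h_star[0] = h[0]; h_star[-1] = h[-1]     # fixed-value boundaries (assumed)

    # full step: evaluate the right-hand side at the midpoint estimate h*
    rhs_star = gradx(kappa * gradx(h_star, dx), dx)
    h = h + dt * rhs_star
    h[0] = h_star[0]; h[-1] = h_star[-1]
```

The only structural change from the forward-Euler loop is the extra half-step evaluation; everything else is unchanged.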
We know that there is an instability in the code which occurs if the timestep is too large. Did the "improved" timestepping work? Use the error-checking approach we ran through in the in-class exercise to see whether there is an improvement, no improvement, or whether this makes things worse.
In [ ]:
## Your test code here
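A sketch of one way to set up the error check: advance a spreading Gaussian with both forward Euler and RK2, then compare each against the analytic constant-$\kappa$ solution. The initial width $t_0$, grid, and timestep are assumptions; the analytic form assumes the domain is effectively infinite over the run time.

```python
import numpy as np

def laplacian(h, dx):
    """Standard three-point second derivative; endpoints left at zero."""
    lap = np.zeros_like(h)
    lap[1:-1] = (h[2:] - 2.0 * h[1:-1] + h[:-2]) / dx**2
    return lap

def analytic(x, t, kappa, t0=0.05):
    # Spreading Gaussian: exact solution of h_t = kappa h_xx (assumed setup)
    return np.sqrt(t0 / (t0 + t)) * np.exp(-((x - 0.5) ** 2) / (4.0 * kappa * (t0 + t)))

nx = 201
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
kappa = 1.0e-2
dt = 0.4 * dx**2 / kappa
steps = 400

h_euler = analytic(x, 0.0, kappa)
h_rk2 = h_euler.copy()

for step in range(steps):
    # forward Euler
    h_euler = h_euler + dt * kappa * laplacian(h_euler, dx)
    # RK2 midpoint
    h_star = h_rk2 + 0.5 * dt * kappa * laplacian(h_rk2, dx)
    h_rk2 = h_rk2 + dt * kappa * laplacian(h_star, dx)

h_exact = analytic(x, steps * dt, kappa)
err_euler = np.abs(h_euler - h_exact).max()
err_rk2 = np.abs(h_rk2 - h_exact).max()
print(err_euler, err_rk2)
```

Repeating this at several values of dt (including values near and above the explicit stability limit) shows whether RK2 changes the stability threshold or only the accuracy.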
We used the spline interpolation routines from scipy.interpolate to compute smooth interpolating functions and their derivatives. Try this as an alternative method of computing the second derivative of $h$ in the equations. 1) Does this give better derivatives (compute the actual error v. the previous finite-difference gradient function)? 2) Does this make for a more stable solution?
In [ ]:
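A minimal sketch of the comparison, assuming a cubic interpolating spline via scipy.interpolate.UnivariateSpline with smoothing switched off (s=0). The test function $\sin(\pi x)$ is an assumption chosen because its second derivative is known exactly, so both methods can be scored against the truth.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0.0, 1.0, 51)
dx = x[1] - x[0]
h = np.sin(np.pi * x)

# spline second derivative: fit an interpolating cubic, differentiate twice
spline = UnivariateSpline(x, h, k=3, s=0)
d2h_spline = spline.derivative(n=2)(x)

# centred finite-difference second derivative for comparison
d2h_fd = np.zeros_like(h)
d2h_fd[1:-1] = (h[2:] - 2.0 * h[1:-1] + h[:-2]) / dx**2

# exact second derivative of sin(pi x)
d2h_exact = -np.pi**2 * np.sin(np.pi * x)

err_spline = np.abs(d2h_spline[1:-1] - d2h_exact[1:-1]).max()
err_fd = np.abs(d2h_fd[1:-1] - d2h_exact[1:-1]).max()
print(err_spline, err_fd)
```

To test stability, substitute the spline second derivative for the finite-difference one inside the timestepping loop and see whether the same timestep limit applies.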