We're now going to use finite differences to find a black hole apparent horizon.
The spacetime we're going to look at is simplified: the black hole singularities all lie on the $z$ axis, so the spacetime is axisymmetric, and we additionally assume reflection symmetry about the $z = 0$ plane.
We then compute the expansion of outgoing null geodesics, and look for where this vanishes. The surface with radius $h(\theta)$ where this occurs is the apparent horizon. With our assumptions, $h$ obeys the boundary value problem
$$ \frac{d^2 h}{d \theta^2} = 2 h + \frac{3}{h} \left( \frac{d h}{d \theta} \right)^2 + f \left( \theta, h, \frac{d h}{d \theta} \right), \qquad \frac{d h}{d \theta} ( \theta = 0 ) = 0 = \frac{d h}{d \theta} ( \theta = \pi/2 ). $$The function $f$ encodes the spacetime effects due to the singularities.
To solve this problem we convert it to first-order form. Introduce the vector
$$ {\bf H} = \begin{pmatrix} h \\ \frac{d h}{d \theta} \end{pmatrix}. $$Then we have the problem
$$ \frac{d}{d \theta} {\bf H} = {\bf F}({\bf H}, \theta) = \begin{pmatrix} H_2 \\ 2 H_1 + \frac{3}{H_1} H_2^2 + f(\theta, {\bf H}) \end{pmatrix}, \qquad H_2(\theta = 0) = 0 = H_2(\theta = \pi/2). $$We'll give the entire right-hand side as code:
In [ ]:
import numpy
from matplotlib import pyplot
%matplotlib notebook
In [ ]:
def horizon_RHS(H, theta, z_singularities):
    """
    The RHS function for the apparent horizon problem.

    Parameters
    ----------
    H : array
        vector [h, dh/dtheta]
    theta : double
        angle
    z_singularities : array
        Location of the singularities on the z axis; non-negative

    Returns
    -------
    dHdtheta : array
        RHS
    """
    assert numpy.all(numpy.array(z_singularities) >= 0.0), \
        "Location of singularities cannot be negative"
    h = H[0]
    dh = H[1]
    # Conformal factor and its derivatives, summed over the singularities.
    psi = 1.0
    dpsi_dr = 0.0
    dpsi_dtheta = 0.0
    for z in z_singularities:
        distance = numpy.sqrt((h*numpy.sin(theta))**2 + (h*numpy.cos(theta) - z)**2)
        psi += 0.5/distance
        dpsi_dr -= 0.5*(h - z*numpy.cos(theta))/distance**3
        dpsi_dtheta -= 0.5*h*z*numpy.sin(theta)/distance**3
        # Apply reflection symmetry: include the mirror singularity at -z.
        if z > 0.0:
            distance = numpy.sqrt((h*numpy.sin(theta))**2 + (h*numpy.cos(theta) + z)**2)
            psi += 0.5/distance
            dpsi_dr -= 0.5*(h + z*numpy.cos(theta))/distance**3
            dpsi_dtheta += 0.5*h*z*numpy.sin(theta)/distance**3
    C2 = 1.0 / (1.0 + (dh / h)**2)
    # Impose that the term involving cot(theta) vanishes on axis.
    if (abs(theta) < 1e-16) or (abs(theta - numpy.pi) < 1e-16):
        cot_theta_dh_C2 = 0.0
    else:
        cot_theta_dh_C2 = dh / (numpy.tan(theta) * C2)
    dHdtheta = numpy.zeros_like(H)
    dHdtheta[0] = dh
    dHdtheta[1] = 2.0*h - cot_theta_dh_C2 + \
        4.0*h**2/(psi*C2)*(dpsi_dr - dpsi_dtheta*dh/h**2) + 3.0*dh**2/h
    return dHdtheta
We now need to solve the boundary value problem. We'll do this using shooting.
If we knew the initial radius of the horizon, $h(\theta = 0) = H_1(\theta = 0) = h_0$, we could solve the initial value problem
$$ \frac{d}{d \theta} {\bf H} = {\bf F}({\bf H}, \theta) = \begin{pmatrix} H_2 \\ 2 H_1 + \frac{3}{H_1} H_2^2 + f(\theta, {\bf H}) \end{pmatrix}, \qquad {\bf H}(\theta = 0) = \begin{pmatrix} h_0 \\ 0 \end{pmatrix}. $$For example, the simple Schwarzschild black hole will have $h_0 = 1/2$, in this slicing.
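We can check this value against the right-hand side coded above. For constant $h$ the $\cot \theta$ and $(dh/d\theta)^2$ terms vanish, and a single singularity at the origin gives $\psi = 1 + 1/(2h)$ and $\partial \psi / \partial r = -1/(2h^2)$, so the right-hand side reduces to
$$ 2 h + \frac{4 h^2}{\psi} \frac{\partial \psi}{\partial r} = 2 h - \frac{2}{1 + \frac{1}{2h}}, $$which vanishes exactly at $h = 1/2$: the constant-radius surface $h \equiv 1/2$ solves the problem.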
To solve the initial value problem we can re-use our finite differencing algorithms. For example, we evaluate the initial value problem equation at $\theta_i$ using forward differencing, to get
\begin{align} \left. \frac{d}{d \theta} {\bf H} \right|_{\theta = \theta_i} & \approx \frac{1}{\Delta \theta} \left( {\bf H}^{(i+1)} - {\bf H}^{(i)} \right) \\ & = {\bf F}({\bf H}^{(i)}, \theta_i), \end{align}where we have denoted ${\bf H}(\theta_i) \equiv {\bf H}^{(i)}$. We then re-arrange this to get Euler's method
$$ {\bf H}^{(i+1)} = {\bf H}^{(i)} + \Delta \theta \, {\bf F}({\bf H}^{(i)}, \theta_i). $$We can use this to solve for the Schwarzschild case:
In [ ]:
def euler_step(Hi, theta_i, dtheta, z_singularity):
    """
    Euler's method - one step
    """
    # One forward-Euler update: H^{i+1} = H^i + dtheta * F(H^i, theta_i).
    return Hi + dtheta * horizon_RHS(Hi, theta_i, z_singularity)
In [ ]:
Ntheta = 100
z_singularity = [0.0]
theta = numpy.linspace(0.0, numpy.pi/2.0, Ntheta)
dtheta = theta[1] - theta[0]
H = numpy.zeros((2, Ntheta))
H[:, 0] = [0.5, 0.0]
for i in range(Ntheta-1):
    H[:, i+1] = euler_step(H[:, i], theta[i], dtheta, z_singularity)
In [ ]:
pyplot.figure()
pyplot.polar(theta, H[0,:])
pyplot.show()
We see that this has worked nicely. However, Euler's method is very inaccurate on more complex problems, as it's only first-order convergent. We would like to use a higher-order method.
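To see that first-order convergence concretely, here is a minimal standalone check on a toy problem (not the horizon equation): integrating $y' = -y$, $y(0) = 1$, to $t = 1$, where the exact answer is $e^{-1}$, the error roughly halves each time the step size is halved.
In [ ]:
# Toy convergence check: Euler's method applied to y' = -y.
# The error at t = 1 should roughly halve as the step size halves.
for N in [10, 20, 40, 80]:
    dt = 1.0 / N
    y = 1.0
    for i in range(N):
        y = y + dt * (-y)
    print(N, abs(y - numpy.exp(-1.0)))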
When looking at central differencing earlier we used information from both sides of the point where we took the derivative. That gave higher accuracy, but it isn't directly available in the initial value case, where the data ahead of the current point doesn't exist yet.
Instead, we use many Euler steps combined. Each one gives an approximation to "future" data, which can be used to approximate the derivative at more locations.
For example, the Euler step above starts from ${\bf H}^{(i)}$ and computes ${\bf F}^{(i)}$ to approximate ${\bf H}^{(i+1)}$. We can use this approximation to give us ${\bf F}^{(i+1)}$.
Now, a more accurate solution would be
$$ {\bf H}^{(i+1)} = {\bf H}^{(i)} + \int_{\theta_i}^{\theta_{i+1}} \text{d} \theta \, {\bf F}({\bf H}, \theta). $$In Euler's method we are effectively representing the value of the integral by the value of the integrand at the start, multiplied by the width $\Delta \theta$. We could now approximate it by the average value of the integrand, $({\bf F}^{(i)} + {\bf F}^{(i+1)})/2$, multiplied by the width $\Delta \theta$. This gives the algorithm
\begin{align} {\bf H}^{(p)} &= {\bf H}^{(i)} + \Delta \theta \, {\bf F}({\bf H}^{(i)}, \theta_i), \\ {\bf H}^{(i+1)} &= {\bf H}^{(i)} + \frac{\Delta \theta}{2} \left( {\bf F}({\bf H}^{(i)}, \theta_i) + {\bf F}({\bf H}^{(p)}, \theta_{i+1}) \right) \\ &= \frac{1}{2} \left( {\bf H}^{(i)} + {\bf H}^{(p)} + \Delta \theta \, {\bf F}({\bf H}^{(p)}, \theta_{i+1}) \right). \end{align}The final re-arrangement ensures we do not have to store or re-compute ${\bf F}^{(i)}$. This is one of the Runge-Kutta methods. This version is second order accurate, and a big improvement over Euler's method.
In [ ]:
def rk2_step(Hi, theta_i, dtheta, z_singularity):
    """
    RK2 method - one step
    """
    # Predictor: an Euler step to approximate H at theta_{i+1}.
    Hp = Hi + dtheta * horizon_RHS(Hi, theta_i, z_singularity)
    # Corrector: average the slopes at the start and predicted end points.
    return 0.5 * (Hi + Hp + dtheta * horizon_RHS(Hp, theta_i + dtheta, z_singularity))
In [ ]:
H = numpy.zeros((2, Ntheta))
H[:, 0] = [0.5, 0.0]
for i in range(Ntheta-1):
    H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity)
In [ ]:
pyplot.figure()
pyplot.polar(theta, H[0,:])
pyplot.show()
We still can't find a horizon unless we know the initial radius. However, there is a way around this. Let's see what happens if we compute the Schwarzschild case using the wrong initial radius:
In [ ]:
initial_guesses = numpy.linspace(0.4, 0.6, 10)
solutions = []
z_singularity = [0.0]
for h0 in initial_guesses:
    # Solve the initial value problem from this guess and store h(theta).
    H = numpy.zeros((2, Ntheta))
    H[:, 0] = [h0, 0.0]
    for i in range(Ntheta-1):
        H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity)
    solutions.append(H[0, :])
pyplot.figure()
for r in solutions:
    pyplot.polar(theta, r)
pyplot.show()
We see that the surfaces that start with too small a radius curve back in: their derivative at the equator is negative. The surfaces that start too large diverge: their derivative is positive. We know that the true solution has vanishing derivative there.
Let's explicitly plot the derivative at the endpoint.
In [ ]:
initial_guesses = numpy.linspace(0.4, 0.6, 100)
dhdtheta_end = numpy.zeros_like(initial_guesses)
z_singularity = [0.0]
for guess, h0 in enumerate(initial_guesses):
    # Solve the initial value problem and record dh/dtheta at theta = pi/2.
    H = numpy.zeros((2, Ntheta))
    H[:, 0] = [h0, 0.0]
    for i in range(Ntheta-1):
        H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity)
    dhdtheta_end[guess] = H[1, -1]
pyplot.figure()
pyplot.plot(initial_guesses, dhdtheta_end)
pyplot.xlabel(r"$h_0$")
pyplot.ylabel(r"$dh/d\theta(\pi/2)$")
pyplot.show()
We see that the derivative vanishes precisely where the horizon should be, exactly as expected.
This also gives us a way of solving for the apparent horizon. We want to solve the equation
$$ R(h_0) = 0. $$The function $R$ is given by $R(h_0) = H_2(\pi/2 ; h_0)$. In other words, we solve the initial value problem with initial radius $h_0$, and take the value of $H_2 = dh/d\theta$ at the equator as the residual.
We can code this residual function as
In [ ]:
def residual(h0, z_singularities):
    """
    The residual function for the shooting method.
    """
    # Solve the IVP from h0; the residual is dh/dtheta at theta = pi/2.
    H = numpy.zeros((2, Ntheta))
    H[:, 0] = [h0, 0.0]
    for i in range(Ntheta-1):
        H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularities)
    return H[1, -1]
Finally, we need to find the root of this equation.
Problems where we are given an algebraic, nonlinear function ${\bf R}$ and asked to find ${\bf x}$ such that ${\bf R}({\bf x}) = {\bf 0}$ are nonlinear root-finding problems. Many standard solution methods are based on Newton's algorithm:

1. start from some guess $x^{(n)}$;
2. construct the tangent to $R$ at $x^{(n)}$;
3. take the point where the tangent crosses zero as the next guess $x^{(n+1)}$;
4. repeat until the guesses have converged.
Computing the derivative for the tangent in step 2 is slow; instead we use finite differencing again.
In one dimension, Newton's method is
$$ x^{(n+1)} = x^{(n)} - \frac{R(x^{(n)})}{R'(x^{(n)})}. $$Replacing the derivative with a finite difference gives
$$ x^{(n+1)} = x^{(n)} - \frac{R(x^{(n)}) \left( x^{(n)} - x^{(n-1)} \right)}{R(x^{(n)}) - R(x^{(n-1)})}. $$This is the secant method. It's much easier to implement, but requires two initial guesses.
In [ ]:
def secant(R, x0, x1, args, tolerance=1e-10):
    """
    Secant method
    """
    R0 = R(x0, args)
    while abs(x1 - x0) > tolerance:
        R1 = R(x1, args)
        # Newton's update, with the derivative replaced by a finite difference.
        x0, x1, R0 = x1, x1 - R1 * (x1 - x0) / (R1 - R0), R1
    return x1
We apply this to the Schwarzschild case:
In [ ]:
h0 = secant(residual, 0.4, 0.6, z_singularity)
print("Computed initial radius is {}".format(h0))
And from this we can compute the correct horizon.
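For example, reusing the RK2 driver pattern from the cells above:
In [ ]:
# Solve the initial value problem from the computed radius and plot it.
H = numpy.zeros((2, Ntheta))
H[:, 0] = [h0, 0.0]
for i in range(Ntheta-1):
    H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity)
pyplot.figure()
pyplot.polar(theta, H[0, :])
pyplot.show()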
What happens if we get the guess wildly wrong? In this simple case it will nearly always converge to the "right" answer, but in general a poor initial guess means the algorithm (as with most root-finding algorithms) won't converge.
We can now compute the more interesting binary black hole case, where the singularities are at $z = \pm 0.75$. Using the reflection symmetry, we only need to specify the singularity at $z = +0.75$:
In [ ]:
z_singularity = [0.75]
We can now check what sort of initial radius $h_0$ will be needed for the horizon:
In [ ]:
initial_guesses = numpy.linspace(1.2, 1.4, 100)
dhdtheta_end = numpy.zeros_like(initial_guesses)
z_singularity = [0.75]
for guess, h0 in enumerate(initial_guesses):
    # Solve the initial value problem and record dh/dtheta at theta = pi/2.
    H = numpy.zeros((2, Ntheta))
    H[:, 0] = [h0, 0.0]
    for i in range(Ntheta-1):
        H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity)
    dhdtheta_end[guess] = H[1, -1]
pyplot.figure()
pyplot.plot(initial_guesses, dhdtheta_end)
pyplot.xlabel(r"$h_0$")
pyplot.ylabel(r"$dh/d\theta(\pi/2)$")
pyplot.show()
We see the algorithm is having problems for small radii, but the plot suggests that the correct answer is roughly $h_0 \in [1.26, 1.3]$. So we use root-finding:
In [ ]:
h0 = secant(residual, 1.26, 1.3, z_singularity)
And finally, we compute and plot the horizon surface.
In [ ]:
z_singularity = [0.75]
H = numpy.zeros((2, Ntheta))
H[:, 0] = [h0, 0.0]
for i in range(Ntheta-1):
    H[:, i+1] = rk2_step(H[:, i], theta[i], dtheta, z_singularity)
pyplot.figure()
pyplot.polar(theta, H[0, :])
pyplot.show()
We see (or can imagine) the stretched apparent horizon surrounding two black holes. See, e.g., this longer code or this paper for comparison.
The results above used a fixed number of points to solve the initial value problem, and a fixed tolerance for the root-find. Check how the solution varies as you change both accuracies (a starting sketch follows). Which is the limiting factor? How much difference does it make to the computational cost?
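One possible starting point, as a sketch (the resolutions and tolerances below are arbitrary choices; residual reads the grid variables Ntheta, theta and dtheta from the notebook's global scope, so we rebind them in the loop):
In [ ]:
# Sketch: vary the grid resolution and the root-find tolerance separately.
for Ntheta in [50, 100, 200, 400]:
    theta = numpy.linspace(0.0, numpy.pi/2.0, Ntheta)
    dtheta = theta[1] - theta[0]
    print(Ntheta, secant(residual, 1.26, 1.3, z_singularity))
for tol in [1e-4, 1e-6, 1e-8, 1e-10]:
    print(tol, secant(residual, 1.26, 1.3, z_singularity, tolerance=tol))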
The methods introduced above are basic, and not particularly accurate or robust. Standard libraries provide solvers for both initial value problems and root-finding problems. Replace the implementation above with a version using odeint from scipy.integrate and brentq from scipy.optimize. Compare the performance and ease of implementation of both versions.
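A minimal sketch of how that might look (residual_scipy is a new helper name; note that horizon_RHS already has the (H, theta, args) call signature that odeint expects, and the bracket passed to brentq comes from the exploration above):
In [ ]:
from scipy.integrate import odeint
from scipy.optimize import brentq

def residual_scipy(h0, z_singularities):
    # Solve the initial value problem with odeint; it returns an array
    # of shape (len(theta), 2), so [-1, 1] is dh/dtheta at theta = pi/2.
    H = odeint(horizon_RHS, [h0, 0.0], theta, args=(z_singularities,))
    return H[-1, 1]

# brentq needs an interval on which the residual changes sign.
h0 = brentq(residual_scipy, 1.26, 1.3, args=([0.75],))
print("Computed initial radius is {}".format(h0))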