The data file needed for this tutorial can be downloaded as follows:
In [ ]:
!wget https://raw.githubusercontent.com/rodluger/tutorials/master/gps/data/sample_transit.txt
!mv *.txt data/
In [ ]:
import numpy as np
from scipy.linalg import cho_factor


def ExpSquaredKernel(t1, t2=None, A=1.0, l=1.0):
    """
    Return the ``N x M`` exponential squared
    covariance matrix between time vectors `t1`
    and `t2`. The kernel has amplitude `A` and
    lengthscale `l`.
    """
    if t2 is None:
        t2 = t1
    T2, T1 = np.meshgrid(t2, t1)
    return A ** 2 * np.exp(-0.5 * (T1 - T2) ** 2 / l ** 2)


def ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0):
    """
    Return the log of the GP likelihood of the
    data `y(t)` given uncertainty `sigma` and
    an Exponential Squared Kernel with amplitude `A`
    and lengthscale `l`.
    """
    # The covariance and its determinant
    npts = len(t)
    kernel = ExpSquaredKernel
    K = kernel(t, A=A, l=l) + sigma ** 2 * np.eye(npts)

    # The marginal log likelihood
    log_like = -0.5 * np.dot(y.T, np.linalg.solve(K, y))
    log_like -= 0.5 * np.linalg.slogdet(K)[1]
    log_like -= 0.5 * npts * np.log(2 * np.pi)
    return log_like


def draw_from_gaussian(mu, S, ndraws=1, eps=1e-12):
    """
    Generate samples from a multivariate gaussian
    specified by covariance ``S`` and mean ``mu``.
    (We derived these equations in Day 1, Notebook 01, Exercise 7.)
    """
    npts = S.shape[0]
    # Cholesky-factorize S (with a small jitter term for numerical stability)
    L, _ = cho_factor(S + eps * np.eye(npts), lower=True)
    L = np.tril(L)
    u = np.random.randn(npts, ndraws)
    x = np.dot(L, u) + mu[:, None]
    return x.T


def compute_gp(t_train, y_train, t_test, sigma=0, A=1.0, l=1.0):
    """
    Compute the mean vector and covariance matrix of a GP
    at times `t_test` given training points `y_train(t_train)`.
    The training points have uncertainty `sigma` and the
    kernel is assumed to be an Exponential Squared Kernel
    with amplitude `A` and lengthscale `l`.
    """
    # Compute the required matrices
    kernel = ExpSquaredKernel
    Stt = kernel(t_train, A=A, l=l)
    Stt += sigma ** 2 * np.eye(Stt.shape[0])
    Spp = kernel(t_test, A=A, l=l)
    Spt = kernel(t_test, t_train, A=A, l=l)

    # Compute the mean and covariance of the GP
    mu = np.dot(Spt, np.linalg.solve(Stt, y_train))
    S = Spp - np.dot(Spt, np.linalg.solve(Stt, Spt.T))
    return mu, S
Let's time how long our custom implementation of a GP takes for a rather long dataset. Create a time array of 10,000 points between 0 and 10 and time how long it takes to sample the prior of the GP for the default kernel parameters (unit amplitude and timescale). Add a bit of noise to the sample and then time how long it takes to evaluate the log likelihood for the dataset. Make sure to store the value of the log likelihood for later.
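Here is one possible way to set up this test, using the functions defined above. The noise level sigma = 0.1 is an arbitrary choice, and you may need to increase eps in draw_from_gaussian if the Cholesky factorization of such a large covariance matrix complains about positive definiteness.
import time

t = np.linspace(0, 10, 10000)
sigma = 0.1

# Time a draw from the GP prior (zero mean, default kernel parameters A = l = 1)
start = time.time()
mu, S = np.zeros_like(t), ExpSquaredKernel(t)
y = draw_from_gaussian(mu, S)[0]
print("Prior sample: %.3f s" % (time.time() - start))

# Add a bit of noise, then time the log-likelihood evaluation
y += sigma * np.random.randn(len(t))
start = time.time()
ln_like = ln_gp_likelihood(t, y, sigma=sigma, A=1.0, l=1.0)
print("Log likelihood: %.3f s" % (time.time() - start))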
Let's time how long it takes to do the same operations using the george package (pip install george).
The kernel we'll use is
kernel = amp ** 2 * george.kernels.ExpSquaredKernel(tau ** 2)
where amp = 1 and tau = 1 in this case.
To instantiate a GP using george, simply run
gp = george.GP(kernel)
The george package pre-computes a lot of matrices that are re-used in different operations, so before anything else, ask it to compute the GP model for your timeseries:
gp.compute(t, sigma)
Note that we've only given it the time array and the uncertainties, so as long as those remain the same, you don't have to re-compute anything. This will save you a lot of time in the long run!
Finally, the log likelihood is given by gp.log_likelihood(y) and a sample can be drawn by calling gp.sample().
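Putting those pieces together, the george timing test might look something like this (a sketch re-using the t, y, and sigma values defined above):
import time
import george

# Same kernel as before: unit amplitude and unit timescale
amp, tau = 1.0, 1.0
kernel = amp ** 2 * george.kernels.ExpSquaredKernel(tau ** 2)
gp = george.GP(kernel)

# Pre-compute the factorized covariance for this time array and uncertainty
start = time.time()
gp.compute(t, sigma)
print("compute: %.3f s" % (time.time() - start))

# Log likelihood of the same noisy sample as before
start = time.time()
print("log likelihood:", gp.log_likelihood(y))
print("log_likelihood: %.3f s" % (time.time() - start))

# Draw a sample from the GP prior
start = time.time()
y_george = gp.sample()
print("sample: %.3f s" % (time.time() - start))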
How do the speeds compare? Did you get the same value of the likelihood (assuming you computed it for the same sample in both cases)?
george offers a fancy GP solver called the HODLR solver, which makes some approximations that dramatically speed up the matrix algebra. Instantiate the GP object again by passing the keyword solver=george.HODLRSolver and re-compute the log likelihood. How long did that take?
(I wasn't able to draw samples using the HODLR solver; unfortunately this may not be implemented.)
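Here is a minimal sketch of the same test with the HODLR solver (same kernel, data, and uncertainties as above):
import time

# Same kernel, but with the approximate HODLR solver
gp = george.GP(kernel, solver=george.HODLRSolver)
gp.compute(t, sigma)

start = time.time()
print("log likelihood:", gp.log_likelihood(y))
print("log_likelihood: %.3f s" % (time.time() - start))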
The george package is super useful for GP modeling, and I recommend you read over the docs and examples. It implements several different kernels that come in handy in different situations, and it has support for multi-dimensional GPs. But if all you care about are GPs in one dimension (in this case, we're only doing GPs in the time domain, so we're good), then celerite is what it's all about:
pip install celerite
Check out the docs here, as well as several tutorials. There is also a paper that discusses the math behind celerite. The basic idea is that for certain families of kernels, there exist extremely efficient methods of factorizing the covariance matrices. Whereas GP fitting typically scales with the number of datapoints $N$ as $N^3$, celerite is able to do everything in order $N$ (!!!) This is a huge advantage, especially for datasets with tens or hundreds of thousands of data points. Using george or any homebuilt GP model for datasets larger than about 10,000 points is simply intractable, but with celerite you can do it in a breeze.
Repeat the timing tests, but this time using celerite. Note that the Exponential Squared Kernel is not available in celerite, because it doesn't have the special form needed to make its factorization fast. Instead, use the Matern 3/2 kernel, which is qualitatively similar, and which can be approximated quite well in terms of the celerite basis functions:
kernel = celerite.terms.Matern32Term(np.log(1), np.log(1))
Note that celerite accepts the log of the amplitude and the log of the timescale. Other than this, you should be able to compute the likelihood and draw a sample with the same syntax as george.
How much faster did it run?
Let's use celerite for a real application: fitting an exoplanet transit model in the presence of correlated noise.
Below is a (fictitious) light curve for a star with a transiting planet. There is a transit visible to the eye at $t = 0$, which (say) is when you'd expect the planet to transit if its orbit were perfectly periodic. However, a recent paper claims that the planet shows transit timing variations, which are indicative of a second, perturbing planet in the system, and that a transit at $t = 0$ can be ruled out at 3 $\sigma$. Your task is to verify this claim.
Assume you have no prior information on the planet other than the transit occurs in the observation window, the depth of the transit is somewhere in the range $(0, 1)$, and the transit duration is somewhere between $0.1$ and $1$ day. You don't know the exact process generating the noise, but you are certain that there's correlated noise in the dataset, so you'll have to pick a reasonable kernel and estimate its hyperparameters.
Fit the transit with a simple inverted Gaussian with three free parameters:
def transit_shape(depth, t0, dur):
    # Inverted Gaussian transit shape; note that this uses the global time array `t`
    return -depth * np.exp(-0.5 * (t - t0) ** 2 / (0.2 * dur) ** 2)
Read the celerite docs to figure out how to solve this problem efficiently.
HINT: I borrowed heavily from this tutorial, so you might want to take a look at it...
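As a starting point, here is a minimal sketch of how one might combine the transit model with a celerite GP likelihood. The Matern-3/2 kernel, the particular parametrization of the GP hyperparameters, and the uniform priors on depth, t0, and dur are all assumptions you are free to change.
import celerite
from celerite import terms

def log_prob(params):
    """
    Log posterior for the transit + correlated-noise model.
    Assumes the global arrays `t`, `y`, `yerr` loaded in the cell below.
    """
    log_sigma, log_rho, depth, t0, dur = params

    # Uniform priors from the problem statement (plus broad bounds on the GP terms)
    if not (0 < depth < 1 and t.min() < t0 < t.max() and 0.1 < dur < 1):
        return -np.inf
    if not (-10 < log_sigma < 10 and -10 < log_rho < 10):
        return -np.inf

    # Matern-3/2 GP describes the correlated noise about the transit model
    gp = celerite.GP(terms.Matern32Term(log_sigma, log_rho))
    gp.compute(t, yerr)
    return gp.log_likelihood(y - transit_shape(depth, t0, dur))
You could then sample log_prob with an MCMC package (emcee, for instance) and look at the marginal posterior of t0: if its 3-sigma credible interval excludes t0 = 0, the claim in the paper holds up. For efficiency, celerite also lets you re-use a single GP object and update its parameters with gp.set_parameter_vector rather than rebuilding the kernel on every call.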
In [19]:
import matplotlib.pyplot as plt
t, y, yerr = np.loadtxt("data/sample_transit.txt", unpack=True)
plt.errorbar(t, y, yerr=yerr, fmt=".k", capsize=0)
plt.xlabel("time")
plt.ylabel("relative flux");