SLCosmo Round-trip Test

In this notebook we exercise SLCosmo's mock data generation and inference methods to simulate a TDC2 submission and infer a Hubble constant that we can compare with the input value. This is a functional test, in the sense that if our round trip does not get us back to where we started, there must be at least one bug in the mathematics, the code, or both.


In [1]:
%matplotlib inline
%load_ext autoreload
%autoreload 2

In [2]:
import desc.slcosmo

Generating a Mock TDC2 Submission


In [3]:
Lets = desc.slcosmo.SLCosmo()
Lets.make_some_mock_data(Nlenses=100, Nsamples=100, quad_fraction=0.17, stem='roundtrip')

This created a set of data files. Let's look at one of them.


In [4]:
!head -25 roundtrip_time_delays_13.txt


# Time Delay Challenge 2 Posterior Sample Time Delays
# 
# Notes:
# * Time delays should be given in days. Positive dt_AB means that light
#   curve A leads (not lags) light curve B.
# * Q is the H0-free time delay distance, a function of zd, zs and
#   cosmology. Q has units of km / s: D_dt = Q / H0
# * Fermat potential differences DeltaFP are given in units of
#   day km / s / Mpc, such that the predicted time delay is
#   dt = (Q / (c * H0)) * DeltaFP,  in days. c = 3.00e5 km/s
# 
# Q: 372470.809596
# DeltaFP_AB: 1173.08725886
# DeltaFP_AB_err: 45.8351946891
# 
#                  dt_AB
1.694453317114829360e+01
2.052459984793990699e+01
2.061604148874281606e+01
1.999738253927849385e+01
2.495935477471548580e+01
2.050670175419922003e+01
1.765368640716674520e+01
2.214577077226819668e+01
2.398151554400327257e+01
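
The header carries the lens-specific quantities (Q, DeltaFP_AB and its uncertainty) and the data block carries the posterior time delay samples. For orientation, here is a minimal sketch of how a file in this format could be read back, assuming only the header conventions shown above; the parse_tdc2_file helper is hypothetical and not part of SLCosmo, which reads these files itself via read_in_time_delay_samples_from below.

import numpy as np

def parse_tdc2_file(filename):
    """Hypothetical reader: pull the '# key: value' header entries (Q,
    DeltaFP_AB, DeltaFP_AB_err) and the dt_AB posterior samples."""
    header = {}
    with open(filename) as f:
        for line in f:
            if line.startswith('#') and ':' in line:
                key, _, value = line.lstrip('# ').partition(':')
                try:
                    header[key.strip()] = float(value)
                except ValueError:
                    pass  # skip notes lines that happen to contain a colon
    dt_samples = np.loadtxt(filename)  # '#' header lines are skipped
    return header, dt_samples

header, dt_AB = parse_tdc2_file('roundtrip_time_delays_13.txt')
print(header['Q'], header['DeltaFP_AB'], dt_AB.mean())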

The TDC2 Probabilistic Model

The figure shows the probabilistic graphical model, derived in the draft TDC2 design paper, of a toy cosmological parameter inference given a set of strong lens measurements. We assume that we have, from high resolution imaging and spectroscopic follow-up, lens models that predict the Fermat potential differences $\Delta \phi$ between the image pairs in each of $K$ lens systems.

We treat the time delays for each lens as parameters whose posterior PDF (given the light curve data) has been characterized as a set of MCMC samples, the output from a Good Team's analysis of the system's light curves.

With this PDF in hand, our task is to infer $H_0$ given the measured Fermat potential differences $\Delta \phi$. The relationship between the dimensionless Fermat potential difference and the (true) time delay is

$\Delta t = \frac{D_{\Delta t}}{c} \Delta \phi$
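
As a quick numerical illustration (using the Q and DeltaFP_AB values from the example file header above, and an assumed fiducial value of $H_0 = 72$ km/s/Mpc), this relation reproduces the roughly 20-day delays seen in the posterior samples:

c = 3.00e5                    # km/s, as in the file header
H0 = 72.0                     # km/s/Mpc, assumed fiducial value for illustration
Q = 372470.809596             # km/s, from the example file header
DeltaFP_AB = 1173.08725886    # day km/s/Mpc, from the example file header

D_dt = Q / H0                       # time delay distance, in Mpc
dt_AB = (D_dt / c) * DeltaFP_AB     # predicted time delay, in days
print(dt_AB)                        # about 20 days, consistent with the samples above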

The posterior PDF for $H_0$ given a set of $K$ lenses is

${\rm Pr}(H_0|\{\Delta \phi_k \}) \propto {\rm Pr}(H_0) \prod_{k=1}^{K} {\rm Pr}(\Delta \phi_k|H_0),$

where the likelihood function components are the results of marginalizing over the time delays and time delay distances:

${\rm Pr}(\Delta \phi_k|H_0) = \iint d\Delta t_k \, dD_{\Delta t}^k \, {\rm Pr}(\Delta \phi_k|\Delta t_k, D_{\Delta t}^k) \, {\rm Pr}(\Delta t_k) \, \delta(D_{\Delta t}^k - Q(z_d^k,z_s^k)/H_0).$

Here, we have used the definition of time delay distance in terms of the function $Q$, which is a constant for each lens if the only variable cosmological parameter is $H_0$:

$D_{\Delta t} = \frac{Q(z_d,z_s)}{H_0}$

The integral over the time delay distance is straightforward, while the integral over the time delay can be approximated as a sum over the $N_{\rm s}$ posterior samples provided by the Good Team:

${\rm Pr}(\Delta \phi_k|H_0) \approx \frac{1}{N_{\rm s}} \sum_{j=1}^{N_{\rm s}} {\rm Pr}(\Delta \phi_k|\Delta t_k^j, Q(z_d^k,z_s^k)/H_0).$

We now have, then, the function that we need to evaluate at each sample time delay $\Delta t_k$:

${\rm Pr}(\Delta \phi_k|\Delta t_k, Q(z_d^k,z_s^k)/H_0) = \frac{1}{\sqrt{2\pi\sigma_k^2}} \exp\left[ -\frac{\chi^2}{2} \right]$

where $\chi^2 = \frac{ \left( \Delta \phi_k - \frac{c \Delta t_k H_0}{Q(z_d^k,z_s^k)} \right)^2 }{\sigma_k^2}.$
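
A minimal numpy sketch of this per-lens calculation might look like the following. It is illustrative only: the function name and argument list are assumptions, not SLCosmo's API, whose actual implementation lives in compute_the_joint_log_likelihood.

import numpy as np

def log_likelihood_one_lens(H0, dt_samples, Q, DeltaFP, DeltaFP_err, c=3.00e5):
    """Monte Carlo estimate of log Pr(DeltaFP | H0) for one lens: average the
    Gaussian likelihood over the posterior time delay samples dt_samples."""
    DeltaFP_pred = c * H0 * dt_samples / Q               # one prediction per sample
    chisq = ((DeltaFP - DeltaFP_pred) / DeltaFP_err)**2
    L = np.exp(-0.5 * chisq) / np.sqrt(2.0 * np.pi * DeltaFP_err**2)
    return np.log(np.mean(L))

The joint log likelihood for $H_0$ is then the sum of these terms over all $K$ lenses, plus the log of the prior.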

Inference of $H_0$

To emulate a real analysis, we'll read back in all the data files we just wrote out.


In [5]:
Lets.read_in_time_delay_samples_from('roundtrip*.txt')


Read in 100 lenses, quad fraction = 0.13

In [6]:
Lets.draw_some_prior_samples(Npriorsamples=1000)
Lets.compute_the_joint_log_likelihood()


Wallclock time spent characterizing posterior =  11.0 seconds

In [7]:
Lets.report_the_inferred_cosmological_parameters()


H0 = 73.0 +/- 0.8 km/s/Mpc
True H0 = 72.3 km/s/Mpc

In [8]:
Lets.plot_the_inferred_cosmological_parameters()


Plot saved to H0posterior.pdf

Conclusions

  • The true value of $H_0$ seems to be recovered accurately
  • The calculation is somewhat slow, due to the for loops over the lenses and the prior samples. For 100 lenses, each with 100 posterior samples, it takes about 11 seconds for a reasonably sized set of 1000 prior samples. This is OK for analysis runs, but it could be a bit slow for investigating the number of prior samples to draw, etc.; a vectorized likelihood evaluation, sketched below, could help.
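
One way to speed this up, assuming per-lens quantities like those in the earlier sketch, would be to broadcast over the array of prior $H_0$ samples with numpy instead of looping over them. Again this is only an illustrative sketch, not SLCosmo's implementation:

import numpy as np

def log_likelihood_one_lens_grid(H0_samples, dt_samples, Q, DeltaFP, DeltaFP_err,
                                 c=3.00e5):
    """Evaluate log Pr(DeltaFP | H0) for an array of prior H0 samples at once,
    replacing the explicit loop over prior samples with broadcasting."""
    # shape (Npriorsamples, Nposteriorsamples)
    DeltaFP_pred = c * np.outer(H0_samples, dt_samples) / Q
    chisq = ((DeltaFP - DeltaFP_pred) / DeltaFP_err)**2
    L = np.exp(-0.5 * chisq) / np.sqrt(2.0 * np.pi * DeltaFP_err**2)
    return np.log(L.mean(axis=1))    # one log likelihood per H0 sample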

In [9]:
import os, glob
for trash in glob.glob('roundtrip*.txt'):
    os.remove(trash)