This notebook shows you how to estimate a lens time delay from TDC2 light curve data using the PyCS code. For a detailed tutorial on the PyCS code itself, please see the PyCS documentation.
First, we'll import SLTimer, as well as a few other useful packages.
In [1]:
from __future__ import print_function
import os, numpy as np
try:
    from urllib.request import urlretrieve  # Python 3
except ImportError:
    from urllib import urlretrieve  # Python 2
%matplotlib inline
import sys
sys.path.append('../python')
import desc.sltimer
%load_ext autoreload
%autoreload 2
In [2]:
timer = desc.sltimer.SLTimer()
In [3]:
url = "http://www.slac.stanford.edu/~pjm/LSST/DESC/SLTimeDelayChallenge/release/tdc2/gateway/tdc2-gateway-1.txt"
timer.download(url, and_read=True, format='tdc2')
In [4]:
timer.display_light_curves(jdrange=(59500,63100))
The TDC2 light curve data files have headers that contain some lens model information: the Fermat potential differences between the image pairs. These are related to the time delays, by a cosmological distance factor $Q$ and the Hubble constant $H_0$. A broad Gaussian prior on $H_0$ will translate to an approximately Gaussian prior on the time delays. Let's draw from this prior to help initialize the time delays in our model.
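Concretely, the prior draw amounts to something like $\Delta t \approx Q \, \Delta\phi / H_0$. Here is a minimal sketch of one such draw, assuming that relation; the values of Q and delta_phi below are hypothetical placeholders standing in for numbers read from the file header, not actual TDC2 values:
# Hypothetical sketch of one prior draw; Q and delta_phi stand in for
# header values, and the dt = Q * delta_phi / H0 relation is an assumption.
Q = 400.0                          # distance factor (placeholder)
delta_phi = 10.0                   # Fermat potential difference for pair AB (placeholder)
H0 = np.random.normal(70.0, 7.0)   # one draw from a broad Gaussian H0 prior, in km/s/Mpc
dt_AB = Q * delta_phi / H0         # an approximately Gaussian-distributed delay, in days
print("Initial guess for the AB time delay:", dt_AB, "days")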
In [5]:
# Time delays all set to zero:
# timer.initialize_time_delays(method=None)
# Draw time delays from the prior, using knowledge of H0 and the lens model:
# timer.initialize_time_delays(method='H0_prior', pars=[70.0, 7.0])
# "Guess" the time delays - for testing, let's try something close to the true value:
timer.initialize_time_delays(method='guess', pars={'AB':55.0})
We're now ready to analyze this data. We'll start with it as-is, and then later try "whitening" it.
The following lines will fit a free-knot spline model to your data using the PyCS technique; the full error analysis is deferred to a later cell. Below, you can specify how the time delays will be estimated. The defaults listed below follow the PyCS tutorial. See the bottom of the page for alternative methods.
In [6]:
timer.estimate_time_delays(method='pycs', microlensing='spline', agn='spline', error=None, quietly=True)
timer.report_time_delays()
Out[6]:
In [7]:
timer.display_light_curves(jdrange=(59500,63100))
While the time delays have been estimated, we can see that the different images' light curves have not been shifted and microlensing-corrected terribly well.
In [8]:
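# The full uncertainty analysis is computationally expensive, so it is left
# commented out here; uncomment the following line to run it.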
# timer.estimate_uncertainties(n=3,npkl=5)
In the above analysis we ignored the fact that the magnitudes were measured in 6 different filters, and just used them all as if they were from the same filter. By offsetting the light curves to a common mean, we should get a set of points that look more like they were taken in one filter. This process is known as "whitening."
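SLTimer's whiten() method does this for us. The idea, roughly, is the following sketch (a hypothetical stand-alone function operating on numpy arrays; the actual SLTimer implementation may differ in detail):
def whiten_magnitudes(mags, filters):
    """Offset each filter's points onto the common mean of the whole light curve."""
    mags = np.array(mags, dtype=float)
    filters = np.asarray(filters)
    common_mean = mags.mean()
    for f in np.unique(filters):
        in_band = (filters == f)
        # Shift this filter's magnitudes so their mean matches the common mean:
        mags[in_band] += common_mean - mags[in_band].mean()
    return mags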
In [9]:
wtimer = desc.sltimer.SLTimer()
wtimer.download(url, and_read=True, format='tdc2')
In [10]:
wtimer.whiten()
The change brought about by whitening is pretty subtle: the means of each image's light curve stay the same (by design), but the scatter in each image's light curve is somewhat reduced.
In [11]:
wtimer.display_light_curves(jdrange=(59500,63100))
In [12]:
wtimer.initialize_time_delays(method='guess', pars={'AB':55.0})
wtimer.estimate_time_delays(method='pycs', microlensing='spline', agn='spline', error=None, quietly=True)
In [13]:
wtimer.display_light_curves(jdrange=(59500,63100))
In [14]:
wtimer.report_time_delays()
Out[14]:
For comparison, here are the time delays estimated from the original, unwhitened data:
In [15]:
timer.report_time_delays()
Out[15]:
Finally, let's download the truth file and see how well we did:
In [16]:
truthurl = "http://www.slac.stanford.edu/~pjm/LSST/DESC/SLTimeDelayChallenge/release/tdc2/gateway/gatewaytruth.txt"
truthfile = truthurl.split('/')[-1]
if not os.path.isfile(truthfile):
    urlretrieve(truthurl, truthfile)
d = np.loadtxt(truthfile).transpose()
truth = d[0]
print("True Time Delays:", truth[0])
The above demo shows that PyCS can be run on TDC2 light curve data through SLTimer, although the fit can struggle to shift and microlensing-correct the light curves accurately. We are currently experimenting with initializing the fit to avoid getting stuck at local maxima, by choosing time delays based on our prior knowledge of a) the lens model and b) the Hubble constant.