Light Curves with COS


COS employs photon-counting detectors in both the FUV and NUV channels, and in TIME-TAG mode it records the arrival time of each detected photon. This time information is used in various steps throughout the standard calibration pipeline, but it can also be used to extract information about the time-variability of the target. This tutorial demonstrates that capability using an open-source package that extracts light curves from COS corrtag files.
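The core idea is simple: a time-tagged dataset is just a list of photon arrival times, and binning those times yields a light curve. As a minimal sketch (with fake, randomly generated event times standing in for a real corrtag TIME column):

```python
import numpy as np

#-- Hypothetical illustration: fake 10,000 photon arrival times
#-- (seconds since exposure start) over a 100 s exposure.
rng = np.random.default_rng(0)
event_times = rng.uniform(0, 100, size=10_000)

#-- Bin the events; the bin width plays the role of the
#-- 'stepsize' parameter used later in this tutorial.
step = 1.0
bins = np.arange(0, 100 + step, step)
counts, edges = np.histogram(event_times, bins=bins)

print(len(counts))   # 100 one-second bins
print(counts.sum())  # all 10,000 events are accounted for
```

The lightcurve package does essentially this, plus the bookkeeping needed to screen events by wavelength and apply the COS calibration.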

Installation


The lightcurve package used in this tutorial is neither in the standard library nor packaged with any distributions. If you would like to run the examples shown here, you will need to install it on your system.

You can install via pip:

pip install lightcurve

Or via source:

git clone https://github.com/justincely/lightcurve.git
cd lightcurve
python setup.py install

To learn more, you can visit the lightcurve webpage: http://justincely.github.io/lightcurve/


In [11]:
#-- Necessary library imports for the tutorial
import os
import glob
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from astropy.io import fits

#-- import of the lightcurve package
import lightcurve

In [12]:
#-- iPython notebook 'magic' command to make plots appear within the 
#-- browser instead of opening a separate plotting window.
%matplotlib inline

#-- Increase the default fontsize for the plots
mpl.rcParams['font.size'] = 16

In [13]:
#-- timefilter and CalCOS need reference files to function.
#-- this will set the lref environment variable if you don't have it set.
#--
#-- This is the correct location for users internal to STScI, you may have to change it.
os.environ['lref'] = '/grp/hst/cdbs/lref/'

Dispersed Spectrum vs. Light Curve


To jump right in, let's look at the difference between the standard calibrated spectrum and the light curve for a particular dataset of the cataclysmic variable IY-UMa. This dataset was obtained by PI Boris Gaensicke in program 12870.

Using the lightcurve package


Opening a file with the lightcurve package creates a lightcurve object that contains the extracted data, along with an interface to re-extract with different parameters.

Attributes of the object include:

  • background (extracted background in counts)
  • bins (time bins for each datapoint)
  • counts (raw extracted counts)
  • error (Poisson error estimate)
  • flux (flux-calibrated data)
  • flux_error (error estimate on the flux)
  • mjd (Modified Julian Date of each timestep)
  • net ((counts - background)/times)

While some of the methods are:

  • extract (re-extract from the data)
  • write (Write the object to a FITS file)
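The relationship between these attributes can be sketched with hypothetical numbers (the package's exact error formula may differ; the simple Poisson estimate below is an assumption):

```python
import numpy as np

#-- Hypothetical sketch of how 'net' relates to the raw quantities:
#-- background-subtracted counts divided by the bin width.
counts = np.array([100., 110., 95.])
background = np.array([10., 12., 9.])
step = 1.0  # seconds per bin

net = (counts - background) / step

#-- One common Poisson error estimate on the difference of two
#-- counted quantities (an assumption, not necessarily what the
#-- package computes internally).
error = np.sqrt(counts + background)

print(net)
```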

Open the data with the following. With no optional arguments, the stepsize defaults to 1 second, and all data from the entire wavelength range across both the FUVA and FUVB segments will be used (regardless of which datafile is specified as input).

lc_object = lightcurve.open(filename='rootname_corrtag_a.fits')

And access the methods or attributes:

print(lc_object.counts, lc_object.times)
lc_object.write('rootname_curve.fits')

Below we will simply open the file with no optional arguments and plot the data.


In [14]:
fig = plt.figure(figsize=(13, 7))
ax1 = fig.add_subplot(2, 1, 1)
ax2 = fig.add_subplot(2, 1, 2)

#-- Open the spectrum and plot
hdu = fits.open('data/lc1va0zgq_x1d.fits')
ax1.plot(hdu[1].data['wavelength'].ravel(), hdu[1].data['flux'].ravel())
ax1.set_ylim(0, 3e-14)
ax1.set_xlim(1000, 2200)
ax1.set_xlabel(r'Wavelength $(\AA)$')
ax1.set_ylabel('Flux')

#-- Extract the lightcurve and plot
lc = lightcurve.open(filename='data/lc1va0zgq_corrtag_a.fits')
ax2.plot(lc.times, lc.counts, 'o')
ax2.set_ylabel('Counts')
ax2.set_xlabel('Time (s)')

ax1.set_title('Time Variable Spectrum of IY-UMa')
fig.tight_layout()  
fig.savefig('spectrum_vs_lightcurve.pdf', bbox_inches='tight')


Extracting from: ['data/lc1va0zgq_corrtag_a.fits']

Optional parameters


Two of the most common parameters to change are the stepsize and the wavelength coverage. Changing the stepsize is useful for increasing the signal-to-noise or finding features on a particular timescale, while restricting the wavelength coverage can reveal trends with wavelength or cut out known bad data such as airglow lines.

You can specify the optional parameters either when initially opening the file or in a subsequent re-extraction. Note that the filename does not need to be specified in re-extraction.

lc_object = lightcurve.open(filename='rootname_corrtag_a.fits', step=5, wlim=(1300, 1400))

lc_object.extract(step=10, wlim=(1350, 1450))

Below we will display the 1-D extracted spectra from 4 sequential datasets of IY-UMa, and then extract and plot light curves with two different sets of parameters.


In [15]:
fig = plt.figure(figsize=(15, 5))
ax = fig.add_subplot(1, 1, 1)

datasets = glob.glob('data/lc1v?????_x1d.fits')
datasets.sort()

for item in datasets:
    hdu = fits.open(item)
    ax.plot(hdu[1].data['wavelength'].ravel(), hdu[1].data['flux'].ravel())
    
ax.set_ylim(0, 3e-14)
ax.set_xlim(1000, 2200)
ax.set_xlabel(r'Wavelength $(\AA)$')
ax.set_ylabel('Flux')
ax.set_title('Dispersed spectra from 4 sequential datasets')


Out[15]:
<matplotlib.text.Text at 0x106b339d0>

Below, the set of extractions displayed in the top panel uses data across the entire wavelength range (with the exception of Lyman-alpha, which is excluded by default due to continuous airglow contamination).

The extractions in the bottom panel instead restrict the extraction to the 1600-1800 Angstrom range, thereby excluding the airglow from 1302-1306 Angstroms as well as the very low signal-to-noise data past 1800 Angstroms. This has a considerable effect on a few of the datasets, which indicates that some airglow contamination had been present.
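The wavelength screening performed by `wlim` amounts to a boolean cut on each event's assigned wavelength. A minimal sketch with fake event wavelengths (the real package reads these from the corrtag WAVELENGTH column):

```python
import numpy as np

#-- Hypothetical illustration: fake wavelengths for 5,000 events
#-- spread across the FUV range.
rng = np.random.default_rng(1)
wavelengths = rng.uniform(1200, 2000, size=5000)

#-- Keep only events in 1600-1800 A, dropping the 1302-1306 A
#-- airglow and the noisy data past 1800 A in one cut.
mask = (wavelengths >= 1600) & (wavelengths < 1800)
print(mask.sum(), 'of', len(wavelengths), 'events survive the cut')
```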


In [16]:
fig, (ax1, ax2) = plt.subplots(2, figsize=(15, 7), sharex=True)

datasets = glob.glob('data/lc1v?????_corrtag_a*')
datasets.sort()

for item in datasets:
    #-- Minimal wavelength screening
    lc = lightcurve.open(filename=item, step=10)
    ax1.plot(lc.mjd, lc.counts, 'o')
    
    #-- Specifically an area with no airglow
    lc.extract(wlim=(1600, 1800), step=10)
    ax2.plot(lc.mjd, lc.counts, 'o')
    
ax1.set_xlabel('MJD')
ax1.set_ylabel('Counts')

ax2.set_xlabel('MJD')
ax2.set_ylabel('Counts')

ax1.set_title('Lightcurves with different wavelength ranges')
    
fig.subplots_adjust(hspace=0)
fig.savefig('eclipse.pdf', bbox_inches='tight')


Extracting from: ['data/lc1va0yeq_corrtag_a.fits']
Extracting from: ['data/lc1va0yjq_corrtag_a.fits']
Extracting from: ['data/lc1va0zcq_corrtag_a.fits']
Extracting from: ['data/lc1va0zgq_corrtag_a.fits']

Here again we vary a few parameters for a single dataset to demonstrate the effect of changing the stepsize and wavelength coverage.


In [17]:
dataset = 'data/lc1va0zgq_corrtag_a.fits'

fig, (ax1, ax2, ax3) = plt.subplots(3, figsize=(10,15), sharex=True)

lc = lightcurve.open(filename=dataset, step=1)
ax1.plot(lc.times, lc.counts, 'o', label='Stepsize=1s')
ax1.legend(shadow=True, numpoints=1, loc='best', fontsize=14)
ax1.set_ylabel('Counts')


#-- Re-extract with different stepsize
lc.extract(step=15)
ax2.plot(lc.times, lc.counts, 'o', label='Stepsize=15s')   
ax2.legend(shadow=True, numpoints=1, loc='best', fontsize=14)
ax2.set_ylabel('Counts')

#-- A select wavelength or two   
lc.extract(step=15, wlim=(1400, 1500))
ax3.plot(lc.times, lc.counts, 'o', label=r'$1400\AA - 1500\AA$')

lc.extract(step=15, wlim=(1500, 1600))
ax3.plot(lc.times, lc.counts, 'o', label=r'$1500\AA - 1600\AA$')

ax3.legend(shadow=True, numpoints=1, loc='best', fontsize=14)
ax3.set_ylabel('Counts')
ax3.set_xlabel('Time (s)')

ax1.set_title('Varying Parameters')
fig.subplots_adjust(hspace=0)
fig.savefig('eclipse_things.pdf', bbox_inches='tight')


Extracting from: ['data/lc1va0zgq_corrtag_a.fits']

Writing data to file


When you are happy with your extraction, you can write it out to a FITS file using the built-in write method, which makes it easy to save the data and use it with other tasks.


In [18]:
lc.write('data/optimal_lightcurve.fits')

In [19]:
hdu = fits.open('data/optimal_lightcurve.fits')
hdu.info()


Filename: data/optimal_lightcurve.fits
No.    Name         Type      Cards   Dimensions   Format
0    PRIMARY     PrimaryHDU     170   ()              
1                BinTableHDU     38   89R x 10C    [D, D, D, D, D, D, D, D, D, D]   
