In [1]:
from __future__ import print_function, division, absolute_import

Challenges of Streaming Data:

Building an ANTARES-like Pipeline for Data Management and Discovery

Version 0.1


By AA Miller 2017 Apr 10

Edited by Gautham Narayan, 2017 Apr 26

As we just saw in Gautham's lecture - LSST will produce an unprecedented volume of time-domain information for the astronomical sky. $>37$ trillion individual photometric measurements will be recorded. While the vast, vast majority of these measurements will simply confirm the status quo, some will represent rarities that have never been seen before (e.g., LSST may be the first telescope to discover the electromagnetic counterpart to a LIGO gravitational wave event), which the community will need to know about in ~real time.

Storing, filtering, and serving these data is going to be a huge challenge. ANTARES, as detailed by Gautham, is one proposed solution to this challenge. In this exercise you will build a miniature version of ANTARES, which will require the application of several of the lessons from earlier this week. Many of the difficult, and essential, steps necessary for ANTARES will be skipped here as they are too time consuming or beyond the scope of what we have previously covered. We will point out these challenges as we come across them.


In [2]:
import numpy as np
import scipy.stats as spstat
import pandas as pd
import matplotlib.pyplot as plt

%matplotlib notebook

Problem 1) Light Curve Data

We begin by ignoring the streaming aspect of the problem (we will come back to that later) and instead we will work with full light curves. The collection of light curves has been curated by Gautham and, like LSST, it features objects of different types covering a large range in brightness, with observations in multiple filters taken at different cadences.

As the focus of this exercise is the construction of a data management pipeline, we have already created a Python class to read in the data and store light curves as objects. The data are stored in flat text files with the following format:

t pb flux dflux
56254.160000 i 6.530000 4.920000
56254.172000 z 4.113000 4.018000
56258.125000 g 5.077000 10.620000
56258.141000 r 6.963000 5.060000
. . . .
. . . .
. . . .

and the files are named FAKE0XX.dat, where XX is a running index from 01 to 99.

Problem 1a

Read in the data for the first light curve file and plot the $g'$ light curve for that source.


In [3]:
# execute this cell

lc = pd.read_csv('training_set_for_LSST_DSFP/FAKE001.dat', delim_whitespace=True, comment = '#')

plt.errorbar(np.array(lc['t'].ix[lc['pb'] == 'g']), 
             np.array(lc['flux'].ix[lc['pb'] == 'g']), 
             np.array(lc['dflux'].ix[lc['pb'] == 'g']), fmt = 'o', color = 'green')
plt.xlabel('MJD')
plt.ylabel('flux')


Out[3]:
<matplotlib.text.Text at 0x115339c90>

As we have many light curve files (in principle as many as 37 billion...), we will define a light curve class to ease our handling of the data.

Problem 1b

Fix the ANTARESlc class definition below.

Hint - the only purpose of this problem is to make sure you actually read each line of code below, it is not intended to be difficult.


In [4]:
class ANTARESlc():
    '''Light curve object for NOAO formatted data'''
    
    def __init__(self, filename):
        '''Read in light curve data'''
        DFlc = pd.read_csv(filename, delim_whitespace=True, comment = '#')
        self.DFlc = DFlc
        self.filename = filename
        
    def plot_multicolor_lc(self):
        '''Plot the 4 band light curve'''
        fig, ax = plt.subplots()
        g = ax.errorbar(self.DFlc['t'].ix[self.DFlc['pb'] == 'g'], 
                     self.DFlc['flux'].ix[self.DFlc['pb'] == 'g'],
                     self.DFlc['dflux'].ix[self.DFlc['pb'] == 'g'],
                     fmt = 'o', color = '#78A5A3', label = r"$g'$")
        r = ax.errorbar(self.DFlc['t'].ix[self.DFlc['pb'] == 'r'], 
                     self.DFlc['flux'].ix[self.DFlc['pb'] == 'r'],
                     self.DFlc['dflux'].ix[self.DFlc['pb'] == 'r'],
                     fmt = 'o', color = '#CE5A57', label = r"$r'$")
        i = ax.errorbar(self.DFlc['t'].ix[self.DFlc['pb'] == 'i'], 
                     self.DFlc['flux'].ix[self.DFlc['pb'] == 'i'],
                     self.DFlc['dflux'].ix[self.DFlc['pb'] == 'i'],
                     fmt = 'o', color = '#E1B16A', label = r"$i'$")
        z = ax.errorbar(self.DFlc['t'].ix[self.DFlc['pb'] == 'z'], 
                     self.DFlc['flux'].ix[self.DFlc['pb'] == 'z'],
                     self.DFlc['dflux'].ix[self.DFlc['pb'] == 'z'],
                     fmt = 'o', color = '#444C5C', label = r"$z'$")
        ax.legend(fancybox = True)
        ax.set_xlabel(r"$\mathrm{MJD}$")
        ax.set_ylabel(r"$\mathrm{flux}$")

Problem 1c

Confirm the corrections made in 1b by plotting the multiband light curve for the source FAKE010.


In [5]:
lc = ANTARESlc('training_set_for_LSST_DSFP/FAKE010.dat')

lc.plot_multicolor_lc()


One thing that we brushed over previously is that the brightness measurements have units of flux, rather than the traditional magnitudes. The reason for this is that LSST will measure flux variations via image differencing, which, for some sources in some filters, will result in measurements of negative flux. (You may have already noticed this in 1a.) Statistically there is nothing wrong with such a measurement, but it is impossible to convert a negative flux into a magnitude. Thus we will use flux measurements throughout this exercise. [Aside - if you are bored during the next break, I'd be happy to rant about why we should have ditched the magnitude system years ago.]

Using flux measurements will allow us to make unbiased measurements of the statistical distributions of the variations of the sources we care about.
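To see concretely why a negative flux cannot be converted to a magnitude, here is a tiny illustration (the flux values below are made up):

import numpy as np

fluxes = np.array([12.3, 0.5, -4.2])   # hypothetical difference-image fluxes
with np.errstate(invalid='ignore'):     # silence the warning from log10 of a negative number
    mags = -2.5*np.log10(fluxes)
print(mags)                             # the negative flux yields nan - there is no valid magnitude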

Problem 1d

What is FAKE010, the source plotted above?

Hint 1 - if you have no idea that's fine, move on.

Hint 2 - ask Szymon or Tomas...

Solution 1d

FAKE010 is a transient, as can be seen by the rapid rise followed by a gradual decline in the light curve. In this particular case, we can further guess that FAKE010 is a Type Ia supernova due to the secondary maxima in the $i'$ and $z'$ light curves. These secondary peaks are not present in any other known type of transient.

Problem 1e

To get a better sense of the data, plot the multiband light curves for sources FAKE060 and FAKE073.


In [6]:
lc60 = ANTARESlc("training_set_for_LSST_DSFP/FAKE060.dat")
lc60.plot_multicolor_lc()

lc73 = ANTARESlc("training_set_for_LSST_DSFP/FAKE073.dat")
lc73.plot_multicolor_lc()


Problem 2) Data Preparation

While we could create a database table that includes every single photometric measurement made by LSST, this ~37 trillion row table would be enormous without providing a lot of added value beyond the raw flux measurements [while this table is necessary, alternative tables may prove more useful]. Furthermore, extracting individual light curves from such a database would be slow. Instead, we are going to develop summary statistics for every source, which will make it easier to select individual sources and develop classifiers to identify objects of interest.

Below we will redefine the ANTARESlc class to include additional methods so we can (eventually) store summary statistics in a database table. In the interest of time, we limit the summary statistics to a relatively small list, all of which have been shown to be useful for classification (see Richards et al. 2011 for further details). The statistics that we include (for now) are:

  1. Std -- the standard deviation of the flux measurements
  2. Amp -- the amplitude of flux deviations
  3. MAD -- the median absolute deviation of the flux measurements
  4. beyond1std -- the fraction of flux measurements beyond 1 standard deviation
  5. the mean $g' - r'$, $r' - i'$, and $i' - z'$ colors

Problem 2a

Complete the mean_colors method in the ANTARESlc class. Feel free to use the other methods as a template for your work.

Hint/food for thought - if a source is observed in different filters but the observations are not simultaneous (or quasi-simultaneous), what is the meaning of a "mean color"?

Solution to food for thought - in this case we simply want you to take the mean flux in each filter and create a statistic that is $-2.5 \log \frac{\langle f_X \rangle}{\langle f_{Y} \rangle}$, where $\langle f_{Y} \rangle$ is the mean flux in band $Y$, while $\langle f_X \rangle$ is the mean flux in band $X$, which can be $g', r', i', z'$. Note that our use of image-difference flux measurements, which can be negative, means you'll need to add some form of special-case handling if $\langle f_X \rangle$ or $\langle f_Y \rangle$ is negative. In these cases set the color to -999.


In [7]:
class ANTARESlc():
    '''Light curve object for NOAO formatted data'''
    
    def __init__(self, filename):
        '''Read in light curve data'''
        DFlc = pd.read_csv(filename, delim_whitespace=True, comment = '#')
        self.DFlc = DFlc
        self.filename = filename
        
    def plot_multicolor_lc(self):
        '''Plot the 4 band light curve'''
        fig, ax = plt.subplots()
        g = ax.errorbar(self.DFlc['t'].ix[self.DFlc['pb'] == 'g'], 
                     self.DFlc['flux'].ix[self.DFlc['pb'] == 'g'],
                     self.DFlc['dflux'].ix[self.DFlc['pb'] == 'g'],
                     fmt = 'o', color = '#78A5A3', label = r"$g'$")
        r = ax.errorbar(self.DFlc['t'].ix[self.DFlc['pb'] == 'r'], 
                     self.DFlc['flux'].ix[self.DFlc['pb'] == 'r'],
                     self.DFlc['dflux'].ix[self.DFlc['pb'] == 'r'],
                     fmt = 'o', color = '#CE5A57', label = r"$r'$")
        i = ax.errorbar(self.DFlc['t'].ix[self.DFlc['pb'] == 'i'], 
                     self.DFlc['flux'].ix[self.DFlc['pb'] == 'i'],
                     self.DFlc['dflux'].ix[self.DFlc['pb'] == 'i'],
                     fmt = 'o', color = '#E1B16A', label = r"$i'$")
        z = ax.errorbar(self.DFlc['t'].ix[self.DFlc['pb'] == 'z'], 
                     self.DFlc['flux'].ix[self.DFlc['pb'] == 'z'],
                     self.DFlc['dflux'].ix[self.DFlc['pb'] == 'z'],
                     fmt = 'o', color = '#444C5C', label = r"$z'$")
        ax.legend(fancybox = True)
        ax.set_xlabel(r"$\mathrm{MJD}$")
        ax.set_ylabel(r"$\mathrm{flux}$")
        
    def filter_flux(self):
        '''Store individual passband fluxes as object attributes'''
        
        self.gFlux = self.DFlc['flux'].ix[self.DFlc['pb'] == 'g']
        self.gFluxUnc = self.DFlc['dflux'].ix[self.DFlc['pb'] == 'g']

        self.rFlux = self.DFlc['flux'].ix[self.DFlc['pb'] == 'r']
        self.rFluxUnc = self.DFlc['dflux'].ix[self.DFlc['pb'] == 'r']

        self.iFlux = self.DFlc['flux'].ix[self.DFlc['pb'] == 'i']
        self.iFluxUnc = self.DFlc['dflux'].ix[self.DFlc['pb'] == 'i']

        self.zFlux = self.DFlc['flux'].ix[self.DFlc['pb'] == 'z']
        self.zFluxUnc = self.DFlc['dflux'].ix[self.DFlc['pb'] == 'z']

    def weighted_mean_flux(self):
        '''Measure (SNR weighted) mean flux in griz'''

        if not hasattr(self, 'gFlux'):
            self.filter_flux()
            
        weighted_mean = lambda flux, dflux: np.sum(flux*(flux/dflux)**2)/np.sum((flux/dflux)**2)
        
        self.gMean = weighted_mean(self.gFlux, self.gFluxUnc)
        self.rMean = weighted_mean(self.rFlux, self.rFluxUnc)
        self.iMean = weighted_mean(self.iFlux, self.iFluxUnc)
        self.zMean = weighted_mean(self.zFlux, self.zFluxUnc)

    def normalized_flux_std(self):
        '''Measure standard deviation of flux in griz'''
        
        if not hasattr(self, 'gFlux'):
            self.filter_flux()

        if not hasattr(self, 'gMean'):
            self.weighted_mean_flux()
        
        normalized_flux_std = lambda flux, wMeanFlux: np.std(flux/wMeanFlux, ddof = 1) 
        
        self.gStd = normalized_flux_std(self.gFlux, self.gMean)
        self.rStd = normalized_flux_std(self.rFlux, self.rMean)
        self.iStd = normalized_flux_std(self.iFlux, self.iMean)
        self.zStd = normalized_flux_std(self.zFlux, self.zMean)

    def normalized_amplitude(self):
        '''Measure the normalized amplitude of variations in griz'''

        if not hasattr(self, 'gFlux'):
            self.filter_flux()

        if not hasattr(self, 'gMean'):
            self.weighted_mean_flux()

        normalized_amplitude = lambda flux, wMeanFlux: (np.max(flux) - np.min(flux))/wMeanFlux
        
        self.gAmp = normalized_amplitude(self.gFlux, self.gMean)
        self.rAmp = normalized_amplitude(self.rFlux, self.rMean)
        self.iAmp = normalized_amplitude(self.iFlux, self.iMean)
        self.zAmp = normalized_amplitude(self.zFlux, self.zMean)

    def normalized_MAD(self):
        '''Measure normalized Median Absolute Deviation (MAD) in griz'''

        if not hasattr(self, 'gFlux'):
            self.filter_flux()

        if not hasattr(self, 'gMean'):
            self.weighted_mean_flux()

        normalized_MAD = lambda flux, wMeanFlux: np.median(np.abs((flux - np.median(flux))/wMeanFlux))
        
        self.gMAD = normalized_MAD(self.gFlux, self.gMean)
        self.rMAD = normalized_MAD(self.rFlux, self.rMean)
        self.iMAD = normalized_MAD(self.iFlux, self.iMean)
        self.zMAD = normalized_MAD(self.zFlux, self.zMean)

    def normalized_beyond_1std(self):
        '''Measure fraction of flux measurements beyond 1 std'''

        if not hasattr(self, 'gFlux'):
            self.filter_flux()

        if not hasattr(self, 'gMean'):
            self.weighted_mean_flux()
        
        beyond_1std = lambda flux, wMeanFlux: sum(np.abs(flux - wMeanFlux) > np.std(flux, ddof = 1))/len(flux)
        
        self.gBeyond = beyond_1std(self.gFlux, self.gMean)
        self.rBeyond = beyond_1std(self.rFlux, self.rMean)
        self.iBeyond = beyond_1std(self.iFlux, self.iMean)
        self.zBeyond = beyond_1std(self.zFlux, self.zMean)
    
    def skew(self):
        '''Measure the skew of the flux measurements'''
        if not hasattr(self, 'gFlux'):
            self.filter_flux()
            
        skew = lambda flux: spstat.skew(flux) 
        
        self.gSkew = skew(self.gFlux)
        self.rSkew = skew(self.rFlux)
        self.iSkew = skew(self.iFlux)
        self.zSkew = skew(self.zFlux)
        
    def mean_colors(self):
        '''Measure the mean g-r, r-i, and i-z colors'''
        
        if not hasattr(self, 'gFlux'):
            self.filter_flux()

        if not hasattr(self, 'gMean'):
            self.weighted_mean_flux()
        
        self.gMinusR = -2.5*np.log10(self.gMean/self.rMean) if self.gMean > 0 and self.rMean > 0 else -999
        self.rMinusI = -2.5*np.log10(self.rMean/self.iMean) if self.rMean > 0 and self.iMean > 0 else -999
        self.iMinusZ = -2.5*np.log10(self.iMean/self.zMean) if self.iMean > 0 and self.zMean > 0 else -999

Problem 2b

Confirm your solution to 2a by measuring the mean colors of source FAKE010. Does your measurement make sense given the plot you made in 1c?


In [8]:
lc = ANTARESlc('training_set_for_LSST_DSFP/FAKE010.dat')

lc.filter_flux()
lc.weighted_mean_flux()
lc.mean_colors()

print("The g'-r', r'-i', and 'i-z' colors are: {:.3f}, {:.3f}, and {:.3f}, respectively.". format(lc.gMinusR, lc.rMinusI, lc.iMinusZ))


The g'-r', r'-i', and i'-z' colors are: 1.048, -0.191, and -0.193, respectively.

Problem 3) Store the sources in a database

Building (and managing) a database from scratch is a challenging task. For (very) small projects one solution to this problem is to use SQLite, which is a self-contained, publicly available SQL engine. One of the primary advantages of SQLite is that no server setup is required, unlike other popular tools such as PostgreSQL and MySQL. In fact, SQLite is already integrated with Python, so everything we want to do (create a database, add tables, load data, write queries, etc.) can be done within Python.

Without diving too deep into the details, here are situations where SQLite has advantages and disadvantages according to their own documentation:

Advantages

  1. Situations where expert human support is not needed
  2. For basic data analysis (SQLite is easy to install and manage for new projects)
  3. Education and training

Disadvantages

  1. Client/Server applications (SQLite does not behave well if multiple systems need to access the db at the same time)
  2. Very large data sets (SQLite stores entire db in a single disk file, other solutions can store data across multiple files/volumes)
  3. High concurrency (Only 1 writer allowed at a time for SQLite)

From the (limited) lists above, you can see that while SQLite is perfect for our application right now, if you were building an actual ANTARES-like system a more sophisticated database solution would be required.

Problem 3a

Import sqlite3 into the notebook.

Hint - sqlite3 ships with the Python standard library, so this should work out of the box; if it doesn't, your installation may be missing the underlying SQLite library (e.g., try conda install sqlite).


In [9]:
import sqlite3

Following the sqlite3 import, we must first connect to the database. If we attempt a connection to a database that does not exist, then a new database is created. Here we will create a new database file, called miniANTARES.db.
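As an aside, if you only need a throwaway scratch database (e.g., for testing), SQLite can also create one entirely in memory; a minimal sketch:

import sqlite3

scratch_conn = sqlite3.connect(":memory:")                 # lives only for this session, nothing written to disk
scratch_conn.execute("""create table scratch(x float)""")  # Connection.execute is a shortcut that creates a cursor for you
scratch_conn.close()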


In [10]:
conn = sqlite3.connect("miniANTARES.db")

We now have a database connection object, conn. To interact with the database (create tables, load data, write queries) we need a cursor object.


In [11]:
cur = conn.cursor()

Now that we have a cursor object, we can populate the database. As an example we will start by creating a table to hold all the raw photometry (though ultimately we will not use this table for analysis).

Note - there are many cursor methods capable of interacting with the database. The most common, execute, takes a single SQL command as its argument and executes that command. Other useful methods include executemany, which is useful for inserting data into the database, and executescript, which takes an SQL script as its argument and executes the script.

In many cases, as below, it will be useful to use triple quotes in order to improve the legibility of your code.
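Since executemany will appear below but executescript will not, here is a minimal sketch of the latter, using a throwaway in-memory database and made-up table names:

import sqlite3

demo_conn = sqlite3.connect(":memory:")
demo_cur = demo_conn.cursor()

# executescript runs a whole block of semicolon-separated SQL statements in a single call
demo_cur.executescript("""
    create table demoPhot(id integer primary key, flux float);
    insert into demoPhot(flux) values (6.53);
    insert into demoPhot(flux) values (4.11);
""")
print(demo_cur.execute("""select count(*) from demoPhot""").fetchone()[0])   # prints 2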


In [12]:
cur.execute("""drop table if exists rawPhot""") # drop the table if is already exists
cur.execute("""create table rawPhot(
                id integer primary key,
                objId int,
                t float, 
                pb varchar(1),
                flux float,
                dflux float) 
""")


Out[12]:
<sqlite3.Cursor at 0x112cf7d50>

Let's unpack everything that happened in these two commands. First - if the table rawPhot already exists, we drop it so we can start over from scratch. (This is useful here, but should not be adopted as general practice.)

Second - we create the new table rawPhot, which has 6 columns: id - a running index for every row in the table, objId - an ID to identify which source the row belongs to, t - the time of observation in MJD, pb - the passband of the observation, flux - the observed flux, and dflux - the uncertainty on the flux measurement. In addition to naming the columns, we also must declare their type. We have declared id as the primary key, which means this value will automatically be assigned and incremented for all data inserted into the database. We have also declared pb as a variable character of length 1, which is more useful and restrictive than simply declaring pb as text, which allows any freeform string.
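If you want to confirm the table was created as intended, SQLite can report the schema back to you; a short check:

# each row describes one column: (index, name, type, notnull, default value, primary-key flag)
cur.execute("""PRAGMA table_info(rawPhot)""")
for column in cur.fetchall():
    print(column)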

Now we need to insert the raw flux measurements into the database. To do so, we will use the ANTARESlc class that we created earlier. As an initial example, we will insert the first 3 observations from the source FAKE001.


In [13]:
filename = "training_set_for_LSST_DSFP/FAKE001.dat"
lc = ANTARESlc(filename)

objId = int(filename.split('FAKE')[1].split(".dat")[0])

cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(lc.DFlc.ix[0])))
cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(lc.DFlc.ix[1])))
cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(lc.DFlc.ix[2])))


Out[13]:
<sqlite3.Cursor at 0x112cf7d50>

There are two things to highlight above: (1) we do not specify an id for the data as this is automatically generated, and (2) the data insertion happens via a tuple. In this case, we are taking advantage of the fact that Python tuples can be concatenated:

(objId,) + tuple(lc.DFlc.ix[0])

While the above example demonstrates the insertion of a single row to the database, it is far more efficient to bulk load the data. To do so we will delete, i.e. DROP, the rawPhot table and use some pandas manipulation to load the contents of an entire file at once via executemany.


In [14]:
cur.execute("""drop table if exists rawPhot""") # drop the table if it already exists
cur.execute("""create table rawPhot(
                id integer primary key,
                objId int,
                t float, 
                pb varchar(1),
                flux float,
                dflux float) 
""")

# next 3 lines are already in name space; repeated for clarity
filename = "training_set_for_LSST_DSFP/FAKE001.dat"
lc = ANTARESlc(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])

data = [(objId,) + tuple(x) for x in lc.DFlc.values] # array of tuples

cur.executemany("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)


Out[14]:
<sqlite3.Cursor at 0x112cf7d50>

Problem 3b

Load all of the raw photometric observations into the rawPhot table in the database.

Hint - you can use glob to select all of the files being loaded.

Hint 2 - you have already loaded the data from FAKE001 into the table.


In [15]:
import glob

filenames = glob.glob("training_set_for_LSST_DSFP/FAKE*.dat")

for filename in filenames[1:]: 
    lc = ANTARESlc(filename)
    objId = int(filename.split('FAKE')[1].split(".dat")[0])

    data = [(objId,) + tuple(x) for x in lc.DFlc.values] # array of tuples

    cur.executemany("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)

Problem 3c

To ensure the data have been loaded properly, select the $r'$ light curve for source FAKE010 from the rawPhot table and plot the results. Does it match the plot from 1c?


In [16]:
cur.execute("""select t, flux, dflux 
               from rawPhot
               where objId = 61 and pb = 'g'""")

data = cur.fetchall()
data = np.array(data)

fig, ax = plt.subplots()
ax.errorbar(data[:,0], data[:,1], data[:,2], fmt = 'o', color = '#CE5A57')
ax.set_xlabel(r"$\mathrm{MJD}$")
ax.set_ylabel(r"$\mathrm{flux}$")


Out[16]:
<matplotlib.text.Text at 0x11569dfd0>

Now that we have loaded the raw observations, we need to create a new table to store summary statistics for each object. This table will include everything we've added to the ANTARESlc class.


In [17]:
cur.execute("""drop table if exists lcFeats""") # drop the table if it already exists
cur.execute("""create table lcFeats(
                id integer primary key,
                objId int,
                gStd float,
                rStd float,
                iStd float,
                zStd float,
                gAmp float, 
                rAmp float, 
                iAmp float, 
                zAmp float, 
                gMAD float,
                rMAD float,
                iMAD float,                
                zMAD float,                
                gBeyond float,
                rBeyond float,
                iBeyond float,
                zBeyond float,
                gSkew float,
                rSkew float,
                iSkew float,
                zSkew float,
                gMinusR float,
                rMinusI float,
                iMinusZ float,
                FOREIGN KEY(objId) REFERENCES rawPhot(objId)
                ) 
""")


Out[17]:
<sqlite3.Cursor at 0x112cf7d50>

The above procedure should look familiar, with one exception: the addition of the foreign key in the lcFeats table. The inclusion of the foreign key ensures a connected relationship between rawPhot and lcFeats. In brief, a row cannot be inserted into lcFeats unless a corresponding row, i.e. objId, exists in rawPhot. Additionally, rows in rawPhot cannot be deleted if there are dependent rows in lcFeats.
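One caveat: by default SQLite parses foreign key clauses but does not actually enforce them; enforcement must be switched on for each connection (and the referenced parent column generally needs a unique index for the check to succeed). A minimal sketch of turning it on:

# foreign key enforcement is off by default in SQLite - enable it for this connection
cur.execute("""PRAGMA foreign_keys = ON""")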

Problem 3d

Calculate features for every source in rawPhot and insert those features into the lcFeats table.


In [18]:
for filename in filenames:
    lc = ANTARESlc(filename)
    objId = int(filename.split('FAKE')[1].split(".dat")[0])

    lc.filter_flux()
    lc.weighted_mean_flux()
    lc.normalized_flux_std()
    lc.normalized_amplitude()
    lc.normalized_MAD()
    lc.normalized_beyond_1std()
    lc.skew()
    lc.mean_colors()
    
    feats = (objId, lc.gStd, lc.rStd, lc.iStd, lc.zStd, 
            lc.gAmp,  lc.rAmp,  lc.iAmp,  lc.zAmp,  
            lc.gMAD, lc.rMAD, lc.iMAD, lc.zMAD, 
            lc.gBeyond, lc.rBeyond, lc.iBeyond, lc.zBeyond,
            lc.gSkew, lc.rSkew, lc.iSkew, lc.zSkew, 
            lc.gMinusR, lc.rMinusI, lc.iMinusZ)

    cur.execute("""insert into lcFeats(objId, 
                                       gStd, rStd, iStd, zStd, 
                                       gAmp,  rAmp,  iAmp,  zAmp,  
                                       gMAD, rMAD, iMAD, zMAD, 
                                       gBeyond, rBeyond, iBeyond, zBeyond,
                                       gSkew, rSkew, iSkew, zSkew,
                                       gMinusR, rMinusI, iMinusZ) values {}""".format(feats))

Problem 3e

Confirm that the data loaded correctly by counting the number of sources with gAmp > 2.

How many sources have gMinusR = -999?

Hint - you should find 9 and 2, respectively.


In [19]:
cur.execute("""select count(*) from lcFeats where gAmp > 2""")

nAmp2 = cur.fetchone()[0]

cur.execute("""select count(*) from lcFeats where gMinusR = -999""")
nNoColor = cur.fetchone()[0]

print("There are {:d} sources with gAmp > 2".format(nAmp2))
print("There are {:d} sources with no measured i' - z' color".format(nNoColor))


There are 9 sources with gAmp > 2
There are 2 sources with no measured g' - r' color

Finally, we close by committing the changes we made to the database.

Note that the queries above worked without a commit because they ran on the same connection; committing is what actually persists the inserts (and any future updates) to the database file.
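As an aside, the connection object can also be used as a context manager, which commits automatically if the enclosed block succeeds (and rolls back if it raises an exception); a minimal sketch with a harmless, made-up update:

# auto-commit on success, rollback on error; the update below is a no-op chosen purely for illustration
with conn:
    conn.execute("""update lcFeats set gStd = gStd where objId = 1""")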


In [20]:
conn.commit()

mini Challenge Problem

If there is less than 45 min to go, please skip this part.

Earlier it was claimed that bulk loading the data is faster than loading it line by line. For this problem, prove this assertion: use %%timeit to "profile" the two different options (bulk loading with executemany versus loading one photometric measurement at a time via a for loop).

Hint - to avoid corruption of your current working database, miniANTARES.db, create a new temporary database for the purpose of running this test. Also be careful with the names of your connection and cursor variables.


In [21]:
%%timeit
# bulk load solution

tmp_conn = sqlite3.connect("tmp1.db")
tmp_cur = tmp_conn.cursor()

tmp_cur.execute("""drop table if exists rawPhot""") # drop the table if it already exists
tmp_cur.execute("""create table rawPhot(
                id integer primary key,
                objId int,
                t float, 
                pb varchar(1),
                flux float,
                dflux float) 
                """)

for filename in filenames: 
    lc = ANTARESlc(filename)
    objId = int(filename.split('FAKE')[1].split(".dat")[0])

    data = [(objId,) + tuple(x) for x in lc.DFlc.values] # array of tuples

    tmp_cur.executemany("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)


1 loop, best of 3: 301 ms per loop

In [22]:
%%timeit
# row-by-row load solution

tmp_conn = sqlite3.connect("tmp1.db")
tmp_cur = tmp_conn.cursor()

tmp_cur.execute("""drop table if exists rawPhot""") # drop the table if it already exists
tmp_cur.execute("""create table rawPhot(
                id integer primary key,
                objId int,
                t float, 
                pb varchar(1),
                flux float,
                dflux float) 
                """)

for filename in filenames: 
    lc = ANTARESlc(filename)
    objId = int(filename.split('FAKE')[1].split(".dat")[0])

    for obs in lc.DFlc.values:
        tmp_cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(obs)))


1 loop, best of 3: 1.18 s per loop

Problem 4) Build a Classification Model

One of the primary goals for ANTARES is to separate the wheat from the chaff; in other words, given that ~10 million alerts will be issued by LSST on a nightly basis, what is the single (or 10, or 100) most interesting alert?

Here we will build on the skills developed during the DSFP Session 2 to construct a machine-learning model to classify new light curves.

Fortunately - the data that has already been loaded to miniANTARES.db is a suitable training set for the classifier (we simply haven't provided you with labels just yet). Execute the cell below to add a new table to the database which includes the appropriate labels.


In [23]:
cur.execute("""drop table if exists lcLabels""") # drop the table if it already exists
cur.execute("""create table lcLabels(
               objId int,
               label int, 
               foreign key(objId) references rawPhot(objId)
               )""")

labels = np.zeros(100)
labels[20:60] = 1
labels[60:] = 2

data = np.append(np.arange(1,101)[np.newaxis].T, labels[np.newaxis].T, axis = 1)
tup_data = [tuple(x) for x in data]

cur.executemany("""insert into lcLabels(objId, label) values (?,?)""", tup_data)


Out[23]:
<sqlite3.Cursor at 0x112cf7d50>

For now - don't worry about what the labels mean (though if you inspect the light curves you may be able to figure this out...)

Problem 4a

Query the database to select features and labels for the light curves in your training set. Store the results of these queries in numpy arrays, X and y, respectively, which are suitable for the various scikit-learn machine learning algorithms.

Hint - recall that databases do not return query results in a guaranteed order unless you explicitly ask for one.

Hint 2 - recall that scikit-learn expects y to be a 1d array. You will likely need to convert a 2d array to 1d.


In [24]:
cur.execute("""select label
               from lcLabels 
               order by objId asc""")
y = np.array(cur.fetchall()).ravel()

cur.execute("""select gStd, rStd, iStd, zStd, 
                      gAmp,  rAmp,  iAmp,  zAmp,  
                      gMAD, rMAD, iMAD, zMAD, 
                      gBeyond, rBeyond, iBeyond, zBeyond,
                      gSkew, rSkew, iSkew, zSkew,
                      gMinusR, rMinusI, iMinusZ
               from lcFeats
               order by objId asc""")
X = np.array(cur.fetchall())

Problem 4b

Train a SVM model (SVC in scikit-learn) using a radial basis function (RBF) kernel with penalty parameter, $C = 1$, and kernel coefficient, $\gamma = 0.1$.

Evaluate the accuracy of the model via $k = 5$ fold cross validation.

Hint - you may find the cross_val_score module helpful.


In [25]:
from sklearn.svm import SVC
from sklearn.cross_validation import cross_val_score

cv_scores = cross_val_score(SVC(C = 1.0, gamma = 0.1, kernel = 'rbf'), X, y, cv = 5)

print("The SVM model produces a CV accuracy of {:.4f}".format(np.mean(cv_scores)))


The SVM model produces a CV accuracy of 0.8600
/Users/gnarayan/anaconda2/envs/astroconda/lib/python2.7/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
  "This module will be removed in 0.20.", DeprecationWarning)

The SVM model does a decent job of classifying the data. However - we are going to have 10 million alerts every night. Therefore, we need something that runs quickly. For most ML models the training step is slow, while predictions are (relatively) fast.

Problem 4c

Pick any other classification model from scikit-learn, and "profile" the time it takes to train that model vs. the time it takes to train an SVM model.

Is the model that you have selected faster than SVM?

Hint - you should import the model outside your timing loop as we only care about the training step in this case.


In [26]:
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier()
svm_clf = SVC(C = 1.0, gamma = 0.1, kernel = 'rbf')

In [27]:
%%timeit
# timing solution for RF model
rf_clf.fit(X,y)


10 loops, best of 3: 40.2 ms per loop

In [28]:
%%timeit
# timing solution for SVM model
svm_clf.fit(X,y)


1000 loops, best of 3: 953 µs per loop

Problem 4d

Does the model you selected perform better than the SVM model? Perform a $k = 5$ fold cross validation to determine which model provides superior accuracy.


In [29]:
cv_scores = cross_val_score(RandomForestClassifier(), X, y, cv = 5)

print("The RF model produces a CV accuracy of {:.4f}".format(np.mean(cv_scores)))


The RF model produces a CV accuracy of 0.8600

Problem 4e

Which model are you going to use in your miniANTARES? Justify your answer.

Write solution to 4e here

In this case we are going to adopt the SVM model as it is roughly a factor of 40 faster to train than RF, while providing nearly identical performance from an accuracy standpoint.

Problem 5) Class Predictions for New Sources

Now that we have developed a basic infrastructure for dealing with streaming data, we may reap the rewards of our efforts. We will use our ANTARES-like software to classify newly observed sources.

Problem 5a

Load the light curves for the new observations (found in test_set_for_LSST_DSFP) into a table in the database.

Hint - ultimately it doesn't matter much one way or another, but you may choose to keep new observations in a table separate from the training data. I'm putting them into a new testPhot table. Up to you.


In [30]:
cur.execute("""drop table if exists testPhot""") # drop the table if is already exists
cur.execute("""create table testPhot(
                id integer primary key,
                objId int,
                t float, 
                pb varchar(1),
                flux float,
                dflux float) 
""")
cur.execute("""drop table if exists testFeats""") # drop the table if it already exists
cur.execute("""create table testFeats(
                id integer primary key,
                objId int,
                gStd float,
                rStd float,
                iStd float,
                zStd float,
                gAmp float, 
                rAmp float, 
                iAmp float, 
                zAmp float, 
                gMAD float,
                rMAD float,
                iMAD float,                
                zMAD float,                
                gBeyond float,
                rBeyond float,
                iBeyond float,
                zBeyond float,
                gSkew float,
                rSkew float,
                iSkew float,
                zSkew float,
                gMinusR float,
                rMinusI float,
                iMinusZ float,
                FOREIGN KEY(objId) REFERENCES testPhot(objId)
                ) 
""")


Out[30]:
<sqlite3.Cursor at 0x112cf7d50>

In [31]:
new_obs_filenames = glob.glob("test_set_for_LSST_DSFP/FAKE*.dat")

for filename in new_obs_filenames: 
    lc = ANTARESlc(filename)
    objId = int(filename.split('FAKE')[1].split(".dat")[0])

    data = [(objId,) + tuple(x) for x in lc.DFlc.values] # array of tuples

    cur.executemany("""insert into testPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)

Problem 5b

Calculate features for the new observations and insert those features into a database table.

Hint - again, you may want to create a new table for this, up to you. I'm using the testFeats table.


In [32]:
for filename in new_obs_filenames:
    lc = ANTARESlc(filename)
    
    # simple HACK to get rid of data with too few observations (fails because std is nan with just one observation)
    if len(lc.DFlc) <= 14:
        continue
        
    objId = int(filename.split('FAKE')[1].split(".dat")[0])

    lc.filter_flux()
    lc.weighted_mean_flux()
    lc.normalized_flux_std()
    lc.normalized_amplitude()
    lc.normalized_MAD()
    lc.normalized_beyond_1std()
    lc.skew()
    lc.mean_colors()
    
    feats = (objId, lc.gStd, lc.rStd, lc.iStd, lc.zStd, 
            lc.gAmp,  lc.rAmp,  lc.iAmp,  lc.zAmp,  
            lc.gMAD, lc.rMAD, lc.iMAD, lc.zMAD, 
            lc.gBeyond, lc.rBeyond, lc.iBeyond, lc.zBeyond,
            lc.gSkew, lc.rSkew, lc.iSkew, lc.zSkew,
            lc.gMinusR, lc.rMinusI, lc.iMinusZ)

    cur.execute("""insert into testFeats(objId, 
                                       gStd, rStd, iStd, zStd, 
                                       gAmp,  rAmp,  iAmp,  zAmp,  
                                       gMAD, rMAD, iMAD, zMAD, 
                                       gBeyond, rBeyond, iBeyond, zBeyond,
                                       gSkew, rSkew, iSkew, zSkew,
                                       gMinusR, rMinusI, iMinusZ) values {}""".format(feats))

Problem 5c

Train the model that you adopted in 4e on the training set, and produce predictions for the newly observed sources.

What is the class distribution for the newly detected sources?

Hint - the training set was constructed to have a nearly uniform class distribution; that may not be the case for the actual observed distribution of sources.


In [33]:
svm_clf = SVC(C=1.0, gamma = 0.1, kernel = 'rbf').fit(X, y)

cur.execute("""select gStd, rStd, iStd, zStd, 
                      gAmp,  rAmp,  iAmp,  zAmp,  
                      gMAD, rMAD, iMAD, zMAD, 
                      gBeyond, rBeyond, iBeyond, zBeyond,
                      gSkew, rSkew, iSkew, zSkew,
                      gMinusR, rMinusI, iMinusZ
               from testFeats
               order by objId asc""")
X_new = np.array(cur.fetchall())

y_preds = svm_clf.predict(X_new)

print("""There are {:d}, {:d}, and {:d} sources 
         in classes 1, 2, 3, respectively""".format(*list(np.bincount(y_preds)))) # be careful using bincount


There are 3253, 498, and 1212 sources 
         in classes 0, 1, and 2, respectively


Problem 6) Anomaly Detection

As we learned earlier - one of the primary goals of ANTARES is to reduce the stream of 10 million alerts on any given night to the single (or 10, or 100) most interesting objects. One possible definition of "interesting" is rarity - in which case it would be useful to add some form of anomaly detection to the pipeline. scikit-learn has several different algorithms that can be used for anomaly detection. Here we will employ an isolation forest, which has many parallels to the random forest models we have previously learned about.

In brief, isolation forest builds an ensemble of decision trees where the splitting parameter in each node of the tree is selected randomly. In each tree the number of branches necessary to isolate each source is measured - outlier sources will, on average, require fewer splittings to be isolated than sources in high-density regions of the feature space. Averaging the number of branchings over many trees results in a relative ranking of the anomalousness (yes, I just made up a word) of each source.

Problem 6a

Using IsolationForest in sklearn.ensemble - determine the 10 most isolated sources in the data set.

Hint - for IsolationForest you will want to use the decision_function() method rather than predict_proba(), which is what we have previously used with sklearn.ensemble models to get relative rankings from the model.


In [34]:
from sklearn.ensemble import IsolationForest

isoF_clf = IsolationForest(n_estimators = 100)
isoF_clf.fit(X_new)
anomaly_score = isoF_clf.decision_function(X_new)

print("The 10 most anomalous sources are: {}".format(np.arange(1,5001)[np.argsort(anomaly_score)[:10]]))


The 10 most anomalous sources are: [ 485 2030 1692  553  856 1579 4116 4562 4916 4455]

Problem 6b

Plot the light curves of the 2 most anomalous sources.

Can you identify why these sources have been selected as outliers?


In [35]:
lc485 = ANTARESlc("test_set_for_LSST_DSFP/FAKE00485.dat")
lc485.plot_multicolor_lc()

lc2030 = ANTARESlc("test_set_for_LSST_DSFP/FAKE02030.dat")
lc2030.plot_multicolor_lc()


Write solution to 6b here

For source 485 - this looks like a supernova at intermediate redshift. What might be throwing it off is the outlier point. We never really made our features very robust to outliers.

For source 2030 - this is a weird, faint source with multiple unsynchronized rises and falls in different bands.

Challenge Problem) Simulate a Real ANTARES

The problem that we just completed features a key difference from the true ANTARES system - namely, all the light curves analyzed had a complete set of observations loaded into the database. One of the key challenges for LSST (and by extension ANTARES) is that the data will be streaming - new observations will be available every night, but the full light curves for all sources won't be available until the 10 yr survey is complete. In this problem, you will use the same data to simulate an LSST-like classification problem.

Assume that your training set (i.e., the first 100 sources loaded into the database) was observed prior to LSST; thus, these light curves can still be used in their entirety to train your classification models. For the test set of observations, simulate LSST by determining the min and max observation date and taking quantized 1-day steps through these light curves. On each day when there are new observations, update the feature calculations for every source that has been newly observed. Classify those sources and identify possible anomalies.

Here are some things you should think about as you build this software:

  1. Should you use the entire light curves for training-set objects when classifying sources with only a few data points?
  2. How are you going to handle objects on the first epoch when they are detected?
  3. What threshold (if any) are you going to set to notify the community about rarities that you have discovered?

Hint - Since you will be reading these light curves from the database (and not from text files) the ANTARESlc class that we previously developed will not be useful. You will (likely) either need to re-write this class to interact with the database or figure out how to massage the query results to comply with the class definitions.
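To get you started, here is one possible sketch (it assumes the testPhot table and column names defined above; the object ID and cut-off date in the usage line are just examples) of pulling a single object's photometry, as observed up to a given night, out of the database and into a DataFrame with the same columns the ANTARESlc class expects:

def get_lc_from_db(cursor, objId, mjd_max, table='testPhot'):
    '''Return a DataFrame of all observations of objId taken on or before mjd_max'''
    cursor.execute("""select t, pb, flux, dflux 
                      from {} 
                      where objId = ? and t <= ?
                      order by t asc""".format(table), (objId, mjd_max))
    return pd.DataFrame(cursor.fetchall(), columns=['t', 'pb', 'flux', 'dflux'])

# example usage: the light curve of source 10 as it would have appeared on MJD 56300
partial_lc = get_lc_from_db(cur, 10, 56300)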