Using the ACA Analysis Predictions

How does the model perform when we use it to predict full 8-star catalogs? For each catalog we compute the expected number of failed stars and compare it against the number of stars that actually failed historically. Let's see what we get.
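As a quick orientation to the quantities used throughout, here is a minimal sketch of how an expected failure count and its spread could follow from per-star failure probabilities. The probabilities below are made up and the independence treatment is only illustrative; acqPredictCatalog derives these quantities from the posterior draws.

import numpy as np

# Hypothetical per-star failure probabilities for an 8-star catalog
p_fail = np.array([0.01, 0.02, 0.15, 0.05, 0.20, 0.01, 0.30, 0.10])

# Treating the acquisitions as independent Bernoulli trials, the expected
# number of failures is just the sum of the probabilities...
expected_failures = p_fail.sum()

# ...and the variance is the sum of p * (1 - p)
stddev = np.sqrt((p_fail * (1 - p_fail)).sum())

print("Expected failures: {:.3f} +/- {:.3f}".format(expected_failures, stddev))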


In [108]:
%load_ext autoreload
%autoreload 2
%matplotlib inline

import numpy as np
import acqstattools as ast
import matplotlib.pyplot as plt
import astropy as ap
from datetime import datetime as dt
from astropy.time import Time

import warnings
warnings.filterwarnings('ignore')


The autoreload extension is already loaded. To reload it, use:
  %reload_ext autoreload

Loading the Data from Files


In [109]:
# Loading the estimates from the trained model
acqstats = 'acqstats_bp6_draws_trained_2011.5_to_2014.5.pkl'
betas = ast.loadacqstats(acqstats, description=True)

#Loading historical star data from file
acq_data = np.load('data/acq_table.npy')

#Adding fields required for analysis
acq_data = ast.add_column(acq_data, 'tstart_jyear' , np.zeros(len(acq_data)))
acq_data = ast.add_column(acq_data, 'tstart_quarter' , np.zeros(len(acq_data)))
acq_data = ast.add_column(acq_data, 'mag_floor' , np.zeros(len(acq_data)))
acq_data = ast.add_column(acq_data, 'year' , np.zeros(len(acq_data)))
acq_data = ast.add_column(acq_data, 'failure' , np.zeros(len(acq_data)))
acq_data['tstart_jyear'] = Time(acq_data['tstart'], format='cxcsec').jyear
acq_data['year'] = np.floor(acq_data.tstart_jyear)
acq_data['mag_floor'] = np.floor(acq_data['mag'])
acq_data['failure'][acq_data['obc_id']=="NOID"] = 1.

#Removing Bad Stars
bad_obsids = [
# Venus
    2411,2414,6395,7306,7307,7308,7309,7311,7312,7313,7314,7315,7317,7318,7406,583,
    7310,9741,9742,9743,9744,9745,9746,9747,9749,9752,9753,9748,7316,15292,16499,
    16500,16501,16503,16504,16505,16506,16502,
# multi-obi catalogs
    943,897,60879,2042,60881,800,1900,2365,906,2010,3182,2547,380,3057,2077,60880,
    2783,1578,1561,4175,3764
]

for badid in bad_obsids:
    acq_data = acq_data[acq_data['obsid']!=badid]


Chandra Star Acquisition Analysis Results
- Method: Bayesian Binomial Probit Regression
    - Second Order Polynomial of Star Magnitude
    - Warm Pixel Fraction
    - Interactions between Mag & WPF

Train Period: 2011.5 to 2014.5
Test Period : 2011.5 to 2014.5

File Output : acqstats_bp6_draws_trained_2011.5_to_2014.5.pkl
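For context on the model being loaded: the failure probability for a single star is, schematically, a probit link applied to a linear predictor in magnitude (second order), warm pixel fraction, and their interaction. The sketch below only illustrates that functional form; the actual design matrix, any centering or scaling, and the coefficient ordering are whatever acqstattools used in training, and the beta values here are placeholders rather than the trained coefficients.

import numpy as np
from scipy.stats import norm

def probit_failure_prob(mag, warm_pix, beta):
    # Illustrative linear predictor: intercept, mag, mag^2, WPF, mag*WPF
    x = np.array([1.0, mag, mag**2, warm_pix, mag * warm_pix])
    # Probit link: failure probability is the standard normal CDF of the predictor
    return norm.cdf(np.dot(beta, x))

# Placeholder coefficients purely for illustration
beta_example = np.array([-20.0, 3.0, -0.1, 5.0, 0.2])
print(probit_failure_prob(10.3, 0.158, beta_example))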

Creating Star Catalogs for the Script from Three Years of Historical Data


In [110]:
# Creating star catalogs from historical information.
range13_14 = ast.subset_range_tstart_jyear(acq_data, 2011.0, 2014.0)
uniqueobsids = np.unique(range13_14['obsid'])

obs2013_2014 = {}
for i in uniqueobsids:
    # Script only handles star catalogs with 8 stars presently.  
    if len(range13_14[range13_14['obsid']==int(i)]) != 8:
        print "Skipped {}, Only {} Observations".format(i, len(range13_14[range13_14['obsid']==int(i)])) 
    else:
        obs_dict = {}
        for obs in range13_14[range13_14['obsid']==int(i)]:
            obs_dict.update({obs['agasc_id']:{'mag':obs['mag'],'warm_pix':obs['warm_pix'],
                                              'failure':obs['failure']}})
        obs2013_2014.update({i:obs_dict})


Skipped 13453, Only 7 Observations
Skipped 13894, Only 6 Observations
Skipped 14048, Only 7 Observations
Skipped 14269, Only 7 Observations
Skipped 14338, Only 7 Observations
Skipped 14508, Only 7 Observations
Skipped 14585, Only 7 Observations
Skipped 15323, Only 7 Observations
Skipped 15350, Only 7 Observations
Skipped 15717, Only 7 Observations
Skipped 54381, Only 7 Observations

Parsing the data and binning observations by the number of failed stars


In [111]:
#Binning the historical data
zeros = []
ones = []
twos = []
threes = []
fours = []

for obs in obs2013_2014:
    failcount = 0
    for agasc in obs2013_2014[obs]:
        failcount += obs2013_2014[obs][agasc]['failure']
    if failcount > 4:
        print failcount
    else:
        acqevent = ast.acqPredictCatalog(obs2013_2014[obs], betas, summary=False)
        if failcount == 0:
            zeros.append(acqevent)
        if failcount == 1:
            ones.append(acqevent)
        if failcount == 2:
            twos.append(acqevent)
        if failcount == 3:
            threes.append(acqevent)
        if failcount == 4:
            fours.append(acqevent)


5.0
5.0

Plotting the Expected Number of Failures for Each Realized Failure Count (0, 1, 2, 3, 4)


In [112]:
nbins = np.arange(0,8.1,0.1)

F = plt.figure()
plt.subplot(5,1,1)
p1 = plt.hist([acqevent.expectedfailures for acqevent in zeros], 
              bins=nbins, color='r', alpha=0.5, label="Zero Failed")
lgd = plt.legend()
plt.xlim((0,4))
plt.xlabel('Predicted Expected Number of Failures')
plt.subplot(5,1,2)
p1 = plt.hist([acqevent.expectedfailures for acqevent in ones], 
              bins=nbins, color='b', alpha=0.5, label="One Failed")
lgd = plt.legend()
plt.xlim((0,4))
plt.xlabel('Predicted Expected Number of Failures')
plt.subplot(5,1,3)
p1 = plt.hist([acqevent.expectedfailures for acqevent in twos], 
              bins=nbins, color='g', alpha=0.5, label="Two Failed")
lgd = plt.legend()
plt.xlim((0,4))
plt.xlabel('Predicted Expected Number of Failures')
plt.subplot(5,1,4)
p1 = plt.hist([acqevent.expectedfailures for acqevent in threes], 
              bins=nbins, color='purple', alpha=0.5, label="Three Failed")
lgd = plt.legend()
plt.xlim((0,4))
plt.xlabel('Predicted Expected Number of Failures')
plt.subplot(5,1,5)
p1 = plt.hist([acqevent.expectedfailures for acqevent in fours], 
              bins=nbins, color='orange', alpha=0.5, label="Four Failed")
lgd = plt.legend()
plt.xlim((0,4))
plt.xlabel('Predicted Expected Number of Failures')
F.set_size_inches(15,8)
plt.show()


Plotting the Standard Deviation of the Estimate for Each Realized Failure Count (0, 1, 2, 3, 4)


In [113]:
nbins = np.arange(0,8.1,0.1)

F = plt.figure()
plt.subplot(5,1,1)
p1 = plt.hist([acqevent.stddev for acqevent in zeros], 
              bins=nbins, color='r', alpha=0.5, label="Zero Failed")
lgd = plt.legend()
plt.xlim((0,2))
plt.xlabel('Predicted Standard Deviation of Failures')
plt.subplot(5,1,2)
p1 = plt.hist([acqevent.stddev for acqevent in ones], 
              bins=nbins, color='b', alpha=0.5, label="One Failed")
lgd = plt.legend()
plt.xlim((0,2))
plt.xlabel('Predicted Standard Deviation of Failures')
plt.subplot(5,1,3)
p1 = plt.hist([acqevent.stddev for acqevent in twos], 
              bins=nbins, color='g', alpha=0.5, label="Two Failed")
lgd = plt.legend()
plt.xlim((0,2))
plt.xlabel('Predicted Standard Deviation of Failures')
plt.subplot(5,1,4)
p1 = plt.hist([acqevent.stddev for acqevent in threes], 
              bins=nbins, color='purple', alpha=0.5, label="Three Failed")
lgd = plt.legend()
plt.xlim((0,2))
plt.xlabel('Predicted Standard Deviation of Failures')
plt.subplot(5,1,5)
p1 = plt.hist([acqevent.stddev for acqevent in fours], 
              bins=nbins, color='orange', alpha=0.5, label="Four Failed")
lgd = plt.legend()
plt.xlim((0,2))
plt.xlabel('Predicted Standard Deviation of Failures')
F.set_size_inches(10,6)
plt.show()


Plotting the Estimated Failures + One Standard Deviation for Each Realized Failure Count (0, 1, 2, 3, 4)


In [114]:
nbins = np.arange(0,8.1,0.1)

F = plt.figure()
plt.subplot(5,1,1)
p1 = plt.hist([acqevent.expectedfailures + acqevent.stddev for acqevent in zeros], 
              bins=nbins, color='r', alpha=0.5, label="Zero Failed")
lgd = plt.legend()
plt.xlim((0,4))
plt.xlabel('Predicted Expected Number of Failures + One Standard Deviation')
plt.subplot(5,1,2)
p1 = plt.hist([acqevent.expectedfailures + acqevent.stddev for acqevent in ones], 
              bins=nbins, color='b', alpha=0.5, label="One Failed")
lgd = plt.legend()
plt.xlim((0,4))
plt.xlabel('Predicted Expected Number of Failures + One Standard Deviation')
plt.subplot(5,1,3)
p1 = plt.hist([acqevent.expectedfailures + acqevent.stddev for acqevent in twos], 
              bins=nbins, color='g', alpha=0.5, label="Two Failed")
lgd = plt.legend()
plt.xlim((0,4))
plt.xlabel('Predicted Expected Number of Failures + One Standard Deviation')
plt.subplot(5,1,4)
p1 = plt.hist([acqevent.expectedfailures + acqevent.stddev for acqevent in threes], 
              bins=nbins, color='purple', alpha=0.5, label="Three Failed")
lgd = plt.legend()
plt.xlim((0,4))
plt.xlabel('Predicted Expected Number of Failures + One Standard Deviation')
plt.subplot(5,1,5)
p1 = plt.hist([acqevent.expectedfailures + acqevent.stddev for acqevent in fours], 
              bins=nbins, color='orange', alpha=0.5, label="Four Failed")
lgd = plt.legend()
plt.xlim((0,4))
plt.xlabel('Predicted Expected Number of Failures + One Standard Deviation')

F.set_size_inches(10,6)
plt.show()


Comparing the Proportion of Each Realized Failure Count Across the Predicted Expected Failures


In [115]:
zero_expfailed, binedges = np.histogram([acqevent.expectedfailures for acqevent in zeros], bins=np.arange(0,4.1,0.1))
ones_expfailed, binedges = np.histogram([acqevent.expectedfailures for acqevent in ones], bins=np.arange(0,4.1,0.1))
twos_expfailed, binedges = np.histogram([acqevent.expectedfailures for acqevent in twos], bins=np.arange(0,4.1,0.1))
threes_expfailed, binedges = np.histogram([acqevent.expectedfailures for acqevent in threes], bins=np.arange(0,4.1,0.1))
fours_expfailed, binedges = np.histogram([acqevent.expectedfailures for acqevent in fours], bins=np.arange(0,4.1,0.1))

totalcounts = zero_expfailed + ones_expfailed + twos_expfailed + threes_expfailed + fours_expfailed

zero_percent = zero_expfailed / totalcounts.astype(float)
one_percent = ones_expfailed / totalcounts.astype(float)
two_percent = twos_expfailed / totalcounts.astype(float)
three_percent = threes_expfailed / totalcounts.astype(float)
four_percent = fours_expfailed / totalcounts.astype(float)

from matplotlib import gridspec

F = plt.figure(figsize=(12, 6))
gs = gridspec.GridSpec(2, 1, height_ratios=[3, 2]) 
ax0 = plt.subplot(gs[0])
ax0.bar(binedges[:-1],zero_percent, width=0.1, alpha=0.5, color='b')
ax0.bar(binedges[:-1],one_percent, width=0.1, alpha=0.5, color='r', bottom=zero_percent)
ax0.bar(binedges[:-1],two_percent, width=0.1, alpha=0.5, color='g', bottom=zero_percent + one_percent)
ax0.bar(binedges[:-1],three_percent, width=0.1, alpha=0.5, color='purple', bottom=zero_percent + one_percent+ two_percent)
ax0.bar(binedges[:-1],four_percent, width=0.1, alpha=0.5, color='orange', bottom= zero_percent + one_percent+ two_percent+three_percent)
plt.xlabel('Predicted Expected Number of Failures')
plt.ylabel('Proportion Failed')
plt.xlim(0,3.5)
plt.legend(('Zero Failed', 'One Failed', 'Two Failed',
            'Three Failed', 'Four Failed'))
plt.title('Assessing Model Performance with Historical Data - Expected Number of Failures')
ax1 = plt.subplot(gs[1])
ax1.plot(binedges[1:]-0.05, totalcounts)
ax1.set_yscale('log')
plt.xlim((0,3.5))
plt.ylabel('Total Observed Counts (log scale)')
plt.xlabel('Predicted Expected Number of Failures')
plt.show()


Comparing the Proportion of Each Realized Failure Count Across the Predicted Expected Failures + One Standard Deviation


In [116]:
zero_expfailed, binedges = np.histogram([acqevent.expectedfailures + acqevent.stddev for acqevent in zeros], bins=np.arange(0,4.1,0.1))
ones_expfailed, binedges = np.histogram([acqevent.expectedfailures + acqevent.stddev for acqevent in ones], bins=np.arange(0,4.1,0.1))
twos_expfailed, binedges = np.histogram([acqevent.expectedfailures + acqevent.stddev for acqevent in twos], bins=np.arange(0,4.1,0.1))
threes_expfailed, binedges = np.histogram([acqevent.expectedfailures + acqevent.stddev for acqevent in threes], bins=np.arange(0,4.1,0.1))
fours_expfailed, binedges = np.histogram([acqevent.expectedfailures + acqevent.stddev for acqevent in fours], bins=np.arange(0,4.1,0.1))

totalcounts = zero_expfailed + ones_expfailed + twos_expfailed + threes_expfailed + fours_expfailed

zero_percent = zero_expfailed / totalcounts.astype(float)
one_percent = ones_expfailed / totalcounts.astype(float)
two_percent = twos_expfailed / totalcounts.astype(float)
three_percent = threes_expfailed / totalcounts.astype(float)
four_percent = fours_expfailed / totalcounts.astype(float)

from matplotlib import gridspec

F = plt.figure(figsize=(12, 6))
gs = gridspec.GridSpec(2, 1, height_ratios=[3, 2]) 
ax0 = plt.subplot(gs[0])
ax0.bar(binedges[:-1],zero_percent, width=0.1, alpha=0.5, color='b')
ax0.bar(binedges[:-1],one_percent, width=0.1, alpha=0.5, color='r', bottom=zero_percent)
ax0.bar(binedges[:-1],two_percent, width=0.1, alpha=0.5, color='g', bottom=zero_percent + one_percent)
ax0.bar(binedges[:-1],three_percent, width=0.1, alpha=0.5, color='purple', bottom=zero_percent + one_percent+ two_percent)
ax0.bar(binedges[:-1],four_percent, width=0.1, alpha=0.5, color='orange', bottom= zero_percent + one_percent+ two_percent+three_percent)
plt.xlabel('Predicted Expected Number of Failures + One Standard Deviation')
plt.ylabel('Proportion Failed')
plt.xlim(0,4.5)
plt.legend(('Zero Failed', 'One Failed', 'Two Failed',
            'Three Failed', 'Four Failed'))
plt.title('Assessing Model Performance with Historical Data - Expected Number of Failures + One Standard Deviation ')
ax1 = plt.subplot(gs[1])
ax1.plot(binedges[1:]-0.05, totalcounts)
ax1.set_yscale('log')
plt.xlim((0,4.5))
plt.ylabel('Total Observed Counts (log scale)')
plt.xlabel('Predicted Expected Number of Failures + One Standard Deviation')
plt.show()


Testing the Predictions on Individual Star Catalogs


In [117]:
starcat = {'43648032':{'mag':10.3237543106, 'warm_pix':0.157814292736},
            '43649080':{'mag':6.99063110352, 'warm_pix':0.157814292736},
            '43650048':{'mag':10.0143651962, 'warm_pix':0.157814292736},
            '43650856':{'mag':9.42395305634, 'warm_pix':0.157814292736},
            '43652384':{'mag':10.1900663376, 'warm_pix':0.157814292736},
            '43656176':{'mag':8.27156066895, 'warm_pix':0.157814292736},
            '43656696':{'mag':6.34437942505, 'warm_pix':0.157814292736},
            '43656184':{'mag':10.3632602692, 'warm_pix':0.157814292736}}
test = ast.acqPredictCatalog(starcat, betas)
test.starpredictions[0].summary()


Expected Number of Failures: 1.681 +/- 0.682967162516

Star Acquisition Probability Tables:
------------------------------------------------------------------

At Least Acquiring, At Most Failing [Lower Bound, Upper Bound]:
------------------------------------------------------------------
8 Stars Acquire, 0 failing: 0.11377728 	[0.08690856, 0.14489314]
7 Stars Acquire, 1 failing: 0.44503136 	[0.38072044, 0.50858038]
6 Stars Acquire, 2 failing: 0.79782781 	[0.74483146, 0.84200881]
5 Stars Acquire, 3 failing: 0.96439189 	[0.94654788, 0.97656349]
4 Stars Acquire, 4 failing: 0.99774019 	[0.99542268, 0.99892962]
3 Stars Acquire, 5 failing: 0.99994822 	[0.99984251, 0.9999866 ]
2 Stars Acquire, 6 failing: 0.99999963 	[0.99999824, 0.99999995]
1 Stars Acquire, 7 failing: 1.0        	[0.99999999, 1.0       ]
0 Stars Acquire, 8 failing: 1.0        	[1.0       , 1.0       ]
                

[Figure: Probability Distribution Function for Star Catalog]
Prediction Summaries for ID 43656176:
-----------------------------------------------------------------------------
Star Magnitude                        :  8.27156066895
Warm Pixel Fraction                   :  0.157814292736
Estimated Mean Probability of Failure :  0.00421879020452
95% Credible Interval  [Lower, Upper] : [0.00293215318968, 0.00576770675171]

Summary Histogram Plot:
-----------------------------------------------------------------------------
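The cumulative table above ("N Stars Acquire, k failing") is the CDF of the total failure count, which for independent stars is a Poisson-binomial distribution over the eight per-star failure probabilities. A minimal sketch of that convolution, with made-up probabilities (the bracketed bounds in the real output come from repeating this calculation over the posterior draws):

import numpy as np

def failure_count_pmf(p_fail):
    # Distribution of the number of failed stars given independent
    # per-star failure probabilities (Poisson-binomial via convolution)
    pmf = np.array([1.0])
    for p in p_fail:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

# Illustrative probabilities only, not the model's output
pmf = failure_count_pmf([0.05, 0.01, 0.30, 0.20, 0.25, 0.02, 0.01, 0.30])
cdf = np.cumsum(pmf)
for k, prob in enumerate(cdf):
    print("{} Stars Acquire, {} failing: {:.8f}".format(8 - k, k, prob))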


In [118]:
# ObsID

starcatGood = {"525609504":{'mag':9.241, 'warm_pix':0.0803082109881},
           "525735456":{'mag':7.206, 'warm_pix':0.0803082109881},
           "525736400":{'mag':9.146, 'warm_pix':0.0803082109881},
           "525732232":{'mag':9.117, 'warm_pix':0.0803082109881},
           "525732528":{'mag':9.318, 'warm_pix':0.0803082109881},
           "525731944":{'mag':9.478, 'warm_pix':0.0803082109881},
           "525734296":{'mag':9.484, 'warm_pix':0.0803082109881},
           "525739240":{'mag':9.558, 'warm_pix':0.0803082109881}}
test = ast.acqPredictCatalog(starcatGood, betas)


Expected Number of Failures: 0.1345 +/- 0.335837804114

Star Acquisition Probability Tables:
------------------------------------------------------------------

At Least Acquiring, At Most Failing [Lower Bound, Upper Bound]:
------------------------------------------------------------------
8 Stars Acquire, 0 failing: 0.87297717 	[0.85823081, 0.88694275]
7 Stars Acquire, 1 failing: 0.99277757 	[0.99091738, 0.99433233]
6 Stars Acquire, 2 failing: 0.99976942 	[0.9996714 , 0.99984145]
5 Stars Acquire, 3 failing: 0.99999553 	[0.99999274, 0.99999732]
4 Stars Acquire, 4 failing: 0.99999995 	[0.9999999 , 0.99999997]
3 Stars Acquire, 5 failing: 1.0        	[1.0       , 1.0       ]
2 Stars Acquire, 6 failing: 1.0        	[1.0       , 1.0       ]
1 Stars Acquire, 7 failing: 1.0        	[1.0       , 1.0       ]
0 Stars Acquire, 8 failing: 1.0        	[1.0       , 1.0       ]
                

[Figure: Probability Distribution Function for Star Catalog]

In [119]:
starcat = {'813832200':{'mag':9.61997318268, 'warm_pix':0.146424291198, 'obc_id':"ID"},
'813833224':{'mag':9.67224121094, 'warm_pix':0.146424291198, 'obc_id':"ID"},
'813836688':{'mag':8.68381690979, 'warm_pix':0.146424291198, 'obc_id':"ID"},
'813836992':{'mag':10.4075469971, 'warm_pix':0.146424291198, 'obc_id':"NOID"},
'813957656':{'mag':9.94153213501, 'warm_pix':0.146424291198, 'obc_id':"NOID"},
'813834008':{'mag':10.3945388794, 'warm_pix':0.146424291198, 'obc_id':"ID"},
'813837240':{'mag':10.4958143234, 'warm_pix':0.146424291198, 'obc_id':"NOID"},
'813960160':{'mag':10.7895717621, 'warm_pix':0.146424291198, 'obc_id':"ID"}}
test = ast.acqPredictCatalog(starcat, betas)
# for star in starcat:
#     ast.acqPredict1(starcat[star]['mag'], starcat[star]['warm_pix'], betas, agasc=int(star)).summary()


Expected Number of Failures: 2.659 +/- 0.556078859127

Star Acquisition Probability Tables:
------------------------------------------------------------------

At Least Acquiring, At Most Failing [Lower Bound, Upper Bound]:
------------------------------------------------------------------
8 Stars Acquire, 0 failing: 0.017516806 	[0.011081239, 0.026690747]
7 Stars Acquire, 1 failing: 0.14340559 	[0.10793632, 0.18604528]
6 Stars Acquire, 2 failing: 0.44794667 	[0.38319995, 0.51486194]
5 Stars Acquire, 3 failing: 0.78122306 	[0.73159008, 0.82590739]
4 Stars Acquire, 4 failing: 0.95505776 	[0.93907488, 0.96777893]
3 Stars Acquire, 5 failing: 0.99570762 	[0.9935494 , 0.99723505]
2 Stars Acquire, 6 failing: 0.99983862 	[0.99972878, 0.99990745]
1 Stars Acquire, 7 failing: 0.99999897 	[0.9999979 , 0.99999952]
0 Stars Acquire, 8 failing: 1.0        	[1.0       , 1.0       ]
                

[Figure: Probability Distribution Function for Star Catalog]

In [120]:
starcat = {
'257557024':{'mag':8.13523864746, 'warm_pix':0.133249026193, 'obc_id':'ID'},
'257557392':{'mag':10.4494104385, 'warm_pix':0.133249026193, 'obc_id':'ID'},
'257558960':{'mag':8.00120258331, 'warm_pix':0.133249026193, 'obc_id':'ID'},
'257560064':{'mag':8.50957202911, 'warm_pix':0.133249026193, 'obc_id':'ID'},
'257562816':{'mag':9.9642906189, 'warm_pix':0.133249026193, 'obc_id':'NOID'},
'257563488':{'mag':10.1029891968, 'warm_pix':0.133249026193, 'obc_id':'ID'},
'257563520':{'mag':9.97522926331, 'warm_pix':0.133249026193, 'obc_id':'ID'},
'257565080':{'mag':8.83695888519, 'warm_pix':0.133249026193, 'obc_id':'ID'}}
test = ast.acqPredictCatalog(starcat, betas)


Expected Number of Failures: 1.02 +/- 0.745631325143

Star Acquisition Probability Tables:
------------------------------------------------------------------

At Least Acquiring, At Most Failing [Lower Bound, Upper Bound]:
------------------------------------------------------------------
8 Stars Acquire, 0 failing: 0.29323156 	[0.2658015 , 0.32269129]
7 Stars Acquire, 1 failing: 0.7371987  	[0.70907675, 0.76480252]
6 Stars Acquire, 2 failing: 0.95272801 	[0.94357175, 0.9609464 ]
5 Stars Acquire, 3 failing: 0.99642014 	[0.99532392, 0.99731107]
4 Stars Acquire, 4 failing: 0.99993436 	[0.99989843, 0.99995922]
3 Stars Acquire, 5 failing: 0.99999953 	[0.99999911, 0.99999976]
2 Stars Acquire, 6 failing: 1.0        	[1.0       , 1.0       ]
1 Stars Acquire, 7 failing: 1.0        	[1.0       , 1.0       ]
0 Stars Acquire, 8 failing: 1.0        	[1.0       , 1.0       ]
                

[Figure: Probability Distribution Function for Star Catalog]

In [121]:
starcat = {
'1197749776':{'mag':9.82180118561, 'warm_pix':0.146037858242, 'obc_id':'ID'},
'1198191400':{'mag':8.35087585449, 'warm_pix':0.146037858242, 'obc_id':'ID'},
'1198192176':{'mag':9.49861240387, 'warm_pix':0.146037858242, 'obc_id':'ID'},
'1198192408':{'mag':9.44103050232, 'warm_pix':0.146037858242, 'obc_id':'ID'},
'1197740448':{'mag':10.0938825607, 'warm_pix':0.146037858242, 'obc_id':'NOID'},
'1197750848':{'mag':10.1972503662, 'warm_pix':0.146037858242, 'obc_id':'ID'},
'1198192088':{'mag':9.18758773804, 'warm_pix':0.146037858242, 'obc_id':'ID'}
}
test = ast.acqPredictCatalog(starcat, betas)


Star Catalogs need 8 Stars...

...or so I've been told.
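If a driver script should tolerate short catalogs rather than stop here, a simple length check before calling the predictor works. predict_if_complete below is a hypothetical helper for this notebook, not part of acqstattools:

def predict_if_complete(starcat, betas, required=8):
    # Only run the catalog prediction when exactly `required` stars are present
    if len(starcat) != required:
        print("Skipping catalog: {} stars, need {}".format(len(starcat), required))
        return None
    return ast.acqPredictCatalog(starcat, betas)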

