First, we implement a power-law parameterization for planet radius and period/insolation using Stan (software for Bayesian statistical inference with MCMC sampling (NUTS, HMC), http://mc-stan.org/), in order to calculate occurrence rates based on work by Chris Burke and the open-science blog post by Dan Foreman-Mackey. The original exopop.ipynb from Dan Foreman-Mackey is at http://dan.iel.fm/downloads/notebooks/exopop.ipynb, and the accompanying blog post, in which he reproduces the work done by Chris Burke in a Python Jupyter notebook, is at http://dan.iel.fm/posts/exopop/. Some parts of this expanded notebook were developed at the exoplanet population hackathon by Ravi and Joe Catanzarite. Joe Catanzarite is co-investigator on this project and shares responsibility for the code development.
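
For reference, the double power-law rate density fitted below (written out here from the population_model function that appears later in this notebook, with x standing for either period or insolation) is

$$\Gamma_\theta(x, R_p) = \frac{d^2 N}{dx\, dR_p} = e^{\ln F_0}\,\frac{(\beta+1)\,x^{\beta}}{x_{\rm max}^{\beta+1}-x_{\rm min}^{\beta+1}}\,\frac{(\alpha+1)\,R_p^{\alpha}}{R_{\rm max}^{\alpha+1}-R_{\rm min}^{\alpha+1}},$$

so that $e^{\ln F_0}$ is the expected number of planets per star in the fitted region $[x_{\rm min}, x_{\rm max}] \times [R_{\rm min}, R_{\rm max}]$.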

8/15/2016 This PyStan model version uses pre-computed completeness as input, for testing and development.

8/18/2016 We now have a working version of the power-law occurrence rate model using PyStan that agrees with the results from the Burke calculations and the DFM Python notebook blog post implementation. Now we will add radius uncertainties into the completeness calculations.

8/19/2016 Changing code back to radius/period and adding radius uncertainties.

8/22/2016 Code has a switch to select period or insolation, catalog version, stellar type.

8/28/2016 Running the current model (PyStan power-law analysis model with radius uncertainties) on a parameter space similar to the DFM GP occurrence rate project/paper: 0.75 to 2.5 Earth radii, 20 to 320 day period (ignoring the high false-alarm probability at long periods), G stars (5300 K to 6000 K), 9.1 catalog, duty cycle > 0.5, duty cycle times dataspan > 2 years (minimum data coverage of 2 years), and finite stellar mass. No log g cuts at this time.

9/9/2016 Working on adding asymmetric planet radius uncertainties.

11/13/2017 Megan Shabram picking this back up and resuming this project's GitHub track. Issues with the probabilistic Stan model have been corrected: there was a bug where the true radius and observed radius were switched. The code now needs to be run for enough iterations to show signs of convergence. I am deciding to hold off on working with planet insolation because the dependence on the semi-major axis is an issue (we need more information about stellar multiplicity for hosts and non-hosts).

11/16/2017 Did two runs, first with 10,000 iterations and then with 30,000 iterations, but signs of non-convergence persist. It does look like it is on track. The results in Part 2 below include HMC diagnostics and have not converged.

TO DO:

  • Full posteriors for the planet radii (e.g., how to model the uncertainty in planet radius).
  • GP bin height parameterization.
  • Stellar catalog uncertainties (e.g., Teff, Mass, Radius, etc.).
  • Reliability
  • Multiplicity
  • Interface with catalog 9.3, V1 occurrence rate products (e.g., completeness contours that are built from the 1-sigma depth function and numerical window function).
  • Explore other analysis models.

In [91]:
#%%=========================================================
# Initialize
# Had to put __future__ before other imports
from __future__ import division, print_function
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["font.size"] = 20
import os
import requests
import pandas as pd
from cStringIO import StringIO
import pickle

# The following commands are for use in the notebook environment
# Comment them out if this code is to be run in a python or ipython shell
%matplotlib inline
%config InlineBackend.figure_format = "retina"

In [92]:
#%%=========================================================
# This function downloads and caches a dataset from the exoplanet archive
# into a pandas frame
def get_catalog(name, basepath="data"):
    fn = os.path.join(basepath, "{0}.h5".format(name))
    if os.path.exists(fn):
        return pd.read_hdf(fn, name)
    if not os.path.exists(basepath):
        os.makedirs(basepath)
    print("Downloading {0}...".format(name))
    url = ("http://exoplanetarchive.ipac.caltech.edu/cgi-bin/nstedAPI/"
           "nph-nstedAPI?table={0}&select=*").format(name)
    r = requests.get(url)
    if r.status_code != requests.codes.ok:
        r.raise_for_status()
    fh = StringIO(r.content)
    df = pd.read_csv(fh)
    df.to_hdf(fn, name, format="t")
    return df

In [93]:
periodInsolationSwitch = 'P' #raw_input("Period or Insolation: P or I? -- ")

In [94]:
catalogSwitch = '2' # raw_input("Catalog 9.1 or 9.2: 1 or 2? -- ")

In [95]:
stellarTypeSwitch = raw_input("Stellar type: GK, A, F, G, K, or M? -- ")


Stellar type: GK, A, F, G, K, or M? -- GK

In [96]:
#%%=========================================================
# Get the stellar catalog and make selection cuts   
import numpy as np

# Choose desired stellar catalog
if(catalogSwitch == "1"):
    #!!!!! Q1-Q16 (9.1 pipeline)
    stlr = get_catalog("q1_q16_stellar")
elif(catalogSwitch == "2"):
    #!!!!! Q1-Q17 DR 24 (9.2 pipeline)
    stlr = get_catalog("q1_q17_dr24_stellar")


# Select stellar type
if(stellarTypeSwitch == 'A'):
    # !!!!! Select A dwarfs.
    m = (7300 <= stlr.teff) & (stlr.teff <= 10000)

elif(stellarTypeSwitch == 'F'):
    # !!!!! Select F dwarfs.
    m = (6000 <= stlr.teff) & (stlr.teff <= 7300)

elif(stellarTypeSwitch == 'G'):
    # !!!!! Select G dwarfs.
    m = (5300 <= stlr.teff) & (stlr.teff <= 6000)

elif(stellarTypeSwitch == 'K'):
    # !!!!! Select K dwarfs.
    m = (3900 <= stlr.teff) & (stlr.teff <= 5300)

elif(stellarTypeSwitch == 'M'):
    # !!!!! Select M dwarfs.
    m = (2400 <= stlr.teff) & (stlr.teff <= 3900)

elif(stellarTypeSwitch == 'GK'):
    # !!!!! Select GK dwarfs as used in DFM's original notebook.
    m = (4200 <= stlr.teff) & (stlr.teff <= 6100)


# stellar radius cut
# m &= stlr.radius <= 1.15

# logg cut
m &= stlr.logg > 4.0

# Only include stars with sufficient data coverage:

# Minimum dataspan of 2 years
m &= stlr.dataspan > 365.25*2.

# Minimum dutycycle of 1/2
#m &= stlr.dutycycle > 0.50
#### 0.6 in DFM original notebook:
m &= stlr.dutycycle > 0.60


# Minimum data coverage of 2 years
# comment out to compare to DFM's original notebook
#m &= stlr.dutycycle*stlr.dataspan > 365.25*2.

# Maximum RMS CDPP at the 7.5 hour pulse duration (as in DFM's original notebook)
m &= stlr.rrmscdpp07p5 <= 1000.

# Only select stars with mass estimates.
m &= np.isfinite(stlr.mass)

# put selected data into pandas data frame
base_stlr = pd.DataFrame(stlr)
stlr = pd.DataFrame(stlr[m])

print("Selected {0} targets after cuts".format(len(stlr)))


Selected 96823 targets after cuts

In [97]:
#%%=========================================================
# Plot an HR diagram of the selected targets
import matplotlib.pyplot as pl

pl.plot(base_stlr.teff, base_stlr.logg, ".k", ms=3, alpha=0.5)
pl.plot(stlr.teff, stlr.logg, ".r", ms=3, alpha=0.5)
pl.xlim(9500, 2500)
pl.ylim(5.5, 3)
pl.ylabel("$\log g$");
pl.xlabel("$T_\mathrm{eff}$");



In [98]:
#%%=========================================================

# !!!!! Get the planet catalog and make selection cuts 

if(catalogSwitch == "1"):
    # !!!!! Q1-Q16 planet catalog (9.1 pipeline)
    kois = get_catalog("q1_q16_koi")

elif(catalogSwitch == "2"):
    # !!!!! Q1-Q17 DR24 (9.2 pipeline)
    kois = get_catalog("q1_q17_dr24_koi")


# Set insolation (or period) and planet radius ranges
# for power law fit
rp_rng = (0.75, 2.5)
if(periodInsolationSwitch=="I"):
    x_rng = (0.2, 20)
elif(periodInsolationSwitch=="P"):
    x_rng = (20, 300)

# Join on the stellar list.
kois = pd.merge(kois, stlr[["kepid"]], on="kepid", how="inner")

# Only select the KOIs in the relevant part of parameter space.
m = kois.koi_pdisposition == "CANDIDATE"
base_kois = pd.DataFrame(kois[m])

# Select planets based on either insolation or period
if(periodInsolationSwitch=='I'):
    # Select based on insolation range instead of period range
    m &= (x_rng[0] <= kois.koi_insol) & (kois.koi_insol <= x_rng[1])
elif(periodInsolationSwitch=='P'):
    # Select based on period range instead of insolation range
    m &= (x_rng[0] <= kois.koi_period) & (kois.koi_period <= x_rng[1])

# Select planet radius range
m &= np.isfinite(kois.koi_prad) & (rp_rng[0] <= kois.koi_prad) & (kois.koi_prad <= rp_rng[1])

# !!!!! Only include PCs with MES > 15 for 9.2 (Q1-Q17 Dr24)
# !!!!! For 9.1 (Q1-Q16) -- all max_mult_ev seem to be NaNs
if(catalogSwitch == "2"):
    m &= kois.koi_max_mult_ev > 15

# Panda data frame for selected kois
# Note that kois now contains only the selected planet candidates
kois = pd.DataFrame(kois[m])

#print("min insolation = {0} ".format(np.min((kois.koi_insol))))
#print("max insolation = {0} ".format(np.max((kois.koi_insol))))
#print(kois.koi_insol)


print("min period = {0} ".format(np.min(kois.koi_period)))
print("max period = {0} ".format(np.max(kois.koi_period)))
print("Selected {0} KOIs after cuts".format(len(kois)))


min period = 20.0984215 
max period = 289.864456 
Selected 227 KOIs after cuts

In [99]:
#%%=========================================================
# Plot the measurements with error bars, in [period or insolation]-radius parameter space
yerr = np.abs(np.array(base_kois[["koi_prad_err2", "koi_prad_err1"]])).T

if(periodInsolationSwitch=="I"):
    pl.xlabel("insolation [Earth units]")
    pl.errorbar(base_kois.koi_insol, base_kois.koi_prad, yerr=yerr, fmt=".k", ms=4,
            capsize=0, alpha=0.3)
    pl.plot(kois.koi_insol, kois.koi_prad, ".k", ms=6)
elif(periodInsolationSwitch=="P"):
    pl.xlabel("Period [Days]")
    pl.errorbar(base_kois.koi_period, base_kois.koi_prad, yerr=yerr, fmt=".k", ms=4,
            capsize=0, alpha=0.3)
    pl.plot(kois.koi_period, kois.koi_prad, ".k", ms=6)
pl.fill_between(x_rng, [rp_rng[1], rp_rng[1]], [rp_rng[0], rp_rng[0]], color="r", alpha=0.2)
pl.xlim(x_rng + 0.1 * np.array([-1, 1]))
pl.ylim(rp_rng + 0.5 * np.array([-1, 1]))


pl.ylabel("$R_p \, [R_\oplus]$");



In [100]:
#%%=========================================================
# Completeness model helper functions (radius and orbital period)

from scipy.stats import gamma

def get_duration(period, aor, e):
    """
    Equation (1) from Burke et al. This estimates the transit
    duration in the same units as the input period. There is a
    typo in the paper (24/4 = 6 != 4).
    
    :param period: the period in any units of your choosing
    :param aor:    the dimensionless semi-major axis (scaled
                   by the stellar radius)
    :param e:      the eccentricity of the orbit
    
    """
    return 0.25 * period * np.sqrt(1 - e**2) / aor

def get_a(period, mstar, Go4pi=2945.4625385377644/(4*np.pi*np.pi)):
    """
    Compute the semi-major axis of an orbit in Solar radii.
    
    :param period: the period in days
    :param mstar:  the stellar mass in Solar masses
    
    """
    return (Go4pi*period*period*mstar) ** (1./3)

def get_delta(k, catalogSwitch, c=1.0874, s=1.0187):
    """
    Estimate the approximate expected transit depth as a function
    of radius ratio. There might be a typo here. In the paper it
    uses c + s*k but in the public code, it is c - s*k:
    https://github.com/christopherburke/KeplerPORTs
    
    :param k: the dimensionless radius ratio between the planet and
              the star
    
    """
    delta_max = k*k * (c + s*k)
    
    if(catalogSwitch == '1'):
        maxDepth = 0.84*delta_max
        # !!!!! For Q1-Q16 (9.1 pipeline), DFM used a multiplier of 0.84 instead of 1.0
        # return 0.84*delta_max
    elif(catalogSwitch == '2'):
        # !!!!! for Q1-Q17 DR24 (9.2 pipeline), use a multiplier of 1
        #return 1.0* delta_max
        maxDepth = 1.0*delta_max
    
    return maxDepth

# 14 pulse durations
cdpp_cols = [k for k in stlr.keys() if k.startswith("rrmscdpp")]
cdpp_vals = np.array([k[-4:].replace("p", ".") for k in cdpp_cols], dtype=float)

def get_mes(star, period, rp, tau, catalogSwitch, re=0.009171):
    """
    Estimate the multiple event statistic value for a transit.
    
    :param star:   a pandas row giving the stellar properties
    :param period: the period in days
    :param rp:     the planet radius in Earth radii
    :param tau:    the transit duration in hours
    
    """
    # Interpolate the RMS CDPP corresponding to the transit duration.
    cdpp = np.array(star[cdpp_cols], dtype=float)
    sigma = np.interp(tau, cdpp_vals, cdpp)

    # Compute the radius ratio and estimate the S/N.
    k = rp * re / star.radius
    snr = get_delta(k, catalogSwitch) * 1e6 / sigma
    
    # Scale by the estimated number of transits.
    ntrn = star.dataspan * star.dutycycle / period 
    return snr * np.sqrt(ntrn)

# Pre-compute and freeze the gamma function from Equation (5) in
# Burke et al.

# Parameters gamma function model of detection efficiency vs. MES curve
if(catalogSwitch == '1'):
    # !!!!! Q1-Q16 (9.1 pipeline): DFM used the parameters below (GK stars)
    pgam = gamma(4.65, loc=0., scale=0.98)
    # But the parameters for FGK are (4.35, 0, 1.05) according to astro-ph 1507.05097 (Christiansen)
    #pgam = gamma(4.35, loc=0., scale=1.05)
    
elif(catalogSwitch == '2'):
    # !!!!! Parameters for 9.2 pipeline Q1-Q17 DR24 are from Jessie Christiansen
    # Note that these parameters do not apply to M stars!
    pgam = gamma(103.0113, loc=0., scale=0.10583)

# mesthres_cols are column names for the 14 pulse durations
mesthres_cols = [k for k in stlr.keys() if k.startswith("mesthres")]

# pulse_durations_obs are the 14 pulse durations
pulse_durations_obs = np.array([k[-4:].replace("p", ".") for k in mesthres_cols],
                         dtype=float)
def get_pdet(star, aor, period, rp, e, catalogSwitch):
    """
    Equation (5) from Burke et al. Estimate the detection efficiency
    for a transit.
    
    :param star:   a pandas row giving the stellar properties
    :param aor:    the dimensionless semi-major axis (scaled
                   by the stellar radius)
    :param period: the period in days
    :param rp:     the planet radius in Earth radii
    :param e:      the orbital eccentricity
    
    """
    # mest is the interpolated MES threshold corresponding to the transit duration
    # tau is the pulse duration
    # pulse_durations_obs are the 14 pulse durations
    # np.array(star[mesthres_cols], dtype=float) are the corresponding MES thresholds
    tau = get_duration(period, aor, e) * 24.
    mes = get_mes(star, period, rp, tau, catalogSwitch)
    mest = np.interp(tau, pulse_durations_obs,
                     np.array(star[mesthres_cols],
                              dtype=float))
    x = mes - 4.1 - (mest - 7.1)
    
    # Gamma Parameters for detection efficiency vs. MES model
    if(catalogSwitch == "1"):
        # !!!!! For the 9.1 data
        # return pgam.cdf(x)
        gammaParams = pgam.cdf(x)
    elif(catalogSwitch == "2"):
        # !!!!! DFM originally used no multiplier in the equation below;
        # The multiplier of 0.78442 must be a 'plateau factor' from Jessie, for 9.2
        # for the 9.2 Q1-Q17 DR24 data, provided by Jessie
        gammaParams = 0.78442*pgam.cdf(x)
    
    return gammaParams


def get_pwin(star, period):
    """
    Equation (6) from Burke et al. Estimates the window function
    using a binomial distribution.
    
    :param star:   a pandas row giving the stellar properties
    :param period: the period in days
    
    """
    M = star.dataspan / period
    f = star.dutycycle
    omf = 1.0 - f
    pw = 1 - omf**M - M*f*omf**(M-1) - 0.5*M*(M-1)*f*f*omf**(M-2)
    msk = (pw >= 0.0) * (M >= 2.0)
    return pw * msk

def get_pgeom(aor, e):
    """
    The geometric transit probability.
    
    See e.g. Kipping (2014) for the eccentricity factor
    http://arxiv.org/abs/1408.1393
    
    :param aor: the dimensionless semi-major axis (scaled
                by the stellar radius)
    :param e:   the orbital eccentricity

    """
    return 1. / (aor * (1 - e*e)) * (aor > 1.0)
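
As a quick sanity check of the helper functions above (a sketch, not part of the original analysis): for an Earth-Sun analog we expect a scaled semi-major axis of roughly 215 and an average transit duration of about P/(4 a/R*), i.e. roughly 10 hours.

# Hedged sanity check (not in the original notebook): Earth-Sun analog values
aor_check = get_a(365.25, 1.0)                           # ~215 solar radii (about 1 AU)
tau_check = get_duration(365.25, aor_check, 0.0) * 24.   # ~10 hours, averaged over impact parameter
star0 = stlr.iloc[0]                                     # window and geometric factors for one star
print(aor_check, tau_check)
print(get_pwin(star0, 100.0), get_pgeom(get_a(100.0, star0.mass) / star0.radius, 0.0))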

In [101]:
#================================================================================================
#%% This cell contains functions and code for calculating completeness in the
#       parameter space of [ insolation or Period , planet radius ] 
    
    
# Compute completeness grid for period vs radius
def get_completeness(star, period, rp, e, catalogSwitch,with_geom=True):
    """
    A helper function to combine all the completeness effects.
    
    :param star:      a pandas row giving the stellar properties (each row is for one star)
    :param period:    the period in days 
    :param rp:        the planet radius in Earth radii
    :param e:         the orbital eccentricity
    :param with_geom: include the geometric transit probability
    
    """
    aor = get_a(period, star.mass) / star.radius
    pdet = get_pdet(star, aor, period, rp, e, catalogSwitch)
    pwin = get_pwin(star, period)
    if not with_geom:
        return pdet * pwin
    pgeom = get_pgeom(aor, e)
    return pdet * pwin * pgeom
    #print(len(rp))

# Construct grid for planet radius
rp2 = np.linspace(rp_rng[0], rp_rng[1], 61)

if(periodInsolationSwitch == "P"):

    # Period grid
    period = np.linspace(x_rng[0], x_rng[1], 57)
    xLinearGrid = period
    x_grid, rp_grid2 = np.meshgrid(period, rp2, indexing="ij")


elif(periodInsolationSwitch == "I"):

    # Construct grid for insolation
    insolation = np.linspace(x_rng[0], x_rng[1], 57)
    xLinearGrid = insolation
    x_grid, rp_grid2 = np.meshgrid(insolation, rp2, indexing="ij")

# Compute completeness grid for insolation vs radius
def get_completeness_from_insolation(star, insolation_grid, rp_grid2, e, catalogSwitch, with_geom=True):
    
    # compute the periods corresponding to an insolation grid
    insolation = insolation_grid
    period_grid = get_period_from_insolation( star , insolation )
    
    # completeness 
    completeness = get_completeness(star, period_grid, rp_grid2, e, catalogSwitch, with_geom=True)
    
    return completeness
    
# Compute insolation on the period grid, for a given star 
def get_insolation_from_period( star , period ):
    
    # Get needed stellar parameters
    teffStar = star.teff
    teffSun = 5777
    rStar = star.radius
    mStar = star.mass
    
    # Semimajor axis of planet in AU
    aPlanet = mStar**(1.0/3.0) * (period/365.25)**(2.0/3.0) 
    
    # Compute insolation
    insolation = (teffStar/teffSun)**4*(rStar/1)**2*(1/aPlanet)**2
    
    return insolation

# Compute period on the insolation grid, for a given star
def get_period_from_insolation( star , insolation ):
    
    # Get needed stellar parameters
    teffStar = star.teff
    teffSun = 5777
    rStar = star.radius
    mStar = star.mass
    
    # Get semimajor axis from star properties and insolation, using
    # insolation = ( teffStar / teffSun )**4 * ( rStar / 1)**2 * ( 1 / aPlanet )**2
    aPlanet = ( ( teffStar / teffSun )**4 * ( rStar / 1)**2 / insolation )**(0.5)
    
    # Get orbit period in days from semimajor axis of planet in AU and star properties, using
    # aPlanet = mStar**(1.0/3.0) * (period/365.25)**(2.0/3.0)
    period = 365.25 * ( aPlanet/( mStar**(1.0/3.0) ) )**(3.0/2.0)
    
    return period
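
A quick consistency check of the two conversions above (a sketch, not in the original notebook): converting a period to insolation and back for a given star should recover the same period.

# Hedged round-trip check: period -> insolation -> period for the first selected star
star0 = stlr.iloc[0]
p_test = 100.0
print(get_period_from_insolation(star0, get_insolation_from_period(star0, p_test)))  # should print ~100.0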

In [102]:
#%%==============================================================================================
# Marginalize detection contours over all selected targets
# including the geometric factor. This takes a few minutes.
new_completeness = np.zeros_like(x_grid)
if(periodInsolationSwitch == "P"):
    for _, star in stlr.iterrows():
        new_completeness += get_completeness(star, x_grid, rp_grid2, 0.0, catalogSwitch, with_geom=True)
elif(periodInsolationSwitch == "I"):
    for _, star in stlr.iterrows():
        new_completeness += get_completeness_from_insolation(star, x_grid, rp_grid2, 0.0, catalogSwitch, with_geom=True)
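
Since this marginalization over roughly 10^5 targets takes a few minutes, an optional cache (a sketch; the file name below is made up and not part of the original notebook) avoids recomputing it on every run:

# Optional caching sketch (hypothetical file name)
comp_fn = "new_completeness_{0}_{1}_{2}.npy".format(stellarTypeSwitch, catalogSwitch, periodInsolationSwitch)
# np.save(comp_fn, new_completeness)       # save after computing the loop above
# new_completeness = np.load(comp_fn)      # or, on later runs, load instead of recomputing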

In [46]:
#%%=========================================================
# Reproducing Figure 1 from Burke paper, which is the
# completeness contour (not including geometric effect) for an example target
 
# Option for new insolation completeness.  
    
# Choose the first star
star = stlr[stlr.kepid == stlr.kepid.iloc[0]].iloc[0]

# Compute the completeness map on a grid.
X, Y = x_grid, rp_grid2 

if(periodInsolationSwitch == "P"):
    Z = get_completeness(star, X, Y, 0.0, catalogSwitch, with_geom=False)

elif(periodInsolationSwitch == "I"):
    Z = get_completeness_from_insolation(star, X, Y, 0.0, catalogSwitch, with_geom=False)

# Plot with the same contour levels as the figure. Add some contours at low end of completeness.
c = pl.contour(X, Y, Z, [0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, 0.99], colors="k")
pl.clabel(c, fontsize=12, inline=1, fmt="%.4f") 

if(periodInsolationSwitch == "I"):
    pl.xlabel("insolation [Earth units]")
elif(periodInsolationSwitch == "P"):
    pl.xlabel("period [days]")

pl.ylabel("$R_p \, [R_\oplus]$")
pl.title("det. eff. for KIC {0}".format(np.min(stlr.kepid.iloc[0])));



In [47]:
#%%=========================================================
# Plot the average new_completeness contour (radius-period or radius-insolation)
# Include the geometric effect
pl.pcolor(x_grid, rp_grid2, new_completeness, cmap="Blues")
c = pl.contour(x_grid, rp_grid2, new_completeness / len(stlr),
               colors="k", alpha=0.8)
pl.clabel(c, fontsize=12, inline=1, fmt="%.3f")
pl.title("mean pipeline detection efficiency")
# 10.3.17, fixed to Orbital Period at beginning of notebook.
pl.xlabel("Orbital Period [Days]")
#pl.xlabel("insolation [Earth units]")
pl.ylabel("$R_p \, [R_\oplus]$");

print(np.max(new_completeness/len(stlr)))
print(np.min(new_completeness/len(stlr)))


0.0292425179079
5.14257221477e-06

In [48]:
#%%=========================================================
# Population inference with an independent power law model
# Using modified code above that computes 
# completeness in the parameter space of [ period or insolation , planet radius ], 

# A double power law model for the population.
''' def population_model_insolation(theta, insolation, rp):
    # Parameters 
    # lnf0 is normalization, 
    # beta is exponent of insolation power law,
    # alpha is exponent of radius power law
    lnf0, beta, alpha = theta
    v = np.exp(lnf0) * np.ones_like(insolation)
    for x, rng, n in zip((insolation, rp),
                         (insolation_rng, rp_rng),
                         (beta, alpha)):
        n1 = n + 1
        v *= x**n*n1 / (rng[1]**n1-rng[0]**n1)
    return v
'''

# A double power law model for the population. 
# xLinearGrid is the period or insolation grid.
def population_model(theta, xLinearGrid, rp):
    # Parameters 
    # lnf0 is normalization, 
    # beta is exponent of the period or insolation power law,
    # alpha is exponent of radius power law
    lnf0, beta, alpha = theta
    v = np.exp(lnf0) * np.ones_like(xLinearGrid)
    for x, rng, n in zip((xLinearGrid, rp),
                         (x_rng, rp_rng),
                         (beta, alpha)):
        n1 = n + 1
        v *= x**n*n1 / (rng[1]**n1-rng[0]**n1)
    return v


# The ln-likelihood function from DFM's blog post.
# Period or insolation is selected from the planet catalog via the prompts at the beginning of this notebook.

# Period or Insolation, and radius for planets in catalog
if(periodInsolationSwitch == "P"):
    koi_x = np.array(kois.koi_period)
elif(periodInsolationSwitch == "I"):
    koi_x = np.array(kois.koi_insol)
    
koi_rps = np.array(kois.koi_prad)

# Parameter space volume in each bin of [period or insolation, radius] grid
# Note the bins are not uniformly spaced in period or insolation
vol = np.diff(x_grid, axis=0)[:, :-1] * np.diff(rp_grid2, axis=1)[:-1, :]
def lnlike(theta):
    pop = population_model(theta, x_grid, rp_grid2) * new_completeness
    pop = 0.5 * (pop[:-1, :-1] + pop[1:, 1:])
    norm = np.sum(pop * vol)
    #print(norm)
    ll = np.sum(np.log(population_model(theta, koi_x, koi_rps))) - norm
    return ll if np.isfinite(ll) else -np.inf
# The ln-probability function is just proportional to the ln-likelihood
# since we're assuming uniform priors.
bounds = [(-5, 5), (-5, 5), (-5, 5)]
def lnprob(theta):
    # Broad uniform priors.
    for t, rng in zip(theta, bounds):
        if not rng[0] < t < rng[1]:
            return -np.inf
    return lnlike(theta)

# The negative ln-likelihood is useful for optimization.
# Optimizers want to *minimize* your function.
def nll(theta):
    ll = lnlike(theta)
    return -ll if np.isfinite(ll) else 1e15
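
For reference, lnlike above implements the inhomogeneous-Poisson-process likelihood from DFM's exopop post: with $w = (P\ \mathrm{or}\ I,\ R_p)$, the population rate $\Gamma_\theta(w)$ given by population_model, and the summed completeness $Q(w)$ given by new_completeness,

$$\ln \mathcal{L}(\theta) = \sum_k \ln \Gamma_\theta(w_k) \;-\; \int \Gamma_\theta(w)\, Q(w)\, dw \;+\; \mathrm{const},$$

where the sum runs over the detected KOIs, the integral is approximated by the mid-bin sum over the grid (the norm term), and the constant collects the per-object completeness factors, which do not depend on $\theta$ and are therefore dropped in the code.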

In [49]:
#%%=========================================================
# Maximum likelihood solution by minimizing negative log-likelihood
from scipy.optimize import minimize
# Initial guess for logF, beta, and alpha
theta_0 = np.array([1, 0.66, -1.5])
#DFM original initial values below
#theta_0 = np.array([np.log(0.75), -0.53218, -1.5])

r = minimize(nll, theta_0, method="L-BFGS-B", bounds=bounds)
print(r)
# r.x is the vector of parameters (logF, beta, and alpha) from the maximum likelihood solution


      fun: 1686.3649695565146
 hess_inv: <3x3 LbfgsInvHessProduct with dtype=float64>
      jac: array([ 0.0003638 ,  0.        , -0.00031832])
  message: 'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
     nfev: 56
      nit: 11
   status: 0
  success: True
        x: array([ 0.18685526,  0.06816811, -2.22851729])

In [89]:
#%%=========================================================
# Plot the maximum likelihood solution

# We'll reuse these functions to plot all of our results.

# This function plots the density of samples of a double power law,
# as a function of the x0 input, marginalized over the y input
def make_plot(pop_comp, x0, x, y, ax):
    # pop_comp is a 3D array of model rate (optionally times completeness) values:
    # pop_comp.shape is (number of samples) x 57 (period or insolation) x 61 (radius)
    # x0 is: bin edges of the variable plotted on the x axis (radius on the first call)
    # x is: a rebinning of x0 into a coarser grid for the plots
    # y is: bin edges of the variable marginalized over (period or insolation on the first call)
    # Mid-bin values of pop_comp along the marginalized (y) dimension
    pop = 0.5 * (pop_comp[:, 1:] + pop_comp[:, :-1])
    # on the first call, pop is (number of samples) x 56 x 61

    # Integrate over the marginalized variable, y (period or insolation on the first call)
    # np.diff(y)[None, :, None] is the parameter space interval in y
    pop = np.sum(pop * np.diff(y)[None, :, None], axis=1)
    # After the line above, pop is (number of samples) x len(x0): collapsed onto the x0 dimension
    
    # Credible regions in x: radius
    # x is used only to get the parameter space interval dx in radius for the plot
    # Note: Assumes bin spacing np.diff(x) is uniform
    a, b, c, d, e = np.percentile(pop * np.diff(x)[0], [2.5, 16, 50, 84, 97.5], axis=0)
    # print(c)

    ax.fill_between(x0, a, e, color="k", alpha=0.1, edgecolor="none")
    ax.fill_between(x0, b, d, color="k", alpha=0.3, edgecolor="none")
    # c is the median of the distribution over posterior samples
    ax.plot(x0, c, "k", lw=1)

def plot_results(samples):
    # Loop through the samples and compute the list of population models.
    samples = np.atleast_2d(samples)
    
    # print(samples.shape)
    # print(len(samples))
    # len(samples) is the length of the MCMC chain
    pop = np.empty((len(samples), x_grid.shape[0], x_grid.shape[1]))
    gamma_earth = np.empty((len(samples)))
    for i, p in enumerate(samples):
        # x_grid (period or insolation) and rp_grid2 are meshgrids, 57 x 61
        # print(p)
        # print(rp_grid2.shape)
        # print(insolation_grid.shape)
        # print(i)
        # power law planet density on period or insolation, radius grid
        pop[i] = population_model(p, x_grid, rp_grid2)
        # planet density at the point corresponding to Earth:
        # period = 365.25 days (or insolation = 1) and Rp = 1 Earth radius;
        # multiplying by ~365 (or 1) converts dN/dx to dN/dln(x) at Earth's value of x
        if(periodInsolationSwitch == 'P'):
            gamma_earth[i] = population_model(p, 365.25, 1.0) * 365.
        elif(periodInsolationSwitch == 'I'):
            gamma_earth[i] = population_model(p, 1.0, 1.0) * 1.0
        # print(gamma_earth[i])
        
    # Set up 4x4 grid of plots
    fig, axes = pl.subplots(2, 2, figsize=(10, 8))
    fig.subplots_adjust(wspace=0.4, hspace=0.4)
    
    # Histogram of planet radius over a new grid
    # Using a coarser grid for the plot
    dx = 5*(rp2[1] - rp2[0])
    x = np.arange(rp_rng[0], rp_rng[1] + dx, dx)
    n, _ = np.histogram(koi_rps, x)
    
    # Plot the predicted radius distribution against the observed radius distribution
    ax = axes[0, 0]
    
    # Predicted radius distribution
    # rp2 is of length 61
    # xLinearGrid (period or insolation) is of length 57
    # pop is (number of samples) x 57 x 61 -- period/insolation x radius
    # new_completeness[None, :, :] is 1 x 57 x 61
    make_plot(pop * new_completeness[None, :, :], rp2, x, xLinearGrid, ax)
    
    # Observed radius distribution, with Poisson errors
    ax.errorbar(0.5*(x[:-1]+x[1:]), n, yerr=np.sqrt(n), fmt=".k",
                capsize=0)
    ax.set_xlim(rp_rng[0], rp_rng[1])
    ax.set_xlabel("$R_p\,[R_\oplus]$")
    ax.set_ylabel("# of detected planets")
    
    # Plot the true radius distribution.
    ax = axes[0, 1]
    make_plot(pop, rp2, x, xLinearGrid, ax)
    ax.set_xlim(rp_rng[0], rp_rng[1])
    # ax.set_ylim(0, 0.37)
    ax.set_xlabel("$R_p\,[R_\oplus]$")
    ax.set_ylabel("$\mathrm{d}N / \mathrm{d}R$; $\Delta R = 0.25\,R_\oplus$")
    
    # Histogram of insolation or period over a new grid.
    # Using a coarser grid for the plot
    dx = 5*(xLinearGrid[1] - xLinearGrid[0])
    
    x = np.arange(x_rng[0], x_rng[1] + dx, dx)
    n, _ = np.histogram(koi_x, x)
    
    # Plot the predicted insolation distribution against the observed insolation distribution
    ax = axes[1, 0]
    
    # Predicted insolation or period distribution
    make_plot(np.swapaxes(pop * new_completeness[None, :, :], 1, 2), xLinearGrid, x, rp2, ax)
    
    # Observed insolation or period distribution, with Poisson errors
    ax.errorbar(0.5*(x[:-1]+x[1:]), n, yerr=np.sqrt(n), fmt=".k",
                capsize=0)
    # ax.set_xlim(insolation_rng[0], insolation_rng[1])
    ax.set_xlim(0.5, x_rng[1])
    # ax.set_ylim(0, 25)
    ax.set_ylim(0, 40)
    
    if(periodInsolationSwitch=='I'):
        ax.set_xlabel("insolation, [Earth units]")
    elif(periodInsolationSwitch=='P'):
        ax.set_xlabel("period, [Days]")


    ax.set_ylabel("# of detected planets")
    
    # Plot the true insolation distribution.
    ax = axes[1, 1]
    make_plot(np.swapaxes(pop, 1, 2), xLinearGrid, x, rp2, ax)
    ax.set_xlim(x_rng[0], x_rng[1])
    
    if(periodInsolationSwitch=='I'):
        ax.set_ylabel("$\mathrm{d}N / \mathrm{d}I$; $\Delta I = $")
        ax.set_xlabel("insolation, [Earth units]")

    elif(periodInsolationSwitch=='P'):
        ax.set_ylabel("$\mathrm{d}N / \mathrm{d}P$; $\Delta P = $")
        ax.set_xlabel("period, [Days]")


    return gamma_earth


  File "<ipython-input-89-ad2c62d4d384>", line 52
    gamma_earth[i] = population_model(p, 1.0, 1.0) * 1.0 365.25
                                                              ^
SyntaxError: invalid syntax

In [90]:
#%%=========================================================

# This line calls the plotting machinery above, and returns the maximum likelihood value of gamma_earth
# Note that r.x is the set of parameters [ lnf0, beta, alpha ] returned by the maximum likelihood fit
print(plot_results(r.x));

# Or, try your own values for the parameters
# thetaTry = np.array([.01, -1.2, -1.1])
# print(plot_results(thetaTry));


---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-90-c303ea14207e> in <module>()
      3 # This line calls the plotting machinery above, and returns the maximum likelihood value of gamma_earth
      4 # Note that r.x is the set of parameters [ lnf0, beta, alpha ] returned by the maximum likelihood fit
----> 5 print(plot_results(r.x));
      6 
      7 # Or, try your own values for the parameters

<ipython-input-87-c46166c97f5b> in plot_results(samples)
     39     # print(len(samples))
     40     # len(samples) is the length of the MCMC chain
---> 41     pop = np.empty((len(samples), x_grid.shape[0], x_grid.shape[1]))
     42     gamma_earth = np.empty((len(samples)))
     43     for i, p in enumerate(samples):

IndexError: tuple index out of range

In [52]:
#%%=========================================================
# Sample from the posterior probability distribution for the population parameters using emcee
import emcee

ndim, nwalkers = len(r.x), 16

# Initialize, and set trial vector as Max Likelihood solution perturbed by a small random vector
pos = [r.x + 1e-5 * np.random.randn(ndim) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob)

# specify number of samples and number of burn-in samples
nSamplesPerChain = 4000
nBurnIn = nSamplesPerChain // 4   # integer number of burn-in steps
nChains = nwalkers

# Burn in.
pos, _, _ = sampler.run_mcmc(pos, nBurnIn)
sampler.reset()

# Production.
pos, _, _ = sampler.run_mcmc(pos, nSamplesPerChain)
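
Given the convergence concerns noted at the top of this notebook, a couple of inexpensive emcee diagnostics (a sketch; the thresholds are rules of thumb, not project requirements):

# Hedged diagnostics sketch: mean acceptance fraction (rule of thumb ~0.2-0.5)
# and the integrated autocorrelation time per parameter, when it can be estimated.
print(np.mean(sampler.acceptance_fraction))
try:
    print(sampler.get_autocorr_time())
except Exception:
    print("autocorrelation time could not be estimated (chain may be too short)")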

In [53]:
#%%=========================================================
# Triangle plot of PDFs using DFMs corner package
import corner
corner.corner(sampler.flatchain, labels=[r"$\ln F$", r"$\beta$", r"$\alpha$"]);



In [54]:
#%%=========================================================
# Plot marginalized posteriors of N and dN/dR
# For Rp, marginalize over P
# For P, marginalize over Rp
# For N, plot also the data and the error bars

# Plot using only the last 4000 flattened samples
gamma_earth = plot_results(sampler.flatchain[-4000:, :])



In [55]:
#%%=========================================================
# Posterior distribution of log10 gamma_earth
pl.hist(np.log10(gamma_earth), 50, histtype="step", color="k")
pl.gca().set_yticklabels([])
pl.title("Posterior distribution function of the rate of Earth analogs")
# pl.xlabel(r"$\log_{10}\Gamma_\oplus $");

if(periodInsolationSwitch == 'I'):
    pl.xlabel(r"$\log_{10}\Gamma_\oplus = \left. \log_{10}\mathrm{d}^2 N / \mathrm{d} Insolation \, \mathrm{d} R_p \right |_\oplus$");
elif(periodInsolationSwitch == 'P'):
    pl.xlabel(r"$\log_{10}\Gamma_\oplus = \left. \log_{10}\mathrm{d}^2 N / \mathrm{d} Period \, \mathrm{d} R_p \right |_\oplus$");



In [56]:
#%%=========================================================
# Integrate the planet density over a given range in period or insolation and radius
#   to get the exoplanet occurrence rate predicted by the power law in that region

def integrated_gamma(theta,x1,x2,radius1,radius2,periodInsolationSwitch):
    
    # Break out parameters 
    lnf0, beta, alpha = theta
    
    # Phase space boundaries for our model
    if(periodInsolationSwitch == 'P'):
        x_rng = (20, 300)
    elif(periodInsolationSwitch == 'I'):
        x_rng = (0.2, 20)
            
    radius_rng = (0.75, 2.5)
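    # Note: this closed form assumes beta != -1 and alpha != -1; at exactly -1
    # the corresponding power-law integral becomes logarithmic.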
    
    # Compute exoplanet occurrence rate integrated over chosen region of [ x , radius] parameter space
    integral_over_x = (x2**(beta + 1) - x1**(beta + 1))/(x_rng[1]**(beta + 1) - x_rng[0]**(beta + 1))
    integral_over_radius = (radius2**(alpha + 1) - radius1**(alpha + 1))/(radius_rng[1]**(alpha + 1) - radius_rng[0]**(alpha + 1))
    eta = integral_over_x*integral_over_radius*np.exp(lnf0)
    
    return eta
    

#%%=========================================================
#  Compute and plot the posterior PDF for the exoplanet occurrence rate in a
#   desired region of [period or insolation, radius] parameter space
#   NOTE: using SAG-13 binning scheme

# Initialize   
int_gamma_samples = np.empty(len(sampler.flatchain))


# !!!!! Choose period or insolation bin edges (according to stellar type), for occurrence rate calculation
if(periodInsolationSwitch == 'I'):

    if(stellarTypeSwitch == 'A'):
        # Select A dwarfs
        # !!!!! Using same range as for G dwarfs, for now
        x1 = 0.295
        x2 = 1.824
        
    elif(stellarTypeSwitch == 'F'):
        # Select F dwarfs
        # !!!!! Using same range as for G dwarfs, for now
        x1 = 0.295
        x2 = 1.824

    elif(stellarTypeSwitch == 'G'):
        # Select G dwarfs
        x1 = 0.295
        x2 = 1.824
       
    elif(stellarTypeSwitch == 'K'):
        # Select K dwarfs
        x1 = 0.235
        x2 = 1.681

    elif(stellarTypeSwitch == 'M'):
        # Select M dwarfs
        x1 = 0.205
        x2 = 1.514

elif(periodInsolationSwitch == 'P'):
    
    x1 = 160
    x2 = 320

# !!!!! Choose radius bin edges for occurrence rate calculation
radius1 = 1.0
radius2 = 1.5

# Integrate the gamma posterior distribution over the selected phase space bin
# NOTE -- This gives the exoplanet occurrence rate in that phase space bin!
for i, theta in enumerate(sampler.flatchain):
    int_gamma_samples[i] = integrated_gamma(theta,x1,x2,radius1,radius2,periodInsolationSwitch)

# print occurrence rate for this parameter space region
print("16, 50, 84 percentile range for integrated gamma = {0} ".format(np.percentile(int_gamma_samples,[16,50,84])))

# Plot the posterior of int gamma_samples over selected planet radius and period range             
pl.hist(int_gamma_samples, 100, histtype="step", color="k")
pl.gca().set_yticklabels([])
if(periodInsolationSwitch == 'I'):
    pl.title("Integrated occurrence rate over radius and insolation")
elif(periodInsolationSwitch == 'P'):
    pl.title("Integrated occurrence rate over radius and period")
    
pl.xlabel(r"Integrated  $\Gamma$ (i.e. occurrence rate) in parameter space region [x1, x2, radius1, radius2] = {0}".format([x1,x2,radius1,radius2]));    
# pl.xlabel(r"$\log_{10}\Gamma_\oplus = \left. \log_{10}\mathrm{d}N / \mathrm{d}\ln P \, \mathrm{d}\ln R_p \right |_\oplus$");
pl.xticks
pl.gca().set_xlim([0,1])


16, 50, 84 percentile range for integrated gamma = [ 0.21622836  0.25528693  0.29876725] 
Out[56]:
(0, 1)

In [57]:
#=====================================================================
# Gather and save occurrence rate data for emcee sampler
# Could make this a function which takes as inputs
# the posterior samples array 
# the names of the files to save to
# nSamplesPerChain
# nChains
samples = sampler.flatchain

# Define radius and period bin edges
radiusBinEdges = [0.67, 1.0, 1.5, 2.3, 3.4, 5.1, 7.6, 11, 17]
periodBinEdges = [10, 20, 40, 80, 160, 320, 640]

# Initialize dictionary
# occurrenceRateData = dict()

# Initialize lists for median, lower and upper occ rate limits
listMedian = [None]*(len(radiusBinEdges)-1)
listLowerCredibleInterval = [None]*(len(radiusBinEdges)-1)
listUpperCredibleInterval = [None]*(len(radiusBinEdges)-1)

# Loop over desired period and radius bins and compute occurrence rates
for iRadius in range(len(radiusBinEdges)-1):
    # Initialize lists for rows
    listMedianRow = []
    listLowerCredibleIntervalRow = []
    listUpperCredibleIntervalRow = []
    for iPeriod in range(len(periodBinEdges)-1):
        # Integrate the gamma posterior distribution over the selected phase space bin
        # NOTE -- This gives the exoplanet occurrence rate in that phase space bin
        int_gamma_samples = [None]*(len(samples))
        for i, theta in enumerate(samples):
            int_gamma_samples[i] = integrated_gamma(theta,periodBinEdges[iPeriod],periodBinEdges[iPeriod+1],
                                                   radiusBinEdges[iRadius],radiusBinEdges[iRadius+1],periodInsolationSwitch)

        # Print result for this bin
        print("\nRadius Bin "+str(radiusBinEdges[iRadius])+" to "+str(radiusBinEdges[iRadius+1])+" , Period Bin "+str(periodBinEdges[iPeriod])+" to "+str(periodBinEdges[iPeriod+1]))
        print("     Occurrence rate 16, 50, 84 percentile range : {0} ".format(np.percentile(int_gamma_samples,[16,50,84])))

        # Populate the dictionary with the result for this bin
        # binKey = "bin_" + str(radiusBinEdges[iRadius]) + "_" + str(periodBinEdges[iPeriod])
        # occurrenceRateData[binKey] = np.percentile(int_gamma_samples,[16,50,84])
        
        # Accumulate a row vector of median occurrence rates and their upper and lower credible intervals
        # for period bins at this radius
        listMedianRow.append(np.percentile(int_gamma_samples,[50]))
        listLowerCredibleIntervalRow.append(np.percentile(int_gamma_samples,[16]))
        listUpperCredibleIntervalRow.append(np.percentile(int_gamma_samples,[84]))
              
    # Append the row vector of results into the arrays for the median and lower and upper occ rate limits
    listMedian[iRadius] = listMedianRow
    listLowerCredibleInterval[iRadius] = listLowerCredibleIntervalRow
    listUpperCredibleInterval[iRadius] = listUpperCredibleIntervalRow
        
# Save the dictionary
#with open('occurrence_rate_data_'+stellarTypeSwitch+'_'+catalogSwitch+'.pkl', 'wb') as handle:
#    pickle.dump(occurrenceRateData, handle)
    
# Save the .csv files
np.savetxt('eta_emcee_occurrence_rate_median_'+str(nSamplesPerChain)+'_samples_per_chain_'+str(nChains)+'_chains_'+stellarTypeSwitch+'_'+catalogSwitch+'.csv', np.array(listMedian), delimiter=",",fmt='%.5f')
np.savetxt('sigma_n_emcee_occurrence_rate_lower_credible_interval_'+str(nSamplesPerChain)+'_samples_per_chain_'+str(nChains)+'_chains_'+stellarTypeSwitch+'_'+catalogSwitch+'.csv', np.array(listLowerCredibleInterval), delimiter=",",fmt='%.5f')
np.savetxt('sigma_p_emcee_occurrence_rate_upper_credible_interval_'+str(nSamplesPerChain)+'_samples_per_chain_'+str(nChains)+'_chains_'+stellarTypeSwitch+'_'+catalogSwitch+'.csv', np.array(listUpperCredibleInterval), delimiter=",",fmt='%.5f')


Radius Bin 0.67 to 1.0 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [ 0.01697913  0.02111839  0.02625583] 

Radius Bin 0.67 to 1.0 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [ 0.03640067  0.04422942  0.05413453] 

Radius Bin 0.67 to 1.0 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.07629623  0.09294173  0.11336225] 

Radius Bin 0.67 to 1.0 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 0.1577428   0.19563236  0.24164823] 

Radius Bin 0.67 to 1.0 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.32175702  0.41167435  0.52333415] 

Radius Bin 0.67 to 1.0 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.649626    0.86589543  1.14307557] 

Radius Bin 1.0 to 1.5 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [ 0.01111463  0.01307856  0.0153168 ] 

Radius Bin 1.0 to 1.5 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [ 0.02425207  0.02748976  0.03102484] 

Radius Bin 1.0 to 1.5 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.05167245  0.05770078  0.0641743 ] 

Radius Bin 1.0 to 1.5 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 0.10685762  0.12125273  0.13689892] 

Radius Bin 1.0 to 1.5 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.21622836  0.25528693  0.29876725] 

Radius Bin 1.0 to 1.5 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.43265022  0.53721108  0.65897545] 

Radius Bin 1.5 to 2.3 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [ 0.00708462  0.00830386  0.00971974] 

Radius Bin 1.5 to 2.3 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [ 0.01564677  0.01745694  0.01939998] 

Radius Bin 1.5 to 2.3 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.03406951  0.03663532  0.0393611 ] 

Radius Bin 1.5 to 2.3 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 0.070993    0.07702375  0.08331605] 

Radius Bin 1.5 to 2.3 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.14288983  0.16213862  0.18227489] 

Radius Bin 1.5 to 2.3 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.28483678  0.34071207  0.40403671] 

Radius Bin 2.3 to 3.4 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [ 0.00371142  0.00460795  0.00570869] 

Radius Bin 2.3 to 3.4 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [ 0.00813227  0.00968417  0.01146079] 

Radius Bin 2.3 to 3.4 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.017589    0.02031637  0.02339145] 

Radius Bin 2.3 to 3.4 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 0.03724666  0.04264854  0.04896167] 

Radius Bin 2.3 to 3.4 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.07659893  0.08978363  0.10469952] 

Radius Bin 2.3 to 3.4 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.15503191  0.18887448  0.22805749] 

Radius Bin 3.4 to 5.1 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [ 0.00219214  0.00294664  0.00393817] 

Radius Bin 3.4 to 5.1 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [ 0.00476625  0.00618782  0.00798151] 

Radius Bin 3.4 to 5.1 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.01025204  0.0129957   0.0163895 ] 

Radius Bin 3.4 to 5.1 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 0.02176981  0.02726825  0.03414328] 

Radius Bin 3.4 to 5.1 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.04546943  0.05734058  0.07215963] 

Radius Bin 3.4 to 5.1 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.09322541  0.12061432  0.15470761] 

Radius Bin 5.1 to 7.6 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [ 0.00120847  0.00177301  0.0025961 ] 

Radius Bin 5.1 to 7.6 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [ 0.00260622  0.00372696  0.00529264] 

Radius Bin 5.1 to 7.6 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.00559885  0.00783354  0.01090449] 

Radius Bin 5.1 to 7.6 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 0.01189234  0.01642107  0.02267473] 

Radius Bin 5.1 to 7.6 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.02500666  0.03456023  0.04765733] 

Radius Bin 5.1 to 7.6 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.05185818  0.07271959  0.1011588 ] 

Radius Bin 7.6 to 11 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [ 0.00063775  0.00102656  0.00164285] 

Radius Bin 7.6 to 11 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [ 0.00137396  0.00216058  0.00337221] 

Radius Bin 7.6 to 11 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.00294349  0.00453781  0.00695053] 

Radius Bin 7.6 to 11 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 0.0062579   0.00953527  0.01443831] 

Radius Bin 7.6 to 11 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.01319979  0.02003272  0.03019156] 

Radius Bin 7.6 to 11 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.02757808  0.04214948  0.0639021 ] 

Radius Bin 11 to 17 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [ 0.00041808  0.00074423  0.00130949] 

Radius Bin 11 to 17 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [ 0.00089879  0.00156513  0.00268973] 

Radius Bin 11 to 17 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.00192914  0.00328552  0.00554846] 

Radius Bin 11 to 17 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 0.0041006   0.00690116  0.01153589] 

Radius Bin 11 to 17 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.00866177  0.01450183  0.024038  ] 

Radius Bin 11 to 17 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.01815342  0.03052886  0.05071257] 

Part 2 of notebook: Calculate Planet Occurrence Rates using PyStan

Begin to incorporate uncertainty, starting with uncertainty in planet radius, after pre-computing completeness.
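
A minimal sketch of the idea (the names lnlike_mc and rp_draw are hypothetical; this illustrates marginalizing over radius uncertainty, not the actual Stan model below), assuming the radius uncertainty is treated as uniform between the catalog error bars and that only the per-planet rate term is marginalized while the completeness stays pre-computed:

# Hedged sketch only -- not the Stan model; assumes uniform radius uncertainty
# between the catalog error bars (koi_prad_err2 is negative, koi_prad_err1 is positive).
def lnlike_mc(theta, n_draws=64):
    pop = population_model(theta, x_grid, rp_grid2) * new_completeness
    pop = 0.5 * (pop[:-1, :-1] + pop[1:, 1:])
    norm = np.sum(pop * vol)
    lo = koi_rps + np.array(kois.koi_prad_err2)
    hi = koi_rps + np.array(kois.koi_prad_err1)
    rp_draw = np.random.uniform(lo, hi, size=(n_draws, len(koi_rps)))
    # Average the power-law rate at each detected planet over its radius draws
    rate = population_model(theta, koi_x[None, :], rp_draw).mean(axis=0)
    ll = np.sum(np.log(rate)) - norm
    return ll if np.isfinite(ll) else -np.inf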


In [58]:
import pystan
import corner
import numpy as np
import gc
np.set_printoptions(threshold=np.inf)
import matplotlib.pyplot as plt

#%%===============================================================================================
# Pre-compute some values to be used in the Stan model

# change to column major for PyStan version (overwrite old version here)

if(periodInsolationSwitch == "P"):
    # Period grid
    x_grid, rp_grid2 = np.meshgrid(period, rp2, indexing="xy")

elif(periodInsolationSwitch == "I"):
    # Construct grid for insolation
    x_grid, rp_grid2 = np.meshgrid(insolation, rp2, indexing="xy")

# Column names for RMS CDPP
cdpp_cols = [k for k in stlr.keys() if k.startswith("rrmscdpp")]

# Values of RMS CDPP corresponding to the 14 pulse durations, for all selected stars (one row per star)
# cdpp_obs = np.array(star[cdpp_cols], dtype=float)  # <- for a single star, use star instead of stlr
cdpp_obs_all = np.array(stlr[cdpp_cols], dtype=float)

# Column names for the 14 MES thresholds
mesthres_cols = [k for k in stlr.keys() if k.startswith("mesthres")]

# Values for the 14 pulse durations derived from the column names
pulse_durations_obs = np.array([k[-4:].replace("p", ".") for k in mesthres_cols], dtype=float)
nPulses = len(pulse_durations_obs)

# MES threshold values corresponding to the 14 pulse durations
mesthres_obs_all = np.array(stlr[mesthres_cols],dtype=float)
#print(mesthres_obs_all)                        
# mest <- np.interp(tau, pulse_durations_obs, mesthres_obs, dtype=float)
# x <- mes - 4.1 - (mest - 7.1);

if(periodInsolationSwitch == "P"):
    # Period grid
    x_rng = (20, 300)
    x_grid = np.linspace(x_rng[0], x_rng[1], 57)
    nxGrid = len(x_grid)

elif(periodInsolationSwitch == "I"):
    # Insolation grid
    x_rng = (0.2, 20)
    x_grid = np.linspace(x_rng[0], x_rng[1], 57)
    nxGrid = len(x_grid)

# Planet radius grid
planet_radius_rng = (0.75, 2.5)
planet_radius_grid = np.linspace(planet_radius_rng[0], planet_radius_rng[1], 61)
nRadiusGrid = len(planet_radius_grid)

# Make 2D meshgrids for bin edges.
planet_radius_grid_vol, x_grid_vol = np.meshgrid(planet_radius_grid, x_grid, indexing="xy")

# 2D grid of [period or insolation, radius] parameter space bin volumes at bin centers
vol = np.diff(x_grid_vol, axis=0)[:, :-1] * np.diff(planet_radius_grid_vol, axis=1)[:-1, :]
volShape = list(vol.shape)

# Stellar parameters (for all stars in the stellar catalog)
teffStar = stlr.teff
teffSun = 5777
rStar = stlr.radius
mStar = stlr.mass

# number of stars in the stellar catalog:
nStar = len(stlr)

# Period or insolation, and radius, for planets in the planet catalog
if(periodInsolationSwitch == "P"):
    koi_x = np.array(kois.koi_period)
elif(periodInsolationSwitch == "I"):
    koi_x = np.array(kois.koi_insol)

# Catalog radii of observed planets
koi_rps = np.array(kois.koi_prad)

# Catalog radius errors of observed planets
# Replace zeros of koi_rps_err1 with the minimum nonzero value of koi_rps_err1
koi_rps_err1 = np.array(kois.koi_prad_err1)
ind1Zero = np.where(koi_rps_err1 == 0)[0]
ind1NonZero = np.where(koi_rps_err1 != 0)[0]
koi_rps_err1[ind1Zero] = np.min(koi_rps_err1[ind1NonZero])

# Replace zeros of koi_rps_err2 with the negative minimum nonzero abs value of koi_rps_err2
koi_rps_err2 = np.array(kois.koi_prad_err2)
ind2Zero = np.where(koi_rps_err2 == 0)[0]
ind2NonZero = np.where(koi_rps_err2 != 0)[0]
koi_rps_err2[ind2Zero] = -np.min(np.abs(koi_rps_err2[ind2NonZero]))

# Number of planets selected from the KOI catalog
nKois = len(kois)

# Eccentricity is set to zero for now.
e = 0.0

# Dataspan is length of time between first and last observations of each star
star_dataspan = stlr.dataspan

# Dutycycle is the fraction of cadences with valid observations
star_dutycycle = stlr.dutycycle

# Upper and lower limits for uniform distribution
koi_rps_obs_lower = koi_rps + koi_rps_err2
koi_rps_obs_upper = koi_rps + koi_rps_err1

# Garbage collection
gc.isenabled()
gc.collect()


Out[58]:
32086

In [59]:
# Check that everything is there
print(new_completeness.shape)
print(cdpp_cols)
print(len(cdpp_obs_all))
print(len(pulse_durations_obs))
print(mesthres_cols)
print(len(mesthres_obs_all))
print(nRadiusGrid)
print(nxGrid)
#print(nPeriodGrid)
#print(nInsolationGrid)
print(planet_radius_grid_vol.shape)
print(x_grid_vol.shape)
print(vol.shape)
print(nStar)
print(len(koi_x))
#print(len(koi_insolation))
print(len(koi_rps))
print(len(koi_rps_err1))
print(len(koi_rps_err2))
print(nKois)
print(planet_radius_rng)
#print(insolation_rng)
print(x_rng)
print(x_grid.shape)
#print(insolation_grid.shape)
print(koi_rps_obs_lower)
print(np.hstack((koi_rps.reshape(-1,1) + koi_rps_err2.reshape(-1,1), koi_rps.reshape(-1,1), koi_rps.reshape(-1,1) + koi_rps_err1.reshape(-1,1))))
rp_err_down = koi_rps.reshape(-1,1) + koi_rps_err2.reshape(-1,1)
rp_err_up = koi_rps.reshape(-1,1) + koi_rps_err1.reshape(-1,1)
print(koi_rps_err1)
print(koi_rps_err2)


(57, 61)
['rrmscdpp01p5', 'rrmscdpp02p0', 'rrmscdpp02p5', 'rrmscdpp03p0', 'rrmscdpp03p5', 'rrmscdpp04p5', 'rrmscdpp05p0', 'rrmscdpp06p0', 'rrmscdpp07p5', 'rrmscdpp09p0', 'rrmscdpp10p5', 'rrmscdpp12p0', 'rrmscdpp12p5', 'rrmscdpp15p0']
96543
14
['mesthres01p5', 'mesthres02p0', 'mesthres02p5', 'mesthres03p0', 'mesthres03p5', 'mesthres04p5', 'mesthres05p0', 'mesthres06p0', 'mesthres07p5', 'mesthres09p0', 'mesthres10p5', 'mesthres12p0', 'mesthres12p5', 'mesthres15p0']
[Output abridged: the remaining printed diagnostics give the number of target stars (96543), the grid dimensions (61 and 57 points; completeness arrays of shape (57, 61) and bin volumes of shape (56, 60)), the number of KOIs in the sample (223), the planet radius range (0.75, 2.5) R_Earth and the period range (20, 300) days, followed by the 223 observed KOI radii, the corresponding [lower, observed, upper] radius values, and the positive and negative KOI radius uncertainties.]

In [60]:
from pystan import StanModel

stan_model_occ = """

functions {

real[,] population_model_insolation(int nxGrid, int nRadiusGrid, 
real lnf0, real alpha, real beta, real[] x_grid, real[] planet_radius_grid, 
real[] planet_radius_rng, real[] x_rng) {

/* Double power law population model in planet radius and period (or insolation).
   Evaluates the number density of planets on a grid over the
   [period or insolation, planet radius] parameter space.

   :param nxGrid: size of the period/insolation grid
   :param nRadiusGrid: size of the radius grid
   :param lnf0: log of the normalization
   :param alpha: exponent of the radius power law
   :param beta: exponent of the period/insolation power law
   :param x_grid: 1D period/insolation grid
   :param planet_radius_grid: 1D radius grid
   :param planet_radius_rng: lower and upper boundaries of the radius grid
   :param x_rng: lower and upper boundaries of the period/insolation grid
*/

// Declare internal variables
   real powerLawNumberDensity[nRadiusGrid,nxGrid];

// Loop over phase space bins, compute number density in each bin, 
//      from the power law using the input lnf0, alpha and beta parameters

    for (j in 1:nxGrid) 
       for (i in 1:nRadiusGrid) {

       powerLawNumberDensity[i,j] <- ( exp( lnf0 ) * ( alpha+1 ) * ( beta+1 ) * ( planet_radius_grid[i]^alpha ) * ( x_grid[j]^beta ) ) / ( ( planet_radius_rng[2]^( alpha+1 ) - ( planet_radius_rng[1]^( alpha+1 ) ) ) * ( ( x_rng[2]^( beta+1 ) ) - ( x_rng[1]^( beta+1 ) ) ) );
     }
   return powerLawNumberDensity;
}

real lnlike(real alpha, real beta, real lnf0, int nKois, real[] planet_radius_rng, 
real[] x_rng, real[,] vol, real[] x_grid, real[] planet_radius_grid, 
real[] koi_x, real[] koi_rps_true, int nRadiusGrid, int nxGrid, real e, real[,] new_completeness ) {

/* Log-likelihood for the inhomogeneous Poisson process occurrence rate model

    :param alpha: exponent of the radius power law
    :param beta: exponent of the period/insolation power law
    :param lnf0: log of the normalization
    :param nKois: number of detected planet candidates (KOIs)
    :param planet_radius_rng: lower and upper boundaries of the radius grid
    :param x_rng: lower and upper boundaries of the period/insolation grid
    :param vol: phase-space volume of each grid bin
    :param x_grid: 1D period/insolation grid
    :param planet_radius_grid: 1D radius grid
    :param koi_x: period/insolation of each detected planet
    :param koi_rps_true: true (latent) radius of each detected planet, sampled as a parameter
    :param nRadiusGrid: size of the radius grid
    :param nxGrid: size of the period/insolation grid
    :param e: passed through from the data block (not used in this function)
    :param new_completeness: pre-computed completeness on the [x, radius] grid

*/

// Declare internal variables

    real pop[nRadiusGrid,nxGrid];
    real pop1[nRadiusGrid,nxGrid];
    real pop2[nRadiusGrid,nxGrid];
    real populationDensity[nRadiusGrid-1,nxGrid-1];
    real sumExpectedCounts;
    real logPop[nKois];
    real lnLikelihood;
    real sumLogPop;
    real lnLikelihood0;
    real popModelInsol0[nRadiusGrid,nxGrid];
    real popModelInsol[nKois,nKois];
    
     
    // Compute the population model for a grid over [ period or insolation, radius ] parameter space
    popModelInsol0 <- population_model_insolation(nxGrid, nRadiusGrid, lnf0, alpha, beta, x_grid, planet_radius_grid, planet_radius_rng, x_rng);

    // Compute the expected density of detections per bin in the [ period or insolation, radius ] parameter space,
    // based on the population model and the completeness
        for (i in 1:nRadiusGrid)
            for (j in 1: nxGrid) {

            //sum_completeness_0[i,j] <- new_completeness[j,i];
             pop[i,j] <- popModelInsol0[i,j] * new_completeness[j,i];

        }
        
    // Initialize
    sumExpectedCounts <- 0.0; 
    sumLogPop <-0.0;
    
    for (i in 1:nRadiusGrid-1)
      for (j in 1: nxGrid-1) {

      
         // Compute the number density of detections on a grid of bin centers
         
         // Figured out the proper syntax for translation of the Python line below into Stan
         // pop <- 0.5*(pop[:-1, :-1] + pop[1:, 1:]) is python syntax from DFM notebook.
         // Note that indices start with 1 in Stan
         pop1[i,j] <- pop[i,j];
         pop2[i,j] <- pop[i+1,j+1];
         populationDensity[i,j] <- 0.5*(pop1[i,j]+pop2[i,j]);
         
         // Integrate the planet counts over the phase space volume
         sumExpectedCounts <- sumExpectedCounts + (populationDensity[i,j]*vol[j,i]);
      }
   
   // Population model evaluated over the list of detected planets
   // 
   popModelInsol <- population_model_insolation(nKois, nKois, lnf0, alpha, beta, koi_x, koi_rps_true, planet_radius_rng, x_rng);

    for (k in 1:nKois) {
        logPop[k] <- log( popModelInsol[k,k] );
        sumLogPop <- sumLogPop + logPop[k];
    }
    
    // Combine the two terms to form the log-likelihood function for the Inhomogeneous Poisson Process 
    lnLikelihood <- sumLogPop - sumExpectedCounts;
    
    // Catch error: if lnLikelihood has bad value, return a very small number
    lnLikelihood0 <- if_else( is_inf(lnLikelihood) || is_nan(lnLikelihood) , -1e15, lnLikelihood );
    
   return lnLikelihood0;
   
} 

}
// end function block


data {

// Declare precomputed variables as data

int<lower=1> nKois;

int<lower=1> nRadiusGrid;

int<lower=1> nxGrid;

real x_rng[2]; 

real planet_radius_rng[2];

real x_grid[nxGrid];

real planet_radius_grid[nRadiusGrid];

real vol[nxGrid-1,nRadiusGrid-1];

real koi_x[nKois];

real koi_rps_obs[nKois];

real koi_rps_err1[nKois];

real koi_rps_err2[nKois];

real koi_rps_obs_lower[nKois];

real koi_rps_obs_upper[nKois];

real e;

real new_completeness[nxGrid,nRadiusGrid];


}


parameters {

// For v1: only compute population level parameter posteriors.
    real<lower=-5,upper=5> alpha; 
    real<lower=-5,upper=5> beta;
    real<lower=-5,upper=5> lnf0;

// Positivity constraint on the true planet radii
// (an ad hoc lower bound of 0.5 R_Earth was also tried and is commented out)
    //real<lower=0.5> koi_rps_true[nKois];
    real<lower=0.0> koi_rps_true[nKois];
}

transformed parameters {

    
    //real<lower=-0.3> koi_rps_true_log10[nKois];
    //real<lower=0.0> ave_log10_sigma[nKois];
    

}
model {


    // Hyperpriors
    alpha ~ uniform(-5,5);
    beta ~ uniform(-5,5);
    lnf0 ~ uniform(-5,5);

    // Sample using lnLikelihood function
    increment_log_prob(lnlike(alpha, beta, lnf0, nKois, planet_radius_rng, x_rng, vol, x_grid, planet_radius_grid, koi_x, koi_rps_true, nRadiusGrid, nxGrid, e, new_completeness));

    for (n in 1:nKois)
        // First test: treat the measurement uncertainty as symmetric, using |err2| as the Gaussian sigma
        koi_rps_obs[n] ~ normal(koi_rps_true[n], koi_rps_err2[n]);
   
}


"""

# Compiled Stan Model
sm = StanModel(model_code=stan_model_occ)
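For reference, population_model_insolation above evaluates a double power law number density in planet radius $R$ and period or insolation $x$, normalized so that it integrates to $e^{\ln f_0}$ over the full $[x_{\rm min}, x_{\rm max}] \times [R_{\rm min}, R_{\rm max}]$ box:

$$\Gamma(R, x) = \frac{e^{\ln f_0}\,(\alpha+1)(\beta+1)\,R^{\alpha}\,x^{\beta}}{\left(R_{\rm max}^{\alpha+1} - R_{\rm min}^{\alpha+1}\right)\left(x_{\rm max}^{\beta+1} - x_{\rm min}^{\beta+1}\right)},$$

and lnlike combines this density with the pre-computed completeness $\eta(R, x)$ to form the inhomogeneous Poisson process log-likelihood,

$$\ln \mathcal{L} = \sum_{k=1}^{N_{\rm KOI}} \ln \Gamma(R_k, x_k) \;-\; \int \Gamma(R, x)\,\eta(R, x)\,\mathrm{d}R\,\mathrm{d}x,$$

where the integral is approximated by summing bin-center densities times the vol bin volumes.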

In [71]:
'''data = {'new_completeness':new_completeness, 'star_dutycycle':star_dutycycle, 'star_dataspan':star_dataspan, 
        'nStar':nStar, 'nKois':nKois,'nPulses':nPulses,'nRadiusGrid':nRadiusGrid, 'nxGrid':nxGrid, 
        'cdpp_obs_all':cdpp_obs_all, 'mesthres_obs_all':mesthres_obs_all, 'pulse_durations_obs':pulse_durations_obs, 
        'x_rng':x_rng, 'planet_radius_rng':planet_radius_rng,
        'x_grid':x_grid, 'planet_radius_grid':planet_radius_grid, 
        'vol':vol, 'teffStar':teffStar,'teffSun':teffSun, 'rStar':rStar, 'mStar':mStar,
        'koi_x':koi_x, 'koi_rps':koi_rps, 'e':e}
'''
data = {'new_completeness':new_completeness, 'nKois':nKois,'nRadiusGrid':nRadiusGrid, 'nxGrid':nxGrid, 
        'x_rng':x_rng, 'planet_radius_rng':planet_radius_rng,'x_grid':x_grid, 
        'planet_radius_grid':planet_radius_grid, 
        'vol':vol,'koi_x':koi_x, 'koi_rps_obs':koi_rps, 'koi_rps_err1':koi_rps_err1, 'koi_rps_err2':abs(koi_rps_err2),'e':e,
        'koi_rps_obs_lower':koi_rps_obs_lower,'koi_rps_obs_upper':koi_rps_obs_upper}

# Could set initial value to Max Likelihood solution without radius errors
#init = [{'lnf0':0.66,'alpha':-1.82,'beta':-0.65}]
#init = [{'lnf0':-0.1,'alpha':-2.2,'beta':-0.5}]

## TO DO: initialize planet radius arrays
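# Note: if init is passed as a list, PyStan expects one dictionary per chain
# (len(init) == chains). A hypothetical per-chain initialization, reusing the
# commented-out values above and starting the latent radii at the observed
# values, could look like:
# init = [{'lnf0': 0.66, 'alpha': -1.82, 'beta': -0.65,
#          'koi_rps_true': list(koi_rps)} for _ in range(nChains)]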

# Set total number of samples per chain and number of chains
# number of production samples is half of the total number of samples;
# the other half are warmup samples.
nTotalPerChain = 30000
nSamplesPerChain = int(round(nTotalPerChain/2))
nChains = 5
fit = sm.sampling(data=data, iter=nTotalPerChain, chains=nChains, n_jobs=-1)
#fit = sm.sampling(data=data, iter=2000, chains=5, init=init, n_jobs=-1)
#fit = pystan.stan(model_code=stan_model_occ, data=data, iter=100, chains=2, n_jobs=-1, verbose=True);

#get_inits(fit)

# Return a dictionary of arrays of posterior samples
la = fit.extract(permuted=True)  
alpha = la['alpha']
beta = la['beta']
lnf0 = la['lnf0']

a = fit.extract(permuted=False)
print(fit)


//anaconda/lib/python2.7/multiprocessing/queues.py:392: UserWarning: Pickling fit objects is an experimental feature!
The relevant StanModel instance must be pickled along with this fit object.
When unpickling the StanModel must be unpickled first.
  return send(obj)
(The same UserWarning is repeated once per worker process; duplicates omitted.)
Inference for Stan model: anon_model_6cb8c79e1dab10b13f015b6198ed9710.
5 chains, each with iter=30000; warmup=15000; thin=1; 
post-warmup draws per chain=15000, total post-warmup draws=75000.

                    mean se_mean     sd   2.5%    25%    50%    75%  97.5%  n_eff   Rhat
alpha               -5.0  1.7e-5 4.5e-3   -5.0   -5.0   -5.0   -5.0  -4.98  75000   4.29
beta                0.15  3.3e-4   0.09 5.5e-3   0.06   0.13   0.23   0.31  75000   2.13
lnf0                1.23  4.1e-4   0.11   1.03   1.14   1.24   1.33   1.41  75000   2.45
koi_rps_true[0]     1.51  5.4e-4   0.15   1.22   1.47   1.54   1.63   1.72  75000   4.13
koi_rps_true[1]     2.03  2.8e-4   0.08   1.92   1.98   2.02   2.06   2.22  75000   1.47
koi_rps_true[2]     1.47  1.6e-4   0.04   1.39   1.43   1.46    1.5   1.56  75000   1.37
koi_rps_true[3]      2.2  5.8e-4   0.16   1.84   2.11   2.21   2.31   2.47  75000   1.81
koi_rps_true[4]      2.2  6.9e-4   0.19   1.81   2.15    2.2   2.32   2.56  75000   2.97
koi_rps_true[5]     1.74  3.9e-4   0.11   1.57   1.66   1.74   1.83   1.95  75000    1.7
koi_rps_true[6]     1.54  3.2e-4   0.09   1.39   1.46   1.53   1.61   1.68  75000   1.77
koi_rps_true[7]     1.71  4.1e-4   0.11   1.48   1.63   1.73    1.8   1.88  75000   1.52
koi_rps_true[8]     1.46  3.9e-4   0.11   1.28   1.38   1.42   1.55   1.65  75000    2.2
koi_rps_true[9]     2.05  5.0e-4   0.14   1.82   1.92   2.07   2.13   2.31  75000   2.09
koi_rps_true[10]    2.37  1.9e-4   0.05   2.28   2.34   2.37   2.41    2.5  75000    1.5
koi_rps_true[11]    1.58  7.4e-4    0.2   1.18   1.49   1.63   1.72   1.91  75000   3.82
koi_rps_true[12]    2.07  6.3e-4   0.17   1.81   1.91   2.07   2.21   2.39  75000   2.11
koi_rps_true[13]    2.13  5.4e-4   0.15   1.89   1.99   2.16   2.26   2.35  75000    2.1
koi_rps_true[14]    2.22  6.0e-4   0.16   2.01   2.09   2.17   2.35   2.54  75000   2.55
koi_rps_true[15]    2.21  5.6e-4   0.15   1.94   2.12   2.23   2.32   2.48  75000   2.36
koi_rps_true[16]    1.79  2.5e-4   0.07   1.67   1.74   1.79   1.83   1.93  75000   1.29
koi_rps_true[17]    2.01  1.0e-3   0.28    1.6   1.76    1.9   2.32   2.36  75000   4.68
koi_rps_true[18]    1.01  8.1e-5   0.02   0.97    1.0   1.01   1.02   1.07  75000   1.25
koi_rps_true[19]    1.75  1.0e-3   0.28   1.41   1.47   1.71   1.98   2.31  75000   2.72
koi_rps_true[20]    2.02  6.9e-4   0.19   1.64   1.95   2.07   2.15   2.27  75000   3.48
koi_rps_true[21]    1.64  4.1e-4   0.11   1.44   1.54   1.67   1.74    1.8  75000   2.27
koi_rps_true[22]    2.41  3.7e-4    0.1   2.18   2.34   2.42   2.49   2.57  75000   1.55
koi_rps_true[23]    2.13  5.7e-4   0.16   1.87    2.0   2.14   2.27   2.39  75000   1.98
koi_rps_true[24]    1.68  9.4e-4   0.26    1.3   1.48   1.59   1.97   2.12  75000   2.81
koi_rps_true[25]    2.45  8.7e-4   0.24   2.14   2.23   2.43    2.6   2.88  75000   2.87
koi_rps_true[26]    1.28  5.7e-4   0.16   1.05   1.12   1.33   1.41   1.55  75000   2.18
koi_rps_true[27]    1.99  4.9e-4   0.14   1.73   1.92    2.0   2.09   2.21  75000   3.57
koi_rps_true[28]    0.97  7.0e-5   0.02   0.94   0.96   0.97   0.99   1.01  75000   1.26
koi_rps_true[29]    1.86  6.3e-4   0.17   1.57   1.79   1.86   1.93   2.26  75000   2.57
koi_rps_true[30]    1.64  4.2e-4   0.12   1.42   1.56   1.63   1.73   1.87  75000   1.87
koi_rps_true[31]    1.91  4.4e-4   0.12   1.67   1.82   1.94   2.01   2.09  75000    2.1
koi_rps_true[32]    2.11  4.8e-4   0.13   1.87   2.01   2.12   2.19   2.39  75000   1.44
koi_rps_true[33]    1.59  4.2e-4   0.11   1.35   1.53   1.59   1.67   1.78  75000    1.7
koi_rps_true[34]    1.51  5.2e-4   0.14   1.27   1.39   1.49   1.61    1.8  75000   2.14
koi_rps_true[35]    1.44  2.9e-3   0.79 4.2e-62   1.31   1.65    1.9   2.71  75000   7.72
koi_rps_true[36]    1.27  5.4e-4   0.15   1.05   1.18   1.25   1.34   1.63  75000    1.8
koi_rps_true[37]    2.41  5.1e-4   0.14   2.15    2.3   2.41   2.52   2.66  75000   1.59
koi_rps_true[38]    2.14  6.2e-4   0.17   1.91   1.98   2.12   2.29   2.45  75000   2.02
koi_rps_true[39]     2.2  6.3e-4   0.17    1.9   2.07   2.23   2.31   2.57  75000    2.6
koi_rps_true[40]    2.42  1.4e-3   0.39   1.68   2.12    2.5   2.69   2.99  75000   2.15
koi_rps_true[41]    0.78  3.7e-4    0.1   0.59   0.76    0.8   0.85   0.94  75000   2.86
koi_rps_true[42]    1.44  6.4e-4   0.18    1.1    1.3   1.45    1.6   1.72  75000   2.55
koi_rps_true[43]    2.32  8.4e-4   0.23    1.9   2.11   2.37   2.46   2.74  75000   2.97
koi_rps_true[44]    1.14  1.4e-4   0.04   1.06   1.12   1.14   1.16   1.22  75000   1.37
koi_rps_true[45]    1.36  1.6e-4   0.05   1.25   1.33   1.37    1.4   1.43  75000   1.57
koi_rps_true[46]     2.0  4.5e-4   0.12   1.78   1.91   2.01    2.1   2.19  75000   2.68
koi_rps_true[47]    1.76  4.3e-4   0.12   1.51    1.7   1.76   1.84   2.01  75000   1.32
koi_rps_true[48]    1.87  3.9e-4   0.11   1.69    1.8   1.87   1.93   2.07  75000   1.33
koi_rps_true[49]    1.69  1.3e-3   0.36   1.04   1.39   1.77   1.97   2.24  75000    3.8
koi_rps_true[50]    1.83  7.9e-4   0.22   1.42   1.67   1.76   1.95   2.32  75000   2.15
koi_rps_true[51]    1.68  5.6e-4   0.15   1.45   1.55   1.69   1.77   2.05  75000   2.76
koi_rps_true[52]    0.98  1.1e-3   0.31   0.39   0.89   0.94   1.07   1.61  75000   6.69
koi_rps_true[53]    1.95  4.4e-4   0.12   1.57    1.9   1.97   2.02   2.12  75000   1.26
koi_rps_true[54]    1.93  1.3e-3   0.35    1.2   1.68   2.04   2.23   2.38  75000   3.77
koi_rps_true[55]    1.84  4.1e-4   0.11   1.64   1.76   1.82   1.92   2.09  75000   1.33
koi_rps_true[56]    2.06  4.5e-4   0.12   1.84   1.98   2.05   2.12   2.34  75000   1.97
koi_rps_true[57]    1.83  4.0e-4   0.11   1.66   1.75   1.81   1.87   2.15  75000   1.62
koi_rps_true[58]     1.8  6.4e-4   0.17   1.46   1.67   1.86   1.92   2.11  75000   2.99
koi_rps_true[59]    0.97  1.8e-3    0.5 3.8e-62   1.09   1.16   1.27   1.43  75000    9.7
koi_rps_true[60]    2.03  7.2e-4    0.2   1.52   1.96   2.07   2.15    2.3  75000   1.98
koi_rps_true[61]    2.31  1.1e-3   0.29   1.79   2.11   2.31   2.56    2.8  75000   2.01
koi_rps_true[62]     1.7  2.8e-4   0.08   1.59   1.64    1.7   1.74    1.9  75000   1.48
koi_rps_true[63]    1.75  3.7e-4    0.1   1.59   1.66   1.76   1.83   1.97  75000   1.54
koi_rps_true[64]    2.36  6.8e-4   0.19   1.98   2.21   2.39   2.52   2.66  75000   3.63
koi_rps_true[65]    1.77  8.4e-4   0.23   1.41   1.53   1.83   1.97   2.11  75000   2.16
koi_rps_true[66]    1.76  1.0e-3   0.28   1.19   1.68   1.81   1.95   2.18  75000   4.55
koi_rps_true[67]    1.41  3.2e-4   0.09   1.29   1.35   1.37   1.51   1.58  75000   1.95
koi_rps_true[68]    1.37  3.6e-4    0.1   1.17   1.29    1.4   1.45   1.52  75000   2.41
koi_rps_true[69]    2.19  1.2e-3   0.32   1.75    1.9   2.11   2.52   2.67  75000   3.44
koi_rps_true[70]    2.23  4.6e-4   0.13   2.04   2.13   2.22   2.31   2.57  75000   1.53
koi_rps_true[71]    1.64  5.5e-4   0.15   1.34   1.53   1.63   1.77   1.92  75000   2.13
koi_rps_true[72]    2.42  2.0e-4   0.05    2.3   2.39   2.41   2.46   2.51  75000   1.22
koi_rps_true[73]    1.06  4.1e-4   0.11   0.86    1.0   1.06   1.11   1.28  75000   2.75
koi_rps_true[74]    1.37  1.4e-4   0.04   1.29   1.34   1.37    1.4   1.44  75000   1.54
koi_rps_true[75]    2.38  7.3e-4    0.2   2.03   2.22   2.37   2.55   2.71  75000   2.26
koi_rps_true[76]     1.4  3.2e-4   0.09   1.22   1.35   1.39   1.45   1.59  75000    1.3
koi_rps_true[77]    0.88  1.5e-4   0.04   0.82   0.84   0.87    0.9   0.97  75000   2.11
koi_rps_true[78]    2.28  3.9e-4   0.11   2.08   2.22   2.28   2.35   2.51  75000   1.87
koi_rps_true[79]    1.79  5.2e-4   0.14   1.57   1.67   1.78   1.92   2.02  75000   2.15
koi_rps_true[80]    2.45  7.4e-4    0.2   2.21   2.27   2.37   2.63   2.86  75000   1.92
koi_rps_true[81]     2.1  5.8e-4   0.16   1.86   1.96   2.08   2.25   2.38  75000   1.88
koi_rps_true[82]    2.14  5.7e-4   0.16   1.77   2.04   2.17   2.26    2.4  75000   2.06
koi_rps_true[83]    2.25  5.6e-4   0.15   1.94   2.16   2.25   2.38   2.49  75000    1.8
koi_rps_true[84]    1.87  3.0e-4   0.08   1.66   1.83   1.87   1.92    2.0  75000   1.93
koi_rps_true[85]    1.52  3.7e-4    0.1   1.34   1.44   1.53   1.61   1.72  75000   2.25
koi_rps_true[86]    1.35  3.7e-4    0.1   1.22   1.26   1.34    1.4    1.6  75000   2.38
koi_rps_true[87]    1.82  7.9e-4   0.22    1.4   1.76   1.91   1.98   2.08  75000   4.49
koi_rps_true[88]    1.68  3.1e-4   0.08   1.54   1.61   1.68   1.74   1.84  75000   2.35
koi_rps_true[89]     1.5  3.2e-4   0.09   1.38   1.44   1.48   1.55    1.7  75000    1.4
koi_rps_true[90]    1.67  4.6e-4   0.13   1.47   1.58   1.68   1.75   1.91  75000   2.62
koi_rps_true[91]    2.35  1.2e-3   0.33   1.72   2.06   2.49   2.62   2.72  75000   3.55
koi_rps_true[92]    1.72  5.8e-4   0.16   1.47   1.58    1.7   1.83    2.0  75000   2.17
koi_rps_true[93]    1.92  4.2e-4   0.11    1.7   1.84   1.95   2.01   2.13  75000   1.33
koi_rps_true[94]    1.44  7.1e-4    0.2   1.08   1.28   1.49   1.59   1.72  75000    2.6
koi_rps_true[95]    1.03  2.1e-4   0.06   0.93   0.99   1.02   1.06   1.17  75000   1.75
koi_rps_true[96]    2.03  4.6e-4   0.13   1.74   1.96   2.06    2.1   2.22  75000   2.28
koi_rps_true[97]    1.49  4.0e-4   0.11   1.34   1.42   1.45   1.56   1.73  75000   2.06
koi_rps_true[98]    2.21  5.6e-4   0.15   1.95   2.09   2.18   2.29   2.57  75000   1.84
koi_rps_true[99]     1.1  3.8e-4    0.1   0.85   1.06   1.13   1.16   1.29  75000   1.87
koi_rps_true[100]    1.1  3.0e-4   0.08   0.94   1.07    1.1   1.17   1.22  75000   1.62
koi_rps_true[101]   1.29  5.8e-4   0.16   1.13   1.16    1.2   1.42   1.61  75000   4.85
koi_rps_true[102]   1.59  5.9e-4   0.16    1.2   1.48   1.63   1.71   1.87  75000   2.18
koi_rps_true[103]   1.04  4.3e-4   0.12    0.9   0.94    1.0   1.13   1.31  75000   1.94
koi_rps_true[104]   2.09  6.5e-4   0.18   1.71   1.99   2.14   2.21   2.35  75000   2.37
koi_rps_true[105]   1.59  4.7e-4   0.13   1.41   1.49   1.57   1.68    1.9  75000   2.04
koi_rps_true[106]   1.48  5.3e-4   0.14   1.26   1.37   1.47   1.62   1.72  75000   4.08
koi_rps_true[107]   1.16  1.5e-4   0.04   1.08   1.13   1.15   1.19   1.24  75000   1.21
koi_rps_true[108]   2.13  3.4e-4   0.09   1.89   2.09   2.14   2.19   2.32  75000   1.34
koi_rps_true[109]   1.36  5.8e-4   0.16   1.06   1.33   1.41   1.46   1.59  75000   2.59
koi_rps_true[110]   2.09  1.4e-3   0.37   1.48    1.8   2.01   2.47   2.59  75000   4.01
koi_rps_true[111]   1.57  7.1e-4   0.19   1.28   1.38   1.55   1.74   1.91  75000   3.08
koi_rps_true[112]   1.91  1.0e-3   0.29   1.32   1.82   1.99   2.14   2.26  75000   3.71
koi_rps_true[113]   1.97  6.7e-4   0.18   1.72   1.79   1.97   2.13   2.28  75000    2.7
koi_rps_true[114]   1.92  7.6e-4   0.21   1.56   1.74   1.97   2.09   2.21  75000   2.37
koi_rps_true[115]   2.17  5.6e-4   0.15   1.94   2.03   2.16   2.28   2.49  75000   2.18
koi_rps_true[116]    2.3  2.9e-4   0.08   2.13   2.23   2.31   2.36   2.43  75000   1.64
koi_rps_true[117]   2.13  8.6e-4   0.24   1.74   1.95   2.05   2.35   2.55  75000   2.47
koi_rps_true[118]   1.63  5.4e-4   0.15    1.4   1.52    1.6   1.71   1.98  75000   2.62
koi_rps_true[119]   1.55  7.4e-4    0.2   1.14   1.48   1.54   1.65   1.96  75000   2.49
koi_rps_true[120]   1.87  5.5e-4   0.15   1.57   1.74    1.9   2.01   2.08  75000   3.32
koi_rps_true[121]    1.1  3.3e-4   0.09   1.01   1.05   1.06   1.12    1.3  75000   3.46
koi_rps_true[122]   1.85  5.2e-4   0.14   1.59   1.74   1.87   1.94   2.13  75000    1.4
koi_rps_true[123]   1.89  4.9e-4   0.13   1.65   1.78    1.9   1.97   2.17  75000   1.92
koi_rps_true[124]    1.7  4.7e-4   0.13   1.51   1.59    1.7   1.78   2.01  75000   1.69
koi_rps_true[125]   1.59  7.2e-4    0.2   1.27   1.41   1.61   1.78   1.85  75000   2.83
koi_rps_true[126]   1.84  6.0e-4   0.16   1.62   1.69   1.82   1.99   2.16  75000   3.35
koi_rps_true[127]   1.09  2.3e-4   0.06   1.01   1.05   1.07   1.13   1.24  75000   1.53
koi_rps_true[128]   1.49  5.4e-4   0.15   1.27   1.34    1.5   1.59   1.81  75000   2.46
koi_rps_true[129]    1.5  5.2e-4   0.14   1.26   1.42    1.5   1.58   1.83  75000    2.2
koi_rps_true[130]    2.2  5.7e-4   0.16   1.84   2.12   2.22    2.3   2.46  75000   2.18
koi_rps_true[131]   1.14  3.4e-4   0.09   0.96   1.08   1.11    1.2   1.32  75000   1.28
koi_rps_true[132]   1.17  5.5e-4   0.15   0.92   1.08   1.14   1.25   1.45  75000   2.63
koi_rps_true[133]   1.63  5.3e-4   0.15   1.39   1.52   1.64   1.73    1.9  75000   2.05
koi_rps_true[134]   1.32  4.1e-4   0.11   1.14   1.25    1.3   1.35   1.63  75000   2.47
koi_rps_true[135]   2.14  6.0e-4   0.17   1.82   2.02   2.11   2.26   2.42  75000   2.69
koi_rps_true[136]   1.63  7.8e-4   0.21   1.26   1.43   1.69   1.83   1.94  75000   2.63
koi_rps_true[137]    2.0  7.8e-4   0.21   1.63   1.84   1.99   2.09    2.4  75000   3.94
koi_rps_true[138] 8.8e-3  6.2e-5   0.02 5.0e-62 2.1e-39  2.2e-9  1.0e-3   0.04  75000    nan
koi_rps_true[139]   2.06  4.7e-4   0.13   1.77   1.97   2.08   2.14   2.28  75000   1.23
koi_rps_true[140]    0.9  2.8e-3   0.76 4.5e-62 3.3e-27   1.24   1.52   1.93  75000   17.9
koi_rps_true[141]   0.78  2.9e-4   0.08   0.62   0.72   0.77   0.85   0.91  75000   2.01
koi_rps_true[142]   1.84  5.9e-4   0.16    1.4   1.81   1.85    1.9   2.19  75000   2.89
koi_rps_true[143]   2.18  7.2e-4    0.2   1.79    2.0   2.23   2.35   2.46  75000   2.45
koi_rps_true[144]   0.82  2.9e-4   0.08   0.65   0.77   0.84   0.88   0.93  75000   2.27
koi_rps_true[145]   1.47  3.9e-4   0.11   1.32   1.39   1.47   1.51   1.74  75000   1.39
koi_rps_true[146]   1.99  1.2e-3   0.33   1.36   1.87   2.09   2.18   2.55  75000   1.93
koi_rps_true[147]   1.95  4.7e-4   0.13    1.7   1.85    2.0   2.06   2.13  75000   2.14
koi_rps_true[148]   1.47  4.0e-4   0.11   1.22   1.41   1.49   1.56   1.62  75000    1.8
koi_rps_true[149]   0.89  4.2e-4   0.12   0.74    0.8   0.85   0.99   1.14  75000   2.62
koi_rps_true[150]   1.02  3.0e-4   0.08   0.85   0.97   1.02   1.07   1.16  75000   2.68
koi_rps_true[151]   1.26  3.3e-4   0.09   1.09   1.19   1.26   1.32   1.42  75000   2.27
koi_rps_true[152]   2.07  7.7e-4   0.21   1.73   1.87   2.08   2.26   2.44  75000   2.62
koi_rps_true[153]   0.03  2.2e-4   0.06 2.9e-62 4.5e-62 6.3e-62 1.6e-13   0.16  75000  12.84
koi_rps_true[154]   1.86  5.6e-4   0.15   1.48    1.8   1.85   1.98   2.08  75000    2.6
koi_rps_true[155]   1.83  1.1e-3   0.31   1.27   1.51   1.85   2.01    2.4  75000   2.45
koi_rps_true[156]    1.2  5.5e-4   0.15   0.95   1.06   1.21   1.32   1.43  75000   2.83
koi_rps_true[157]   2.06  6.1e-4   0.17   1.76   1.92   2.08   2.18   2.33  75000   2.31
koi_rps_true[158]    1.4  7.3e-4    0.2   1.03   1.22   1.39   1.59    1.7  75000    2.8
koi_rps_true[159] 1.7e-8 4.2e-10  1.2e-7 4.1e-62 6.6e-62 7.3e-62 1.2e-39  1.3e-7  75000    nan
koi_rps_true[160]   1.51  4.2e-4   0.11   1.34    1.4   1.51   1.58   1.76  75000   2.31
koi_rps_true[161]    1.6  3.1e-4   0.09   1.36   1.57   1.61   1.65   1.73  75000   1.57
koi_rps_true[162]    0.9  4.1e-4   0.11   0.73   0.84   0.91   0.99    1.1  75000   4.25
koi_rps_true[163]   1.21  1.2e-3   0.32   0.62   0.82   1.25   1.49   1.66  75000   2.89
koi_rps_true[164]   1.61  7.3e-4    0.2   1.26   1.46   1.65   1.79   1.88  75000   2.04
koi_rps_true[165]   1.36  3.4e-4   0.09   1.24   1.29   1.35   1.42   1.58  75000   2.47
koi_rps_true[166]   1.97  6.9e-4   0.19   1.57   1.84   2.01    2.1   2.26  75000   1.95
koi_rps_true[167]   2.09  4.4e-4   0.12   1.86    2.0    2.1   2.17   2.35  75000   1.27
koi_rps_true[168]   1.81  5.7e-4   0.16   1.61   1.69   1.79   1.86    2.3  75000    1.5
koi_rps_true[169]   0.77  1.7e-4   0.05   0.65   0.74   0.76    0.8   0.85  75000    2.0
koi_rps_true[170]   1.52  6.0e-4   0.16   1.16   1.41   1.55   1.65   1.77  75000   1.37
koi_rps_true[171]   1.36  2.2e-4   0.06   1.23   1.33   1.37   1.41   1.46  75000   1.62
koi_rps_true[172]    1.2  2.4e-4   0.07   1.06   1.15   1.21   1.26   1.29  75000   2.02
koi_rps_true[173]   1.32  5.3e-4   0.14   1.07   1.22   1.27   1.43   1.64  75000   2.52
koi_rps_true[174]   1.34  2.6e-3   0.71 3.9e-62   1.43   1.51   1.67   2.26  75000   10.7
koi_rps_true[175]   1.35  4.8e-4   0.13   1.01   1.26   1.34   1.45   1.55  75000   1.95
koi_rps_true[176]   0.83  1.4e-4   0.04   0.74   0.81   0.83   0.85   0.89  75000   1.62
koi_rps_true[177]   1.87  6.0e-4   0.17   1.67   1.74   1.81   1.99   2.21  75000   1.76
koi_rps_true[178]   1.91  5.1e-4   0.14   1.61   1.84   1.91   2.01   2.17  75000   1.26
koi_rps_true[179]   1.18  3.1e-4   0.08   0.98   1.11   1.19   1.23   1.31  75000   1.89
koi_rps_true[180] 5.3e-62 6.5e-65 1.8e-62 3.0e-62 4.2e-62 5.2e-62 6.3e-62 9.0e-62  75000    nan
koi_rps_true[181]   1.97  4.0e-4   0.11   1.75   1.88   1.99   2.05   2.16  75000    1.6
koi_rps_true[182]   1.34  2.8e-4   0.08   1.19   1.28   1.35   1.39   1.48  75000   1.44
koi_rps_true[183]   1.48  6.9e-4   0.19   1.14   1.41   1.53    1.6   1.79  75000   3.55
koi_rps_true[184]   1.85  5.1e-4   0.14   1.55   1.76   1.86   1.95   2.08  75000   1.46
koi_rps_true[185]   2.27  8.7e-4   0.24   1.68   2.14   2.29   2.47   2.58  75000   3.16
koi_rps_true[186]   1.26  7.2e-4    0.2   1.04   1.07   1.22   1.38   1.66  75000   3.91
koi_rps_true[187]   2.18  4.4e-4   0.12   1.97   2.08   2.17   2.27   2.41  75000   1.66
koi_rps_true[188]   1.42  2.2e-4   0.06   1.33   1.37   1.42   1.47   1.55  75000   2.53
koi_rps_true[189]   2.11  3.2e-4   0.09   1.97   2.05   2.11   2.19   2.27  75000   1.77
koi_rps_true[190]    1.0  4.3e-4   0.12    0.8   0.89   1.02   1.09   1.17  75000   3.02
koi_rps_true[191]   1.67  7.9e-4   0.22   1.35   1.51   1.63   1.85   2.05  75000   2.71
koi_rps_true[192]   0.99  2.3e-4   0.06   0.88   0.94   0.98   1.05   1.09  75000   2.12
koi_rps_true[193]   2.01  5.1e-4   0.14   1.81   1.92    2.0   2.05   2.41  75000   1.36
koi_rps_true[194]   1.28  3.0e-4   0.08   1.12   1.22   1.29   1.34   1.42  75000   1.76
koi_rps_true[195]   0.87  2.9e-4   0.08   0.74    0.8   0.85   0.92   1.03  75000   2.74
koi_rps_true[196]   1.71  4.7e-4   0.13    1.5    1.6   1.69   1.79   1.97  75000   1.98
koi_rps_true[197]   2.48  5.5e-4   0.15   2.16   2.38   2.47   2.58   2.76  75000   1.82
koi_rps_true[198] 5.4e-62 6.8e-65 1.9e-62 3.0e-62 4.3e-62 5.1e-62 6.1e-62 9.7e-62  75000    nan
koi_rps_true[199] 5.4e-62 7.6e-65 2.1e-62 3.1e-62 4.2e-62 5.2e-62 6.0e-62 9.9e-62  75000    nan
koi_rps_true[200]   0.86  2.3e-4   0.06   0.72   0.82   0.87    0.9   0.97  75000   2.23
koi_rps_true[201]   1.81  4.3e-4   0.12   1.57   1.74   1.81   1.87   2.01  75000    1.4
koi_rps_true[202]   1.73  5.8e-4   0.16    1.5   1.59   1.73   1.84   2.06  75000    2.9
koi_rps_true[203]   1.79  6.0e-4   0.16    1.5   1.65   1.78    1.9   2.09  75000   3.22
koi_rps_true[204]   1.93  4.2e-4   0.11   1.71   1.85   1.93    2.0   2.17  75000   1.62
koi_rps_true[205]   1.31  4.8e-4   0.13   1.08    1.2   1.33   1.39   1.57  75000   1.92
koi_rps_true[206]   1.22  5.1e-4   0.14   0.97   1.09   1.24   1.33   1.45  75000   3.67
koi_rps_true[207]   1.21  3.0e-4   0.08   1.07   1.17   1.21   1.26    1.4  75000   1.44
koi_rps_true[208]   1.58  5.1e-4   0.14   1.38   1.46   1.56   1.67   1.86  75000   2.48
koi_rps_true[209] 2.8e-37 2.1e-38 5.7e-36 3.0e-62 4.1e-62 5.1e-62 6.0e-62 1.7e-42  75000    nan
koi_rps_true[210]   1.45  4.9e-4   0.13   1.15   1.38   1.46   1.57   1.65  75000    2.3
koi_rps_true[211]    1.7  4.9e-4   0.13   1.43   1.61   1.69   1.79    2.0  75000    2.1
koi_rps_true[212]   1.38  5.4e-4   0.15   1.14   1.22    1.4   1.49   1.65  75000   2.93
koi_rps_true[213]   0.73  4.0e-4   0.11   0.55   0.66   0.75   0.82   0.91  75000   3.56
koi_rps_true[214]   1.98  6.5e-4   0.18   1.66   1.85   1.97   2.11   2.42  75000   1.69
koi_rps_true[215]   1.28  3.3e-4   0.09   1.08   1.25   1.29   1.33   1.49  75000   1.28
koi_rps_true[216]   2.17  1.4e-4   0.04   2.09   2.14   2.17    2.2   2.24  75000   1.22
koi_rps_true[217]   1.66  4.3e-4   0.12   1.43   1.58   1.67   1.74    1.9  75000   1.68
koi_rps_true[218]   1.17  3.1e-4   0.09   1.01   1.12   1.16   1.21   1.38  75000   1.69
koi_rps_true[219]   1.68  3.2e-3   0.87 2.1e-58   1.71   1.99   2.27   2.62  75000   9.85
koi_rps_true[220]   1.09  2.3e-3   0.62 4.5e-62   1.06   1.12   1.37   2.01  75000  23.54
koi_rps_true[221]   1.72  3.3e-4   0.09   1.55   1.64   1.73   1.79   1.86  75000   2.01
koi_rps_true[222]   1.41  2.8e-4   0.08   1.27   1.34   1.41   1.48   1.53  75000   1.76
lp__              2167.8    3.91 1070.8 1045.3 1150.6 2064.0 2671.6 4002.0  75000  34.99

Samples were drawn using NUTS(diag_e) at Thu Nov 16 03:28:58 2017.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).
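The Rhat values above (many well over 1, several reported as nan) confirm that these chains have not converged. The non-converged parameters can also be flagged programmatically; below is a minimal sketch assuming the PyStan 2.x fit.summary() interface, where 'summary' is an array with one row per parameter and 'summary_colnames' includes 'Rhat'.

import numpy as np

# Scan the fit summary for parameters whose split-chain Rhat exceeds 1.1
s = fit.summary()
rhat_col = list(s['summary_colnames']).index('Rhat')
rhat = s['summary'][:, rhat_col]
names = np.asarray(s['summary_rownames'])
bad = names[rhat > 1.1]
print("{0} of {1} parameters have Rhat > 1.1".format(len(bad), len(names)))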

In [ ]:
print(type(la))
print(type(a))
print(la.keys())

In [72]:
# Pack the posterior samples into a numpy array with the same structure as sampler.flatchain
posteriorSamples = np.asarray( np.hstack(( lnf0.reshape(-1,1)  , beta.reshape(-1,1) , alpha.reshape(-1,1)) ) )
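The occurrence-rate cells further below (In [84] and In [86]) integrate the posterior of $\Gamma$ over coarse radius and period bins with integrated_gamma, which is defined earlier in the notebook. As a reading aid only, here is a hypothetical sketch of such a function under this model's normalization; it assumes the (lnf0, beta, alpha) column ordering used in posteriorSamples above and the global x_rng and planet_radius_rng boundaries (the actual definition earlier in the notebook is authoritative).

import numpy as np

def integrated_gamma_sketch(theta, x1, x2, radius1, radius2, periodInsolationSwitch):
    # Hypothetical stand-in for integrated_gamma (illustration only)
    lnf0, beta, alpha = theta  # column order of posteriorSamples above
    # Fraction of the normalized radius power law falling in [radius1, radius2]
    radius_term = ((radius2**(alpha + 1) - radius1**(alpha + 1)) /
                   (planet_radius_rng[1]**(alpha + 1) - planet_radius_rng[0]**(alpha + 1)))
    # Fraction of the normalized period/insolation power law falling in [x1, x2];
    # the switch only labels whether x is period or insolation, the algebra is identical
    x_term = ((x2**(beta + 1) - x1**(beta + 1)) /
              (x_rng[1]**(beta + 1) - x_rng[0]**(beta + 1)))
    return np.exp(lnf0) * radius_term * x_term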

In [73]:
import corner
import scipy as spy
import scipy.stats  # explicitly import the stats submodule so spy.stats is available below

# Make corner plots
corner.corner(np.hstack((lnf0.reshape(-1,1), beta.reshape(-1,1), alpha.reshape(-1,1))), labels=[r"$\ln F$", r"$\beta$", r"$\alpha$"]);

# Examine results
[np.median(alpha), np.median(beta), np.median(lnf0)]
alphaMode = spy.stats.mode(alpha)
betaMode = spy.stats.mode(beta)
lnf0Mode = spy.stats.mode(lnf0)
print(alphaMode)
print(betaMode)
print(lnf0Mode)
print([np.median(alpha), np.median(beta), np.median(lnf0)])


ModeResult(mode=array([-4.9994373]), count=array([17]))
ModeResult(mode=array([ 0.20960064]), count=array([17]))
ModeResult(mode=array([ 1.32005594]), count=array([17]))
[-4.9967390645136209, 0.12924164318475118, 1.2369516979690389]

In [74]:
# Plot posterior distributions and trends for parameter samples
fig = fit.plot()



In [79]:
import stan_utility_copy
## I found Python stan_utility on Jeff Alstott's github: https://github.com/jeffalstott/pystan_time_series
## I am referring to the Stan Case Studies for HMC diagnostics in pystan:
## http://mc-stan.org/users/documentation/case-studies/pystan_workflow.html

help(stan_utility_copy)


Help on module stan_utility_copy:

NAME
    stan_utility_copy

FILE
    /Users/meganshabram/Dropbox/NASA_Postdoctoral_Program_Fellowship/Stan_Kepler_Populations/stan_utility_copy.py

DESCRIPTION
    #Copyright 2017 Columbia University, 2017 Jeff Alstott
    #
    #[Standard BSD 3-clause redistribution conditions and warranty disclaimer abridged.]

FUNCTIONS
    check_div(fit)
        Check transitions that ended with a divergence
    
    check_energy(fit)
        Checks the energy Bayesian fraction of missing information (E-BFMI)
    
    check_treedepth(fit, max_depth=10)
        Check transitions that ended prematurely due to maximum tree depth limit
    
    compile_model(filename, model_name=None, **kwargs)
        This will automatically cache models - great if you're just running a
        script on the command line.
        
        See http://pystan.readthedocs.io/en/latest/avoiding_recompilation.html
    
    partition_div(fit)
        Returns parameter arrays separated into divergent and non-divergent transitions



In [105]:
stan_utility_copy.check_treedepth(fit)
#stan_utility_copy.check_energy(fit)
stan_utility_copy.check_div(fit)


"0 of 75000 iterations saturated the maximum tree depth of 10 (0%)"
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-105-954c3ea28de2> in <module>()
      1 stan_utility_copy.check_treedepth(fit)
      2 #stan_utility_copy.check_energy(fit)
----> 3 stan_utility_copy.check_div(fit)

/Users/meganshabram/Dropbox/NASA_Postdoctoral_Program_Fellowship/Stan_Kepler_Populations/stan_utility_copy.py in check_div(fit)
     16     """Check transitions that ended with a divergence"""
     17     sampler_params = fit.get_sampler_params(inc_warmup=False)
---> 18     divergent = [x for y in sampler_params for x in y['divergent__']]
     19     n = sum(divergent)
     20     N = len(divergent)

KeyError: 'divergent__'
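The KeyError above is most likely a naming mismatch between Stan/PyStan versions: older releases expose the divergence flag in the sampler parameters as n_divergent__, whereas the utility expects the newer name divergent__. A hedged workaround, assuming the standard get_sampler_params interface, is to look for whichever key is present:

# Count divergent transitions regardless of the sampler-parameter key name
# ('divergent__' in newer Stan releases, 'n_divergent__' in older ones)
sampler_params = fit.get_sampler_params(inc_warmup=False)
key = 'divergent__' if 'divergent__' in sampler_params[0] else 'n_divergent__'
divergent = [x for chain in sampler_params for x in chain[key]]
print("{0} of {1} post-warmup iterations ended with a divergence".format(
    int(sum(divergent)), len(divergent)))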

In [83]:
'''
# Archive or retrieve data with pickle
import pickle

# Stan data to be archived
extract = fit.extract()

# Pickle the data
with open('pystan_results_'+str(nSamplesPerChain)+'_samples_per_chain_'+str(nChains)+'_chains_'+stellarTypeSwitch+'_'+catalogSwitch+'.pkl', 'wb') as handle:
    pickle.dump(extract,handle)
    
# Retrieve pickled data
# with open('pystan_results.pkl', 'rb') as handle:
#      retrieved_results = pickle.load(handle)
#
'''



In [84]:
#=====================================================================
# Gather and save occurrence rate data from the Stan sampler.
# This could be made into a function taking as inputs:
#   - the posterior samples array
#   - the names of the output files
#   - nSamplesPerChain
#   - nChains
samples = posteriorSamples

# Define radius and period bin edges
radiusBinEdges = [0.67, 1.0, 1.5, 2.3, 3.4, 5.1, 7.6, 11, 17]
periodBinEdges = [10, 20, 40, 80, 160, 320, 640]

# Initialize dictionary
# occurrenceRateData = dict()

# Initialize lists for median, lower and upper occ rate limits
listMedian = [None]*(len(radiusBinEdges)-1)
listLowerCredibleInterval = [None]*(len(radiusBinEdges)-1)
listUpperCredibleInterval = [None]*(len(radiusBinEdges)-1)

# Loop over desired period and radius bins and compute occurrence rates
for iRadius in range(len(radiusBinEdges)-1):
    # Initialize lists for rows
    listMedianRow = []
    listLowerCredibleIntervalRow = []
    listUpperCredibleIntervalRow = []
    for iPeriod in range(len(periodBinEdges)-1):
        # Integrate the gamma posterior distribution over the selected phase space bin
        # NOTE -- This gives the exoplanet occurrence rate in that phase space bin
        int_gamma_samples = [None]*(len(samples))
        for i, theta in enumerate(samples):
            int_gamma_samples[i] = integrated_gamma(theta,periodBinEdges[iPeriod],periodBinEdges[iPeriod+1],
                                                   radiusBinEdges[iRadius],radiusBinEdges[iRadius+1],periodInsolationSwitch)

        # Print result for this bin
        print("\nRadius Bin "+str(radiusBinEdges[iRadius])+" to "+str(radiusBinEdges[iRadius+1])+" , Period Bin "+str(periodBinEdges[iPeriod])+" to "+str(periodBinEdges[iPeriod+1]))
        print("     Occurrence rate 16, 50, 84 percentile range : {0} ".format(np.percentile(int_gamma_samples,[16,50,84])))

        # Populate the dictionary with the result for this bin
        # binKey = "bin_" + str(radiusBinEdges[iRadius]) + "_" + str(periodBinEdges[iPeriod])
        # occurrenceRateData[binKey] = np.percentile(int_gamma_samples,[16,50,84])
        
        # Accumulate a row vector of median occurrence rates and their upper and lower credible intervals
        # for period bins at this radius
        listMedianRow.append(np.percentile(int_gamma_samples,[50]))
        listLowerCredibleIntervalRow.append(np.percentile(int_gamma_samples,[16]))
        listUpperCredibleIntervalRow.append(np.percentile(int_gamma_samples,[84]))
              
    # Append the row vector of results into the arrays for the median and lower and upper occ rate limits
    listMedian[iRadius] = listMedianRow
    listLowerCredibleInterval[iRadius] = listLowerCredibleIntervalRow
    listUpperCredibleInterval[iRadius] = listUpperCredibleIntervalRow
    
        
# Save the results in a dictionary
# with open('stan_occurrence_rate_data_'+stellarTypeSwitch+'_'+catalogSwitch+'.pkl', 'wb') as handle:
#    pickle.dump(occurrenceRateData, handle)
    
# Save the .csv files
np.savetxt('eta_stan_occurrence_rate_median_'+str(nSamplesPerChain)+'_samples_per_chain_'+str(nChains)+'_chains_'+stellarTypeSwitch+'_'+catalogSwitch+'.csv', np.array(listMedian), delimiter=",",fmt='%.5f')
np.savetxt('sigma_n_stan_occurrence_rate_lower_credible_interval_'+str(nSamplesPerChain)+'_samples_per_chain_'+str(nChains)+'_chains_'+stellarTypeSwitch+'_'+catalogSwitch+'.csv', np.array(listLowerCredibleInterval), delimiter=",",fmt='%.5f')
np.savetxt('sigma_p_stan_occurrence_rate_upper_credible_interval_'+str(nSamplesPerChain)+'_samples_per_chain_'+str(nChains)+'_chains_'+stellarTypeSwitch+'_'+catalogSwitch+'.csv', np.array(listUpperCredibleInterval), delimiter=",",fmt='%.5f')


Radius Bin 0.67 to 1.0 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [ 0.09725811  0.11307085  0.12727271] 

Radius Bin 0.67 to 1.0 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [ 0.22741211  0.24683187  0.26836057] 

Radius Bin 0.67 to 1.0 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.51195462  0.54697844  0.582913  ] 

Radius Bin 0.67 to 1.0 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 1.0744458   1.21831242  1.343642  ] 

Radius Bin 0.67 to 1.0 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 2.22744119  2.67024709  3.17075837] 

Radius Bin 0.67 to 1.0 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 4.61197297  5.88748294  7.55906637] 

Radius Bin 1.0 to 1.5 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [ 0.01974689  0.02292867  0.02582797] 

Radius Bin 1.0 to 1.5 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [ 0.04615807  0.05006612  0.05441887] 

Radius Bin 1.0 to 1.5 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.10400902  0.11087511  0.11815551] 

Radius Bin 1.0 to 1.5 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 0.21816474  0.24687457  0.27232553] 

Radius Bin 1.0 to 1.5 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.45224673  0.5427048   0.64269205] 

Radius Bin 1.0 to 1.5 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.93651333  1.19508215  1.5324062 ] 

Radius Bin 1.5 to 2.3 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [ 0.00399064  0.00462974  0.00521715] 

Radius Bin 1.5 to 2.3 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [ 0.00933113  0.01011277  0.01098872] 

Radius Bin 1.5 to 2.3 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.02103524  0.02237763  0.02384444] 

Radius Bin 1.5 to 2.3 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 0.04408707  0.04980387  0.05494302] 

Radius Bin 1.5 to 2.3 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.09139435  0.10976952  0.12968752] 

Radius Bin 1.5 to 2.3 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.18939603  0.24167603  0.30931236] 

Radius Bin 2.3 to 3.4 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [ 0.00069779  0.00080948  0.00091303] 

Radius Bin 2.3 to 3.4 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [ 0.00163336  0.00176936  0.00192173] 

Radius Bin 2.3 to 3.4 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.0036843   0.00391391  0.00416763] 

Radius Bin 2.3 to 3.4 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 0.00771591  0.00870044  0.00960067] 

Radius Bin 2.3 to 3.4 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.0159996   0.0192014   0.02266504] 

Radius Bin 2.3 to 3.4 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.0331665   0.04235463  0.05407376] 

Radius Bin 3.4 to 5.1 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [ 0.00014848  0.00017237  0.00019451] 

Radius Bin 3.4 to 5.1 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [ 0.00034807  0.00037684  0.00040904] 

Radius Bin 3.4 to 5.1 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.00078511  0.00083337  0.00088668] 

Radius Bin 3.4 to 5.1 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 0.00164453  0.00185122  0.00204232] 

Radius Bin 3.4 to 5.1 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.00340752  0.00409301  0.00482247] 

Radius Bin 3.4 to 5.1 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.00706716  0.00903411  0.01150709] 

Radius Bin 5.1 to 7.6 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [  2.91689076e-05   3.38867586e-05   3.82584487e-05] 

Radius Bin 5.1 to 7.6 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [  6.84413188e-05   7.40978754e-05   8.03747158e-05] 

Radius Bin 5.1 to 7.6 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [ 0.00015445  0.00016381  0.0001742 ] 

Radius Bin 5.1 to 7.6 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [ 0.00032357  0.00036411  0.00040117] 

Radius Bin 5.1 to 7.6 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.00067007  0.00080483  0.00094716] 

Radius Bin 5.1 to 7.6 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.00139049  0.00177638  0.00226081] 

Radius Bin 7.6 to 11 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [  5.73426090e-06   6.66858640e-06   7.53319541e-06] 

Radius Bin 7.6 to 11 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [  1.34609521e-05   1.45856549e-05   1.58065062e-05] 

Radius Bin 7.6 to 11 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [  3.04060243e-05   3.22449675e-05   3.42481394e-05] 

Radius Bin 7.6 to 11 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [  6.37280371e-05   7.17089722e-05   7.88711213e-05] 

Radius Bin 7.6 to 11 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [ 0.00013191  0.00015854  0.0001862 ] 

Radius Bin 7.6 to 11 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [ 0.00027376  0.00034947  0.00044458] 

Radius Bin 11 to 17 , Period Bin 10 to 20
     Occurrence rate 16, 50, 84 percentile range : [  1.39688261e-06   1.62631416e-06   1.83801170e-06] 

Radius Bin 11 to 17 , Period Bin 20 to 40
     Occurrence rate 16, 50, 84 percentile range : [  3.28151182e-06   3.55761834e-06   3.85279421e-06] 

Radius Bin 11 to 17 , Period Bin 40 to 80
     Occurrence rate 16, 50, 84 percentile range : [  7.41326978e-06   7.86471923e-06   8.34394669e-06] 

Radius Bin 11 to 17 , Period Bin 80 to 160
     Occurrence rate 16, 50, 84 percentile range : [  1.55525097e-05   1.74975261e-05   1.92135731e-05] 

Radius Bin 11 to 17 , Period Bin 160 to 320
     Occurrence rate 16, 50, 84 percentile range : [  3.21756136e-05   3.86946934e-05   4.53628493e-05] 

Radius Bin 11 to 17 , Period Bin 320 to 640
     Occurrence rate 16, 50, 84 percentile range : [  6.67982682e-05   8.52375902e-05   1.08323304e-04] 

In [85]:
PyStan_flatchain = np.hstack((lnf0.reshape(-1,1), beta.reshape(-1,1), alpha.reshape(-1,1)))
print(PyStan_flatchain.shape)
print(sampler.flatchain.shape)


(75000, 3)
(64000, 3)

In [86]:
# Integrate the gamma posterior distribution over the selected phase space bin
# NOTE -- This gives the exoplanet occurrence rate in that phase space bin!
for i, theta in enumerate(PyStan_flatchain):
    int_gamma_samples[i] = integrated_gamma(theta,x1,x2,radius1,radius2,periodInsolationSwitch)

# print occurrence rate for this parameter space region
print("16, 50, 84 percentile range for integrated gamma = {0} ".format(np.percentile(int_gamma_samples,[16,50,84])))

# Plot the posterior of int gamma_samples over selected planet radius and period range             
pl.hist(int_gamma_samples, 100, histtype="step", color="k")
pl.gca().set_yticklabels([])
if(periodInsolationSwitch == 'I'):
    pl.title("Integrated occurrence rate over radius and insolation")
else:
    pl.title("Integrated occurrence rate over radius and period")
    
pl.xlabel(r"Integrated  $\Gamma$ (i.e. occurrence rate) in parameter space region [x1, x2, radius1, radius2] = {0}".format([x1,x2,radius1,radius2]));    
# pl.xlabel(r"$\log_{10}\Gamma_\oplus = \left. \log_{10}\mathrm{d}N / \mathrm{d}\ln P \, \mathrm{d}\ln R_p \right |_\oplus$");
pl.xticks
pl.gca().set_xlim([0,1])


16, 50, 84 percentile range for integrated gamma = [ 0.45224673  0.5427048   0.64269205] 
Out[86]:
(0, 1)

In [ ]: