# Biased Likelihood Integrals and Samples

David Thomas 2017/03/10

Contents:

# Background

We have observed strange posterior sample behavior from our likelihood calculation and use this notebook to examine the individual integrals. We begin by fixing a set of hyperparameters and generating the distribution of integrals in the log likelihood calculation.



In [205]:

% matplotlib inline







In [206]:

from bigmali.grid import Grid
from bigmali.likelihood import BiasedLikelihood
from bigmali.prior import TinkerPrior
from bigmali.hyperparameter import get




In [207]:

import pandas as pd

grid = Grid()
prior = TinkerPrior(grid)
lum_obs = data.lum_obs[:10 ** 4]
z = data.z[:10 ** 4]
bl = BiasedLikelihood(grid, prior, lum_obs, z)



# Sample Size Sensitivity

Below we collect the values for the integrals corresponding to a subset of $10^4$ galaxies (with corresponding lum_obs and z). We collect these values when 100, 1,000, and 10,000 draws from the biased distribution are used in order to compare how sensitive the integrals are to the number of biased distribution samples. We also can examine the runtime and see that from 1,000 to 10,000 samples the runtime grows close to linearly.



In [25]:

hypers = get()
%time vals100 = map(lambda lum_obs,z: bl.single_integral(*(hypers + [lum_obs, z]), nsamples=100), lum_obs, z)
%time vals1000 = map(lambda lum_obs,z: bl.single_integral(*(hypers + [lum_obs, z]), nsamples=1000), lum_obs, z)
%time vals10000 = map(lambda lum_obs,z: bl.single_integral(*(hypers + [lum_obs, z]), nsamples=10000), lum_obs, z)




CPU times: user 2.89 s, sys: 23.3 ms, total: 2.91 s
Wall time: 2.94 s
CPU times: user 7.71 s, sys: 12 ms, total: 7.72 s
Wall time: 7.73 s
CPU times: user 55.1 s, sys: 69.7 ms, total: 55.2 s
Wall time: 55.3 s




In [42]:

import matplotlib.pyplot as plt
font = {'family' : 'normal',
'weight' : 'normal',
'size'   : 16}
plt.rc('font', **font)

plt.hist(vals100, alpha=0.5, label='nsamples=100', bins=20)
plt.hist(vals1000, alpha=0.5, label='nsamples=1000', bins=20)
plt.hist(vals10000, alpha=0.5, label='nsamples=10000', bins=20)
plt.title('Distribution of Integral Values At Different Sample Sizes')
plt.ylabel('Density')
plt.xlabel('Value')
plt.gcf().set_size_inches((10,6))
plt.legend(loc=2);






# Single Integral Convergence

This result is a bit surprising. It suggests we may not need many samples to characterize an integral. To examine this further let's isolate a single integral and see how its value changes as we increase the number of samples. We do this for three different galaxies to limit the chance we are making conclusions on outliers



In [109]:

single_lum_obs0 = lum_obs[0]
single_z0 = z[0]
single_lum_obs1 = lum_obs[100]
single_z1 = z[100]
single_lum_obs2 = lum_obs[5000]
single_z2 = z[5000]
space = np.linspace(1, 1000, 1000)
single_vals = np.zeros((1000,3))
for i, nsamples in enumerate(space):
single_vals[i,0] = bl.single_integral(*(hypers + [single_lum_obs0, single_z0]))
single_vals[i,1] = bl.single_integral(*(hypers + [single_lum_obs1, single_z1]))
single_vals[i,2] = bl.single_integral(*(hypers + [single_lum_obs2, single_z2]))

plt.subplot(311)
plt.plot(space, single_vals[:,0])
plt.title('Integral 0')
plt.xlabel('Biased Distribution Samples')
plt.ylabel('Log-Likelihood')

plt.subplot(312)
plt.plot(space, single_vals[:,1])
plt.title('Integral 1')
plt.xlabel('Biased Distribution Samples')
plt.ylabel('Log-Likelihood')

plt.subplot(313)
plt.plot(space, single_vals[:,2])
plt.title('Integral 2')
plt.xlabel('Biased Distribution Samples')
plt.ylabel('Log-Likelihood')
plt.gcf().set_size_inches((10,6))
plt.tight_layout()






# Individual Biased Samples

A few things are concerning about these results. First, they bobble within a window and never converge. Second, even one sample provides a reasonable approximation. We will need to start examining the individual weights of the biased samples. Below we get the internal weights used in a single integral. The keys of the dataframe correspond to the distributions here:

\begin{align*} v1 &= \ln P(L_{obs}|L^s, \sigma_L)\\ v2 &= \ln P(L^s|M^s, z, \alpha, S)\\ v3 &= \ln P(M^s|z)\\ v4 &= \ln Q(L^s|L_{obs}, \sigma_L)\\ v5 &= \ln Q(M^s|L^s, z, \alpha, S_M)\\ out &= logsumexp(v1 + v2 + v3 - v4 - v5) - log(nsamples)\\ \end{align*}
• v1 values pass sanity check
• v2 values pass sanity check
• v3 values pass sanity check
• v4 values pass sanity check
• v5 values pass sanity check


In [79]:

single_lum_obs0 = lum_obs[0]
single_z0 = z[0]
internals = bl.single_integral_samples_and_weights(*(hypers + [single_lum_obs0, single_z0]))
print single_lum_obs0
print single_z0
internals




13748.7935573
2.07709

Out[79]:

lum_samples
mu_mass
mass_samples
mu_lum
v1
v2
v3
v4
v5
out

0
14035.575642
2.074165e+11
1.952559e+11
13731.433262
-7.537149
-8.626620
-28.807932
-7.557793
-26.800461
-11.285231

1
14175.528071
2.131704e+11
1.352225e+12
27698.886782
-7.638769
-17.727312
-33.134140
-7.669335
-30.895619
-11.285231

2
14074.452676
2.090048e+11
6.254595e+11
20943.311962
-7.561520
-11.823416
-31.343171
-7.584930
-28.723536
-11.285231

3
13051.239089
1.697284e+11
9.788367e+10
10689.907120
-7.994128
-9.352014
-27.367470
-7.942060
-26.299559
-11.285231

4
14133.777575
2.114434e+11
8.995077e+10
10367.272808
-7.604446
-10.571723
-27.193491
-7.632062
-26.485912
-11.285231

5
13443.870696
1.841854e+11
2.217755e+11
14380.406497
-7.552514
-8.665789
-29.077917
-7.530086
-26.947356
-11.285231

6
14411.801070
2.231133e+11
1.969898e+12
31747.499868
-7.895525
-21.292894
-34.053489
-7.942621
-32.115103
-11.285231

7
13806.084836
1.981974e+11
7.151826e+10
9540.119042
-7.455371
-11.370847
-26.723765
-7.459529
-26.452047
-11.285231

8
14591.984576
2.308911e+11
4.264627e+11
18227.964223
-8.160470
-9.659634
-30.491133
-8.219991
-27.817881
-11.285231

9
13925.771735
2.029721e+11
3.794615e+10
7581.375816
-7.484630
-16.107135
-25.441055
-7.497420
-26.941551
-11.285231

10
13650.721684
1.921070e+11
4.264120e+11
18227.179324
-7.462162
-10.284352
-30.490881
-7.455003
-27.982038
-11.285231

11
12568.934103
1.529870e+11
3.566952e+10
7413.183717
-9.061951
-14.159706
-25.316445
-8.972228
-26.441328
-11.285231

12
13786.145113
1.974089e+11
6.341664e+11
21048.562079
-7.453385
-12.230603
-31.374374
-7.456098
-28.839026
-11.285231

13
13873.785792
2.008893e+11
2.642090e+11
15322.907891
-7.468293
-8.805460
-29.451556
-7.477343
-27.148137
-11.285231

14
14112.556314
2.105690e+11
7.083544e+11
21910.106461
-7.588299
-12.546567
-31.625370
-7.614413
-29.019172
-11.285231

15
13337.744093
1.802033e+11
6.317653e+11
21019.629919
-7.636175
-12.761846
-31.365983
-7.605822
-28.969313
-11.285231

16
14422.775677
2.235822e+11
2.114447e+11
14133.808394
-7.909981
-8.652407
-28.976512
-7.957839
-26.879774
-11.285231

17
13439.181794
1.840083e+11
2.725480e+11
15496.537329
-7.555668
-8.984952
-29.518197
-7.532891
-27.229421
-11.285231

18
13914.724136
2.025283e+11
2.507074e+11
15034.221981
-7.480696
-8.729676
-29.339518
-7.492692
-27.076977
-11.285231

19
15279.512494
2.621505e+11
9.852154e+10
10715.114380
-9.680571
-11.255598
-27.380645
-9.786133
-26.720893
-11.285231

20
14796.092316
2.399078e+11
9.457673e+10
10557.516556
-8.529778
-10.980068
-27.296636
-8.603190
-26.622202
-11.285231

21
13512.808997
1.868019e+11
1.735149e+11
13156.073608
-7.511861
-8.593446
-28.559243
-7.494548
-26.683550
-11.285231

22
12920.048497
1.650647e+11
6.240824e+10
9080.230003
-8.224955
-11.056630
-26.446196
-8.162784
-26.256893
-11.285231

23
14406.168214
2.228729e+11
1.094726e+11
11132.550830
-7.888191
-9.990628
-27.598688
-7.934897
-26.539720
-11.285231

24
13437.065523
1.839284e+11
1.810617e+11
13360.746348
-7.557107
-8.573966
-28.648575
-7.534173
-26.722832
-11.285231

25
13625.720542
1.911383e+11
3.226702e+11
16474.759354
-7.468083
-9.318376
-29.882498
-7.459091
-27.474183
-11.285231

26
13718.624426
1.947540e+11
3.986029e+11
17786.858268
-7.452878
-9.961901
-30.342778
-7.450681
-27.836805
-11.285231

27
11727.276818
1.263688e+11
1.068307e+11
11034.373491
-12.510250
-8.512431
-27.548087
-12.351216
-26.212956
-11.285231

28
14365.887897
2.211585e+11
1.994056e+11
13836.542787
-7.837450
-8.668733
-28.852569
-7.881355
-26.825970
-11.285231

29
13057.676655
1.699594e+11
4.613399e+10
8137.967340
-7.983906
-13.078844
-25.834077
-7.932331
-26.432705
-11.285231

...
...
...
...
...
...
...
...
...
...
...

70
13440.547813
1.840599e+11
3.031802e+11
16106.747066
-7.554744
-9.237698
-29.747681
-7.532069
-27.395962
-11.285231

71
12609.513082
1.543531e+11
1.135357e+11
11280.635493
-8.948347
-8.761260
-27.673747
-8.861847
-26.315715
-11.285231

72
12693.101690
1.571914e+11
3.068687e+11
16177.525618
-8.728472
-9.709600
-29.773808
-8.648580
-27.533773
-11.285231

73
13678.908365
1.932030e+11
1.944617e+11
13711.155550
-7.457106
-8.591259
-28.799456
-7.452010
-26.794099
-11.285231

74
12458.541303
1.493099e+11
3.542790e+11
17042.598671
-9.394122
-10.488486
-30.085398
-9.295577
-27.866932
-11.285231

75
13710.449263
1.944341e+11
4.926316e+11
19206.665620
-7.453473
-10.898036
-30.809424
-7.450680
-28.271168
-11.285231

76
13635.335763
1.915105e+11
8.428594e+10
10125.608001
-7.465646
-10.384074
-27.059908
-7.457359
-26.384809
-11.285231

77
13121.810582
1.722715e+11
1.983662e+11
13810.347693
-7.887629
-8.602612
-28.841344
-7.840954
-26.826555
-11.285231

78
14295.365802
2.181773e+11
6.979112e+11
21792.424455
-7.755868
-12.240470
-31.591287
-7.794853
-28.928533
-11.285231

79
13222.990701
1.759598e+11
5.570756e+10
8713.858994
-7.756020
-12.084512
-26.215464
-7.717026
-26.382029
-11.285231

80
14113.745440
2.106179e+11
2.551620e+11
15130.542012
-7.589181
-8.720587
-29.376890
-7.615379
-27.089056
-11.285231

81
13805.740441
1.981837e+11
5.384971e+10
8607.343272
-7.455330
-13.127516
-26.146873
-7.459463
-26.585681
-11.285231

82
13961.032136
2.043926e+11
3.620600e+11
17177.384116
-7.498847
-9.483278
-30.132710
-7.514166
-27.622767
-11.285231

83
14318.385851
2.191476e+11
1.910269e+11
13622.840452
-7.781477
-8.687125
-28.761555
-7.822070
-26.788200
-11.285231

84
14257.740927
2.165974e+11
1.796620e+11
13323.201145
-7.716161
-8.725798
-28.632020
-7.752510
-26.737060
-11.285231

85
14380.511481
2.217800e+11
7.835900e+11
22726.905640
-7.855525
-12.889250
-31.855435
-7.900448
-29.197066
-11.285231

86
12937.161993
1.656684e+11
1.286716e+11
11804.319683
-8.192387
-8.705702
-27.933836
-8.131540
-26.421565
-11.285231

87
13209.297212
1.754577e+11
3.306709e+11
16621.726879
-7.772395
-9.627088
-29.935532
-7.732365
-27.579401
-11.285231

88
14124.733376
2.110704e+11
2.317434e+11
14611.494111
-7.597457
-8.646499
-29.171417
-7.624433
-26.975000
-11.285231

89
13997.128003
2.058533e+11
8.231101e+10
10038.928027
-7.516002
-10.854681
-27.011265
-7.533903
-26.466693
-11.285231

90
13470.336192
1.851871e+11
1.468467e+11
12383.618867
-7.535644
-8.719274
-28.208985
-7.515183
-26.547320
-11.285231

91
13582.589890
1.894743e+11
7.518737e+10
9714.766893
-7.481497
-10.861981
-26.825914
-7.469334
-26.385049
-11.285231

92
13158.028012
1.735860e+11
2.225101e+11
14397.660615
-7.837689
-8.716718
-29.085073
-7.793770
-26.967870
-11.285231

93
13292.593955
1.785260e+11
2.100250e+11
14099.324594
-7.679645
-8.632903
-28.962260
-7.645901
-26.887792
-11.285231

94
13200.221946
1.751255e+11
2.713736e+11
15472.292380
-7.783495
-9.067057
-29.508905
-7.742777
-27.248874
-11.285231

95
14289.103974
2.179139e+11
6.228961e+10
9073.967630
-7.749075
-12.816530
-26.442309
-7.787621
-26.649212
-11.285231

96
14111.841785
2.105396e+11
5.781469e+10
8831.960588
-7.587771
-13.076318
-26.290840
-7.613834
-26.639351
-11.285231

97
14601.275342
2.312967e+11
5.797852e+11
20375.306866
-8.175705
-10.908183
-31.172779
-8.235863
-28.421518
-11.285231

98
14289.967662
2.179502e+11
1.885593e+11
13558.767120
-7.750007
-8.690799
-28.733941
-7.788614
-26.776543
-11.285231

99
13875.092701
2.009415e+11
3.411082e+11
16810.082693
-7.468636
-9.352099
-30.002575
-7.477780
-27.533455
-11.285231

100 rows × 10 columns




In [130]:

import seaborn as sns

cols = ['v1','v2','v3','v4','v5']
plt.title('Correlation Between Various Weights')
sns.heatmap(internals[cols].corr(), xticklabels=cols, yticklabels=cols);






# Large Contributors



In [120]:

font = {'family' : 'normal',
'weight' : 'normal',
'size'   : 16}

z_z1 = z[z < 1]
z_z2 = z[z > 1]
lum_obs_z1 = lum_obs[z < 1]
lum_obs_z2 = lum_obs[z > 1]
vals100_z1 = np.array(vals100)[np.where(z < 1)]
vals100_z2 = np.array(vals100)[np.where(z > 1)]

plt.subplot(121)
plt.hist(z, bins=100, alpha=0.6)
plt.xlabel('Value')
plt.ylabel('Count')
plt.title('z Histogram')

plt.subplot(122)
plt.scatter(lum_obs_z1, vals100_z1, label='z < 1', alpha=0.1)
plt.scatter(lum_obs_z2, vals100_z2, label='z > 1', alpha=0.1, color='green')
plt.legend()
plt.ylabel('Log-Likelihood')
plt.xlabel('Luminosity')
plt.title('Log-Likelihood vs Luminosity')

plt.gcf().set_size_inches((14,5))
plt.tight_layout()






# Changes At Higher Posterior Points

The next question I want to explore is: How do our biased distributions change as we move towards more probable posterior samples?



In [133]:

hypers2 = hypers
hypers2[-1] = 1
internals2 = bl.single_integral_samples_and_weights(*(hypers2 + [single_lum_obs0, single_z0]))




In [141]:

plt.scatter(np.log(internals['mass_samples']), np.log(internals['lum_samples']), color='blue', label='S = 0.155')
plt.scatter(np.log(internals2['mass_samples']), np.log(internals2['lum_samples']), color='green', label='S = 1')
plt.scatter(np.log(data['mass'][:100]), np.log(data['lum'][:100]), color='red', label='True')
plt.gca().axhline(np.log(single_lum_obs0), color='k', label='lum_obs')
plt.title('Scatter Plots for S=0.155, S=1')
plt.ylabel('Log-Luminosity')
plt.xlabel('Log-Mass')
plt.legend();







In [142]:

hypers3 = hypers
hypers3[-1] = 10
internals3 = bl.single_integral_samples_and_weights(*(hypers3 + [single_lum_obs0, single_z0]))
plt.scatter(np.log(internals['mass_samples']), np.log(internals['lum_samples']), color='blue', label='S = 0.155')
plt.scatter(np.log(internals3['mass_samples']), np.log(internals3['lum_samples']), color='green', label='S = 10')
plt.scatter(np.log(data['mass'][:100]), np.log(data['lum'][:100]), color='red', label='True')
plt.gca().axhline(np.log(single_lum_obs0), color='k', label='lum_obs')
plt.title('Scatter Plots for S=0.155, S=1')
plt.ylabel('Log-Luminosity')
plt.xlabel('Log-Mass')
plt.legend();






Need to think about handling out of bounds of prior more gracefully. Look forward to discussing with Phil ...



In [161]:

print np.sum(vals100)
print np.sum(vals100_s10)




-102077.239235
-80983.1144049




In [160]:

font = {'family' : 'normal',
'weight' : 'normal',
'size'   : 16}

vals100_s10 = map(lambda lum_obs,z: bl.single_integral(*(hypers3 + [lum_obs, z]), nsamples=100), lum_obs, z)

z_z1 = z[z < 1]
z_z2 = z[z > 1]
lum_obs_z1 = lum_obs[z < 1]
lum_obs_z2 = lum_obs[z > 1]
vals100_s10_z1 = np.array(vals100)[np.where(z < 1)]
vals100_s10_z2 = np.array(vals100)[np.where(z > 1)]

plt.subplot(121)
plt.hist(z, bins=100, alpha=0.6)
plt.xlabel('Value')
plt.ylabel('Count')
plt.title('z Histogram')

plt.subplot(122)
plt.scatter(lum_obs_z1, vals100_s10_z1, label='z < 1', alpha=0.2, s=1)
plt.scatter(lum_obs_z2, vals100_s10_z2, label='z > 1', alpha=0.2, color='green', s=2)
plt.scatter(lum_obs_z1, vals100_z1, label='z < 1', alpha=0.2, color='red', s=2)
plt.scatter(lum_obs_z2, vals100_z2, label='z > 1', alpha=0.2, color='orange', s=2)
plt.legend()
plt.ylabel('Log-Likelihood')
plt.xlabel('Luminosity')
plt.title('Log-Likelihood vs Luminosity')

plt.gcf().set_size_inches((14,5))
plt.tight_layout()






Our posterior is chasing after low mass!?



In [148]:

internals.describe()




Out[148]:

lum_samples
mu_mass
mass_samples
mu_lum
v1
v2
v3
v4
v5
out

count
100.000000
1.000000e+02
1.000000e+02
100.000000
100.000000
100.000000
100.000000
100.000000
100.000000
1.000000e+02

mean
13751.970700
1.970838e+11
3.017759e+11
14768.582020
-7.892110
-10.448510
-28.985493
-7.891248
-27.305331
-1.128523e+01

std
641.902222
2.502745e+10
3.116848e+11
4733.888542
0.718972
2.352304
1.846642
0.706045
1.069994
1.785306e-15

min
11727.276818
1.263688e+11
3.566952e+10
7413.183717
-12.510250
-21.292894
-34.053489
-12.351216
-32.115103
-1.128523e+01

25%
13388.681034
1.821080e+11
1.111291e+11
11193.242428
-7.898417
-11.202982
-30.034636
-7.932973
-27.646135
-1.128523e+01

50%
13794.174660
1.977262e+11
2.219381e+11
14384.227669
-7.606942
-9.643361
-29.079498
-7.619906
-26.957613
-1.128523e+01

75%
14144.215199
2.118751e+11
3.461873e+11
16900.062693
-7.502149
-8.724495
-27.629572
-7.509081
-26.591840
-1.128523e+01

max
15279.512494
2.621505e+11
1.969898e+12
31747.499868
-7.452017
-8.512431
-25.316445
-7.450680
-26.212956
-1.128523e+01




In [165]:

internals['v1'].mean() + internals['v2'].mean() + internals['v3'].mean() - internals['v4'].mean() - internals['v5'].mean()




Out[165]:

-12.129533027783467




In [166]:

from scipy.misc import logsumexp

logsumexp(internals['v1'] + internals['v2'] + internals['v3'] - internals['v4'] - internals['v5'])




Out[166]:

-6.680060504829866




In [150]:

internals3.describe()




Out[150]:

lum_samples
mu_mass
mass_samples
mu_lum
v1
v2
v3
v4
v5
out

count
100.000000
1.000000e+02
1.000000e+02
100.000000
100.000000
100.000000
100.000000
100.000000
100.000000
100.000000

mean
13828.439664
2.004851e+11
8.981305e+13
92767.168034
-8.038982
-12.773495
-40.035028
-8.043304
-33.858256
-8.752694

std
748.718975
2.994218e+10
7.651411e+13
73102.507302
0.890569
0.055343
13.708320
0.893348
4.555018
0.000000

min
11777.149545
1.278564e+11
1.256556e+10
5078.138171
-12.243896
-12.923567
-51.508777
-12.089106
-37.634227
-8.752694

25%
13341.524783
1.803443e+11
1.256556e+10
5078.138171
-8.116801
-12.812767
-51.508777
-8.163844
-37.633303
-8.752694

50%
13780.882892
1.972013e+11
1.545902e+14
154434.490718
-7.703848
-12.774415
-51.508777
-7.705133
-37.632949
-8.752694

75%
14339.998403
2.200613e+11
1.545902e+14
154434.490718
-7.522977
-12.739025
-23.242236
-7.519431
-28.210014
-8.752694

max
15934.631754
2.943292e+11
1.545902e+14
154434.490718
-7.452375
-12.628557
-23.242236
-7.450749
-28.209794
-8.752694




In [162]:

internals3['v1'].mean() + internals3['v2'].mean() + internals3['v3'].mean() - internals3['v4'].mean() - internals3['v5'].mean()




Out[162]:

-18.945945233658819




In [164]:

from scipy.misc import logsumexp

logsumexp(internals3['v1'] + internals3['v2'] + internals3['v3'] - internals3['v4'] - internals3['v5'])




Out[164]:

-4.1475235959339063




In [168]:

internals['arg'] = internals['v1'] + internals['v2'] + internals['v3'] - internals['v4'] - internals['v5']
internals3['arg'] = internals3['v1'] + internals3['v2'] + internals3['v3'] - internals3['v4'] - internals3['v5']




In [179]:

plt.title('The Value of logexpsum Arg')
plt.xlabel('Value')
plt.ylabel('Count')
plt.hist(internals['arg'], bins=20, alpha=0.5)
plt.hist(internals3['arg'], bins=20, alpha=0.5);







In [177]:

internals.describe()




Out[177]:

lum_samples
mu_mass
mass_samples
mu_lum
v1
v2
v3
v4
v5
out
arg

count
100.000000
1.000000e+02
1.000000e+02
100.000000
100.000000
100.000000
100.000000
100.000000
100.000000
1.000000e+02
100.000000

mean
13751.970700
1.970838e+11
3.017759e+11
14768.582020
-7.892110
-10.448510
-28.985493
-7.891248
-27.305331
-1.128523e+01
-12.129533

std
641.902222
2.502745e+10
3.116848e+11
4733.888542
0.718972
2.352304
1.846642
0.706045
1.069994
1.785306e-15
2.160175

min
11727.276818
1.263688e+11
3.566952e+10
7413.183717
-12.510250
-21.292894
-34.053489
-12.351216
-32.115103
-1.128523e+01
-23.184183

25%
13388.681034
1.821080e+11
1.111291e+11
11193.242428
-7.898417
-11.202982
-30.034636
-7.932973
-27.646135
-1.128523e+01
-12.605946

50%
13794.174660
1.977262e+11
2.219381e+11
14384.227669
-7.606942
-9.643361
-29.079498
-7.619906
-26.957613
-1.128523e+01
-11.428730

75%
14144.215199
2.118751e+11
3.461873e+11
16900.062693
-7.502149
-8.724495
-27.629572
-7.509081
-26.591840
-1.128523e+01
-10.762784

max
15279.512494
2.621505e+11
1.969898e+12
31747.499868
-7.452017
-8.512431
-25.316445
-7.450680
-26.212956
-1.128523e+01
-10.006596




In [174]:

internals3




Out[174]:

lum_samples
mu_mass
mass_samples
mu_lum
v1
v2
v3
v4
v5
out
arg

0
14269.304782
2.170822e+11
1.545902e+14
154434.490718
-7.728080
-12.815751
-51.508777
-7.765240
-37.633097
-8.752694
-26.654272

1
14491.895931
2.265497e+11
1.545902e+14
154434.490718
-8.006075
-12.830863
-51.508777
-8.058714
-37.633009
-8.752694
-26.653992

2
14398.052423
2.225268e+11
1.545902e+14
154434.490718
-7.877727
-12.824520
-51.508777
-7.923869
-37.633046
-8.752694
-26.654110

3
13342.267495
1.803719e+11
1.256556e+10
5078.138171
-7.632082
-12.724882
-23.242236
-7.602067
-28.209893
-8.752694
-7.787238

4
13826.546978
1.990086e+11
1.545902e+14
154434.490718
-7.458273
-12.784987
-51.508777
-7.463912
-37.633276
-8.752694
-26.654849

5
14258.309302
2.166212e+11
1.545902e+14
154434.490718
-7.716741
-12.814999
-51.508777
-7.753130
-37.633101
-8.752694
-26.654286

6
13723.471996
1.949438e+11
1.545902e+14
154434.490718
-7.452592
-12.777685
-51.508777
-7.450749
-37.633319
-8.752694
-26.654986

7
13298.985535
1.787629e+11
1.545902e+14
154434.490718
-7.673202
-12.747030
-51.508777
-7.639939
-37.633501
-8.752694
-26.655570

8
14226.338266
2.152843e+11
1.256556e+10
5078.138171
-7.685075
-12.789680
-23.242236
-7.719219
-28.210046
-8.752694
-7.787727

9
13877.572127
2.010405e+11
1.545902e+14
154434.490718
-7.469296
-12.788581
-51.508777
-7.478619
-37.633255
-8.752694
-26.654781

10
15335.377708
2.648024e+11
1.256556e+10
5078.138171
-9.837336
-12.865549
-23.242236
-9.946548
-28.210236
-8.752694
-7.788338

11
15934.631754
2.943292e+11
1.545902e+14
154434.490718
-11.805745
-12.923567
-51.508777
-11.953289
-37.632486
-8.752694
-26.652314

12
14050.379005
2.080204e+11
1.256556e+10
5078.138171
-7.546076
-12.777107
-23.242236
-7.567774
-28.210015
-8.752694
-7.787630

13
12016.194429
1.351418e+11
1.545902e+14
154434.490718
-11.080504
-12.648136
-51.508777
-10.945809
-37.634104
-8.752694
-26.657505

14
14234.655665
2.156316e+11
1.545902e+14
154434.490718
-7.693126
-12.813378
-51.508777
-7.727854
-37.633110
-8.752694
-26.654317

15
13775.154602
1.969752e+11
1.545902e+14
154434.490718
-7.452646
-12.781353
-51.508777
-7.454562
-37.633297
-8.752694
-26.654917

16
13446.773753
1.842951e+11
1.545902e+14
154434.490718
-7.550586
-12.757811
-51.508777
-7.528374
-37.633437
-8.752694
-26.655364

17
13438.259800
1.839735e+11
1.545902e+14
154434.490718
-7.556294
-12.757193
-51.508777
-7.533448
-37.633440
-8.752694
-26.655376

18
13339.296648
1.802612e+11
3.069508e+11
16179.096005
-7.634765
-12.720179
-29.774397
-7.604528
-31.404557
-8.752694
-11.120257

19
13693.961368
1.937899e+11
1.256556e+10
5078.138171
-7.455106
-12.751154
-23.242236
-7.451110
-28.209954
-8.752694
-7.787433

20
14097.208660
2.099381e+11
1.545902e+14
154434.490718
-7.577170
-12.803907
-51.508777
-7.602196
-37.633165
-8.752694
-26.654493

21
14534.583933
2.283949e+11
1.545902e+14
154434.490718
-8.069736
-12.833734
-51.508777
-8.125316
-37.632993
-8.752694
-26.653939

22
13463.057577
1.849113e+11
1.545902e+14
154434.490718
-7.540126
-12.758992
-51.508777
-7.519125
-37.633430
-8.752694
-26.655341

23
14335.574562
2.198739e+11
1.256556e+10
5078.138171
-7.801245
-12.797408
-23.242236
-7.843038
-28.210064
-8.752694
-7.787787

24
14198.922119
2.141420e+11
1.545902e+14
154434.490718
-7.659474
-12.810924
-51.508777
-7.691689
-37.633125
-8.752694
-26.654362

25
13660.519054
1.924875e+11
1.256556e+10
5078.138171
-7.460210
-12.748685
-23.242236
-7.453769
-28.209948
-8.752694
-7.787414

26
13366.117408
1.812625e+11
1.545902e+14
154434.490718
-7.611278
-12.751942
-51.508777
-7.583050
-37.633472
-8.752694
-26.655476

27
13460.541035
1.848160e+11
1.545902e+14
154434.490718
-7.541704
-12.758810
-51.508777
-7.520515
-37.633431
-8.752694
-26.655345

28
14594.166728
2.309863e+11
1.545902e+14
154434.490718
-8.164034
-12.837729
-51.508777
-8.223705
-37.632970
-8.752694
-26.653866

29
14201.111768
2.142331e+11
1.545902e+14
154434.490718
-7.661465
-12.811075
-51.508777
-7.693835
-37.633124
-8.752694
-26.654359

...
...
...
...
...
...
...
...
...
...
...
...

70
13645.114594
1.918895e+11
1.256556e+10
5078.138171
-7.463372
-12.747545
-23.242236
-7.455803
-28.209946
-8.752694
-7.787406

71
13359.518478
1.810158e+11
1.256556e+10
5078.138171
-7.616903
-12.726186
-23.242236
-7.588181
-28.209896
-8.752694
-7.787248

72
13259.610966
1.773070e+11
1.545902e+14
154434.490718
-7.714412
-12.744138
-51.508777
-7.678184
-37.633518
-8.752694
-26.655625

73
13585.546784
1.895881e+11
2.901048e+12
36531.449925
-7.480447
-12.743178
-35.040181
-7.468503
-33.651815
-8.752694
-14.143488

74
14805.169720
2.403139e+11
1.256556e+10
5078.138171
-8.547863
-12.829980
-23.242236
-8.621888
-28.210145
-8.752694
-7.788046

75
14041.034586
2.076390e+11
1.545902e+14
154434.490718
-7.540390
-12.800010
-51.508777
-7.561423
-37.633188
-8.752694
-26.654566

76
14760.351460
2.383130e+11
1.256556e+10
5078.138171
-8.459929
-12.826916
-23.242236
-8.530923
-28.210137
-8.752694
-7.788021

77
12176.240254
1.401642e+11
1.545902e+14
154434.490718
-10.402640
-12.661031
-51.508777
-10.281175
-37.634024
-8.752694
-26.657249

78
12982.033092
1.672579e+11
1.545902e+14
154434.490718
-8.110514
-12.723503
-51.508777
-8.053130
-37.633642
-8.752694
-26.656023

79
14885.846965
2.439427e+11
1.545902e+14
154434.490718
-8.714685
-12.857053
-51.508777
-8.794145
-37.632859
-8.752694
-26.653511

80
14552.199317
2.291591e+11
1.256556e+10
5078.138171
-8.096957
-12.812563
-23.242236
-8.153749
-28.210102
-8.752694
-7.787907

81
14934.869213
2.461647e+11
1.545902e+14
154434.490718
-8.821346
-12.860264
-51.508777
-8.904094
-37.632841
-8.752694
-26.653453

82
13117.871011
1.721289e+11
1.545902e+14
154434.490718
-7.893254
-12.733655
-51.508777
-7.846278
-37.633581
-8.752694
-26.655827

83
13677.130458
1.931337e+11
1.545902e+14
154434.490718
-7.457375
-12.774384
-51.508777
-7.452149
-37.633339
-8.752694
-26.655049

84
13406.213713
1.827661e+11
1.256556e+10
5078.138171
-7.579251
-12.729709
-23.242236
-7.554018
-28.209904
-8.752694
-7.787274

85
13971.335583
2.048089e+11
1.545902e+14
154434.490718
-7.503476
-12.795153
-51.508777
-7.519533
-37.633217
-8.752694
-26.654657

86
13549.832563
1.882168e+11
1.256556e+10
5078.138171
-7.494410
-12.740469
-23.242236
-7.479833
-28.209929
-8.752694
-7.787353

87
14906.699221
2.448863e+11
1.545902e+14
154434.490718
-8.759569
-12.858420
-51.508777
-8.840429
-37.632852
-8.752694
-26.653486

88
14463.226557
2.253158e+11
1.256556e+10
5078.138171
-7.965164
-12.806366
-23.242236
-8.015823
-28.210086
-8.752694
-7.787857

89
13954.370455
2.041238e+11
1.545902e+14
154434.490718
-7.495968
-12.793967
-51.508777
-7.510809
-37.633223
-8.752694
-26.654680

90
13065.228364
1.702306e+11
1.545902e+14
154434.490718
-7.972045
-12.729733
-51.508777
-7.921049
-37.633605
-8.752694
-26.655903

91
13567.408593
1.888909e+11
1.256556e+10
5078.138171
-7.487187
-12.741778
-23.242236
-7.473907
-28.209932
-8.752694
-7.787363

92
13279.364419
1.780364e+11
1.545902e+14
154434.490718
-7.693283
-12.745590
-51.508777
-7.658544
-37.633510
-8.752694
-26.655598

93
11777.149545
1.278564e+11
1.545902e+14
154434.490718
-12.243896
-12.628557
-51.508777
-12.089106
-37.634227
-8.752694
-26.657898

94
13786.611183
1.974274e+11
1.545902e+14
154434.490718
-7.453422
-12.782164
-51.508777
-7.456168
-37.633293
-8.752694
-26.654902

95
12757.003190
1.593835e+11
1.545902e+14
154434.490718
-8.573037
-12.706452
-51.508777
-8.498166
-37.633745
-8.752694
-26.656354

96
12765.675291
1.596825e+11
1.256556e+10
5078.138171
-8.552777
-12.680288
-23.242236
-8.478586
-28.209794
-8.752694
-7.786921

97
12661.168787
1.561032e+11
1.545902e+14
154434.490718
-8.810239
-12.699099
-51.508777
-8.727827
-37.633790
-8.752694
-26.656498

98
13870.288458
2.007497e+11
1.256556e+10
5078.138171
-7.467393
-12.764076
-23.242236
-7.476191
-28.209984
-8.752694
-7.787530

99
13159.655695
1.736453e+11
1.256556e+10
5078.138171
-7.835519
-12.710968
-23.242236
-7.791723
-28.209862
-8.752694
-7.787138

100 rows × 11 columns



FOUND THE ISSUE!

# Conclusion

Our current Tinker10 mass prior favors the lower mass points so heavily that it outweighs other components of the likelihood and dominates the posterior probability. In order to get meaningful results we will have to devise a way to get around this dilema.



In [223]:

plt.hist(data['z'][:10000], normed=True, alpha=0.6)




Out[223]:

(array([ 1.23359034,  0.66129504,  0.        ,  0.        ,  0.        ,
0.        ,  0.        ,  0.        ,  3.34477205,  0.15426622]),
array([ 0.478702 ,  0.6640958,  0.8494896,  1.0348834,  1.2202772,
1.405671 ,  1.5910648,  1.7764586,  1.9618524,  2.1472462,  2.33264  ]),
<a list of 10 Patch objects>)




In [220]:

plt.title('True Mass PDF')
plt.hist(np.log(data['mass'][:10000]) / np.log(10), normed=True, alpha=0.6)
plt.gca().set_yscale("log")
plt.xlabel('Log Mass')
plt.ylabel('Density')




Out[220]:

<matplotlib.text.Text at 0x10fcb2710>




In [218]:

plt.title('Prior Mass PDF Evaluations')
space = np.linspace(24, 30, 100)
vals = prior.pdf(np.exp(space), 1.0)
plt.plot(space, vals)
plt.gca().set_yscale("log")
plt.xlabel('Log Mass')
plt.ylabel('Density');