When using Bayesian posterior sampling, we want to sample the posterior, ${\rm P}(M(\vec{\theta}) \vert D)$, of a model $M(\vec{\theta})$ containing one or more sources with free parameters $\vec{\theta}$, given one or more datasets $D$ (i.e., plugin instances), a likelihood $L(\vec{\theta}) = {\rm P}(D \vert M(\vec{\theta}))$, and a prior ${\rm P}(\vec{\theta})$ on the parameters. Only the simplest posteriors allow for an analytic solution, so we must rely on Monte Carlo or nested sampling methods to sample the posterior.
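For reference, these quantities are related by Bayes' theorem,
$${\rm P}(M(\vec{\theta}) \vert D) = \frac{{\rm P}(D \vert M(\vec{\theta}))\,{\rm P}(\vec{\theta})}{{\rm P}(D)},$$
where ${\rm P}(D)$ is the marginal likelihood (or evidence), which normalizes the posterior.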
In 3ML, we provide several popular posterior sampling methods; this tutorial demonstrates emcee, zeus, MULTINEST, and UltraNest. If you installed via conda, both emcee and MULTINEST are available; otherwise, only emcee is included.
"emcee is an extensible, pure-Python implementation of Goodman & Weare's Affine Invariant Markov chain Monte Carlo (MCMC) Ensemble sampler." It uses multiple "walkers" to explore the parameter space of the posterior. For a complete understanding of the capabilites and limitations, we recommend a thorough reading of Goodman & Weare (2010). Nevertheless we emphasize these N points to keep in mind:
Let's take a look at its usage for a simple likelihood.
In [4]:
%matplotlib notebook
import matplotlib.pyplot as plt
from threeML import *
from threeML.bayesian.tutorial_material import *
In [2]:
# 'mike' is a custom matplotlib style; skip this line if it is not available on your system
plt.style.use('mike')
Let’s get a BayesianAnalysis object like the one we would have in a normal 3ML analysis. We use a custom function, prepared for this tutorial, which returns a BayesianAnalysis object with a very simple model containing one free parameter ($\mu$) and a likelihood with a very simple shape:
In [6]:
# This returns a BayesianAnalysis object with a simple likelihood function,
# and the corresponding Model instance. These objects are what you will have
# in a typical 3ML analysis. The Model contains one point source, named "test",
# with a spectrum called "simple"
bayes, model = get_bayesian_analysis_object_simple_likelihood()
bayes.set_sampler('emcee')
# Let's look at the likelihood function, which in this illustrative example
# has a very simple shape
_ = plot_likelihood_function(bayes)
We must define a prior for $\mu$ in order to sample. Let's use a uniform prior from 1 to 100. There are two ways to define this in 3ML:
In [7]:
# This directly assigns the prior
model.test.spectrum.main.shape.mu.prior = Uniform_prior(lower_bound=1, upper_bound=100.)
# Alternatively, we can set an uninformative prior based on the parameter's bounds
model.test.spectrum.main.shape.mu.set_uninformative_prior(Uniform_prior)
Ok, we are ready to sample. We will use only 5 walkers for this problem. We need to define a burn-in length, i.e., how long we run the sampler to approach the target distribution, so that these "learning" samples are not included in our final results. Additionally, we need to declare the number of samples each walker will take.
In [8]:
n_walkers = 5
burn_in = 10
n_samples = 500
bayes.sampler.setup(n_iterations=n_samples, n_burn_in=burn_in, n_walkers=n_walkers)
res = bayes.sample()
In [9]:
_ = plot_sample_path(bayes,truth=40.,burn_in=n_walkers*burn_in)
We can see that we did not burn in the sampler long enough, so part of the burn-in is included in the results, as can be seen in the marginal distribution of $\mu$:
In [10]:
_ = bayes.results.corner_plot()
We could easily run the sampler longer, but let's reset the value of $\mu$ far from the target distribution and try more burn-in samples.
In [11]:
model.test.spectrum.main.shape.mu = 99
n_walkers = 2
burn_in = 200
n_samples = 500
# the sampler must be set up again for the new walker and burn-in settings to take effect
bayes.sampler.setup(n_iterations=n_samples, n_burn_in=burn_in, n_walkers=n_walkers)
res = bayes.sample()
In [18]:
_ = plot_sample_path(bayes,truth=40.,burn_in=n_walkers*burn_in)
_ = bayes.results.corner_plot()
The marginal distribution of $\mu$ is our inference on the parameter. It is the integral of the posterior over all other parameters, $\vec{\phi}$, given the data:
$${\rm P}(\mu \vert D) = \int_{-\infty}^{\infty} {\rm d}\vec{\phi} \; {\rm P}( \mu, \vec{\phi} \vert D )$$
Of course, in our simple model, there are no other parameters. The marginal naturally includes the dependence on all other parameters and is only equivalent to MLE error estimates under assumptions of Gaussianity and linearity.
We can sometimes use point descriptors for the marginal, e.g., mean, median, etc., but when the marginal distribution is not symmetric, it can be more descriptive to plot it or quote its highest posterior density interval:
In [12]:
this_mu = bayes.results.get_variates('test.spectrum.main.Simple.mu')
print(this_mu.highest_posterior_density_interval(cl=.68))
print(this_mu.highest_posterior_density_interval(cl=.95))
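Simple point descriptors can also be computed directly from the posterior draws (a minimal sketch using numpy, assuming the variates object exposes its raw draws through the samples attribute used later in this tutorial):
import numpy as np

# point descriptors of the marginal distribution, computed from the raw samples
samples = this_mu.samples
print(np.mean(samples))
print(np.median(samples))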
We have not paid much attention to our prior choice. So far, we have used an uninformative prior (except that we know in our example the range over which the parameter can be found), but what happens if we limit our prior such that we would never find the target distribution?
Let's set a uniform prior such that $\mu \in [80, 100]$.
In [13]:
model.test.spectrum.main.shape.mu = 99
model.test.spectrum.main.shape.mu.prior = Uniform_prior(lower_bound=80, upper_bound=100)
n_walkers = 2
burn_in = 200
n_samples = 1000
bayes.sampler.setup(n_iterations=n_samples, n_burn_in=burn_in, n_walkers=n_walkers)
res = bayes.sample()
In [14]:
_ = plot_sample_path(bayes,truth=40.,burn_in=n_walkers*burn_in)
_ = bayes.results.corner_plot()
We see that, over this range, the marginal distribution is the same as the prior, i.e., uninformative. The "data" have not updated our information about $\mu$, and thus we cannot conclude anything. This is by design in our example, but if we truly believed that $\mu$ had to lie in the interval $[80, 100]$, the data would not have helped us determine anything more.
Let's now look at setting a more informative prior on $\mu$. Suppose, from either other measurements or physical insight, we are sure $\mu \sim 25$ with some uncertainty. Then we can impose a Gaussian prior over $\mu$.
In [15]:
model.test.spectrum.main.shape.mu = 99
model.test.spectrum.main.shape.mu.bounds = (None, None)
model.test.spectrum.main.shape.mu.prior = Gaussian(mu=25, sigma=5)
In [16]:
n_walkers = 2
burn_in = 200
n_samples = 1000
bayes.sampler.setup(n_iterations=n_samples, n_burn_in=burn_in, n_walkers=n_walkers)
res = bayes.sample()
Our prior information has biased the results to slightly lower values of $\mu$. Again, this is only to illustrate how to use different priors and their effects on the recovered results.
In [17]:
_ = plot_sample_path(bayes,truth=40.,burn_in=n_walkers*burn_in)
_ = bayes.results.corner_plot()
We now examine a more complex likelihood with zeus. zeus builds on the ensemble approach of emcee by adding slice sampling.
In [19]:
bayes, model = get_bayesian_analysis_object_complex_likelihood()
bayes.set_sampler('zeus')
model.test.spectrum.main.shape.mu.prior = Uniform_prior(lower_bound=1, upper_bound=100)
_ = plot_likelihood_function(bayes)
We proceed as before and see if we can fully sample the likelihood.
In [20]:
model.test.spectrum.main.shape.mu = 1.
n_walkers = 4
burn_in = 100
n_samples = 500
bayes.sampler.setup(n_iterations=n_samples, n_burn_in=burn_in, n_walkers=n_walkers)
res = bayes.sample()
In [21]:
_ = plot_sample_path(bayes,burn_in=n_walkers*burn_in)
_ = bayes.results.corner_plot()
We see that zeus explored the parameter space, but did not sample all the modes evenly.
We can try drastically increasing the number of samples to see if the sampling improves.
In [22]:
model.test.spectrum.main.shape.mu = 99.
n_walkers = 4
burn_in = 200
n_samples = 1000
bayes.sampler.setup(n_iterations=n_samples, n_burn_in=burn_in, n_walkers=n_walkers)
res = bayes.sample()
In [23]:
_ = plot_sample_path(bayes,burn_in=n_walkers*burn_in)
_ = bayes.results.corner_plot()
We were able to search the parameter space and resolve some of the modes, but it is easy to see that any inference will be diffuse. We always need to investigate the samplers we use to make sure they are appropriate for the problem at hand.
MULTINEST is a nested sampling algorithm designed to integrate the likelihood over the prior in order to compute the marginal likelihood (evidence), producing posterior samples along the way. For details on the algorithm, see Feroz et al. (2009) and Feroz et al. (2013); for details on the input arguments of the Python interface we use, see the pymultinest documentation. If you find these algorithms useful for your research, please cite the original authors!
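In the notation introduced above, the marginal likelihood that nested sampling estimates is
$$Z = {\rm P}(D) = \int {\rm d}\vec{\theta} \; {\rm P}(D \vert M(\vec{\theta}))\,{\rm P}(\vec{\theta}).$$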
Let's resample the complex likelihood from above with MULTINEST, using 400 live points.
In [27]:
bayes.set_sampler('multinest')
bayes.sampler.setup(n_live_points=400)
res = bayes.sample()
In [29]:
_ = plot_sample_path(bayes)
_ = bayes.results.corner_plot()
MULTINEST has fully sampled the likelihood and separated the modes. Clearly, point descriptors of the results will be inefficient, but we can use the output files of MULTINEST to perform multi-modal analysis. For details, consult the MULTINEST documentation.
Finally, we can try ultranest, another nested sampling algorithm.
In [33]:
bayes.set_sampler('ultranest')
bayes.sampler.setup()
res = bayes.sample(quiet=True)
In [34]:
_ = plot_sample_path(bayes)
_ = bayes.results.corner_plot()
The posterior samples stored in the results can also be propagated through arbitrary functions; here we compute $f(\mu) = \mu^2$ and plot the distribution of the propagated quantity.
In [35]:
# an arbitrary function of the parameter
def f(mu):
    return mu**2

# grab the posterior samples of mu and push them through f
this_mu = bayes.results.get_variates('test.spectrum.main.Complex.mu')
f_mu = f(this_mu)
print(f_mu)

fig, ax = plt.subplots()
ax.hist(f_mu.samples, bins=100)
_ = ax.set_xlabel(r'f($\mu$)')
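Since the propagated quantity carries its own set of posterior samples, the same summary tools shown earlier should apply to it as well (a sketch assuming the result of f(this_mu) exposes the same interface as the original variates):
# highest posterior density interval of the propagated quantity (assumes f_mu
# behaves like the variates objects used above)
print(f_mu.highest_posterior_density_interval(cl=0.68))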
As with any approach, the Bayesian capabilities of 3ML are a tool. Understanding and proper use of the posterior and marginal distributions requires special care. For further reading on Bayesian analysis and its applications to spectral analysis, check out the following resources: