Probabilistic programming (PP) allows flexible specification of Bayesian statistical models in code. PyMC3 is a new, open-source PP framework with an intuitive and readable, yet powerful, syntax that is close to the natural syntax statisticians use to describe models. It features next-generation Markov chain Monte Carlo (MCMC) sampling algorithms such as the No-U-Turn Sampler (NUTS; Hoffman and Gelman, 2014), a self-tuning variant of Hamiltonian Monte Carlo (HMC; Duane et al., 1987). This class of samplers works well on high-dimensional and complex posterior distributions, and allows many complex models to be fit without specialized knowledge about fitting algorithms. HMC and NUTS take advantage of gradient information from the likelihood to achieve much faster convergence than traditional sampling methods, especially for larger models. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo, which means you usually don't need specialized knowledge about how the algorithms work. PyMC3, Stan (Stan Development Team, 2014), and the LaplacesDemon package for R are currently the only PP packages to offer HMC.
Probabilistic programming in Python confers a number of advantages including multi-platform compatibility, an expressive yet clean and readable syntax, easy integration with other scientific libraries, and extensibility via C, C++, Fortran or Cython. These features make it relatively straightforward to write and use custom statistical distributions, samplers and transformation functions, as required by Bayesian analysis.
PyMC3's feature set helps to make Bayesian analysis as painless as possible.
Here, we present a primer on the use of PyMC3 for solving general Bayesian statistical inference and prediction problems. We will first see the basics of how to use PyMC3, motivated by a simple example: installation, data creation, model definition, model fitting and posterior analysis. Then we will cover two case studies and use them to show how to define and fit more sophisticated models. Finally we will show how to extend PyMC3 and discuss other useful features: the Generalized Linear Models subpackage, custom distributions, custom transformations and alternative storage backends.
In [ ]:
%load ../data/melanoma_data.py
In [ ]:
%matplotlib inline
import seaborn as sns; sns.set_context('notebook')
from pymc3 import Normal, Model, DensityDist, sample, log, exp
with Model() as melanoma_survival:
    # Convert censoring indicators to indicators for failure event
    failure = (censored==0).astype(int)
    # Parameters (intercept and treatment effect) for survival rate
    beta = Normal('beta', mu=0.0, sd=1e5, shape=2)
    # Survival rates, as a function of treatment
    lam = exp(beta[0] + beta[1]*treat)
    # Survival likelihood, accounting for censoring
    def logp(failure, value):
        return (failure * log(lam) - lam * value).sum()
    x = DensityDist('x', logp, observed={'failure':failure, 'value':t})
This example will generate 1000 posterior samples.
In [ ]:
with melanoma_survival:
    trace = sample(1000)
In [ ]:
from pymc3 import traceplot
traceplot(trace[500:], varnames=['beta']);
Consider the following time series of recorded coal mining disasters in the UK from 1851 to 1962 (Jarrett, 1979). The number of disasters is thought to have been affected by changes in safety regulations during this period.
Let's build a model for this series and attempt to estimate when the change occurred.
In [ ]:
import numpy as np
import matplotlib.pyplot as plt
disasters_data = np.array([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])
n_years = len(disasters_data)
plt.figure(figsize=(12.5, 3.5))
plt.bar(np.arange(1851, 1962), disasters_data, color="#348ABD")
plt.xlabel("Year")
plt.ylabel("Disasters")
plt.title("UK coal mining disasters, 1851-1962")
plt.xlim(1851, 1962);
We represent our conceptual model formally as a statistical model:
$$\begin{array}{ccc} (y_t | \tau, \lambda_1, \lambda_2) \sim\text{Poisson}\left(r_t\right), & r_t=\left\{ \begin{array}{lll} \lambda_1 &\text{if}& t< \tau\\ \lambda_2 &\text{if}& t\ge \tau \end{array}\right.,&t\in[t_l,t_h]\\ \tau \sim \text{DiscreteUniform}(t_l, t_h)\\ \lambda_1\sim \text{Exponential}(a)\\ \lambda_2\sim \text{Exponential}(b) \end{array}$$

Because we have defined $y$ by its dependence on $\tau$, $\lambda_1$ and $\lambda_2$, the latter three are known as the parents of $y$, and $y$ is called their child. Similarly, the parents of $\tau$ are $t_l$ and $t_h$, and $\tau$ is the child of $t_l$ and $t_h$.
At the model-specification stage (before the data are observed), $y$, $\tau$, $\lambda_1$, and $\lambda_2$ are all random variables. Bayesian "random" variables have not necessarily arisen from a physical random process. The Bayesian interpretation of probability is epistemic, meaning random variable $x$'s probability distribution $p(x)$ represents our knowledge and uncertainty about $x$'s value. Candidate values of $x$ for which $p(x)$ is high are relatively more probable, given what we know.
We can generally divide the variables in a Bayesian model into two types: stochastic and deterministic. The only deterministic variable in this model is $r$. If we knew the values of $r$'s parents, we could compute the value of $r$ exactly. A deterministic like $r$ is defined by a mathematical function that returns its value given values for its parents. Deterministic variables are sometimes called the systemic part of the model. The nomenclature is a bit confusing, because these objects usually represent random variables; since the parents of $r$ are random, $r$ is random also.
On the other hand, even if the values of the parents of the variables `switchpoint`, `disasters` (before observing the data), `early_mean` or `late_mean` were known, we would still be uncertain of their values. These variables are stochastic, characterized by probability distributions that express how plausible their candidate values are, given values for their parents.
Let's begin by defining the unknown switchpoint as a discrete uniform random variable:
In [ ]:
from pymc3 import DiscreteUniform
with Model() as disaster_model:
    switchpoint = DiscreteUniform('switchpoint', lower=0, upper=n_years)
We have done two things here. First, we have created a `Model` object; a `Model` is a Python object that encapsulates all of the variables that comprise our theoretical model, keeping them in a single container so that they may be used as a unit. After a `Model` is created, we can populate it with all of the model components that we specified when we wrote the model down.

Notice that the `Model` above was declared using a `with` statement. This invokes a Python idiom known as a context manager. Context managers, in general, are used to manage resources of some kind within a program. In this case, our resource is a `Model`, and we would like to add variables to it so that we can fit our statistical model. The key characteristic of the context manager is that the resources it manages are only defined within the indented block corresponding to the `with` statement. PyMC uses this idiom to automatically add defined variables to a model: any variable we define inside the block is added to the `Model` without having to add it explicitly. This avoids the repetitive syntax of `add` methods/functions that you see in some machine learning packages:
model.add(a_variable)
model.add(another_variable)
model.add(yet_another_variable)
model.add(and_again)
model.add(please_kill_me_now)
...
In fact, PyMC variables cannot be defined without a corresponding `Model`:
In [ ]:
foo = DiscreteUniform('foo', lower=0, upper=10)
However, variables can be explicitly added to models without the use of a context manager, via the variable's optional `model` argument.
disaster_model = Model()
switchpoint = DiscreteUniform('switchpoint', lower=0, upper=110, model=disaster_model)
Or, if we just want a discrete uniform distribution and do not necessarily need to use it in a PyMC3 model, we can create one using the `dist` classmethod.
In [ ]:
x = DiscreteUniform.dist(lower=0, upper=100)
In [ ]:
x
`DiscreteUniform` is an object that represents uniformly-distributed discrete variables. Use of this distribution suggests that we have no preference a priori regarding the location of the switchpoint; all values are equally likely.
PyMC3 includes most of the common random variable distributions used for statistical modeling. For example, the following discrete random variables are available.
In [ ]:
from pymc3 import discrete
discrete.__all__
By having a library of variables that represent statistical distributions, users are relieved of having to code distributions themselves.
Similarly, we can create the exponentially-distributed variables `early_mean` and `late_mean` for the early and late Poisson rates, respectively (also in the context of the model `disaster_model`):
In [ ]:
from pymc3 import Exponential
with disaster_model:
    early_mean = Exponential('early_mean', 1)
    late_mean = Exponential('late_mean', 1)
In this instance, we are told that the variables are being transformed. In PyMC3, variables with purely positive priors like `Exponential` are transformed with a log function, which makes sampling more robust. Behind the scenes, a variable in the unconstrained space (named `<variable name>_log`) is added to the model for sampling. Variables with priors that constrain them on two sides, like `Beta` or `Uniform` (continuous), are also transformed to be unconstrained, but with a log-odds transform.
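To see this machinery in action, here is a minimal sketch (the model name `transform_demo` and the variable `p` are made up for illustration): a two-sided `Uniform` prior is automatically given an unconstrained counterpart for sampling.

from pymc3 import Model, Uniform

with Model() as transform_demo:
    # Two-sided prior: sampled on an unconstrained scale via a log-odds transform
    p = Uniform('p', lower=0, upper=1)

# The free variables the model actually samples are the transformed ones;
# their exact names vary by PyMC3 version.
transform_demo.vars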
Next, we define the variable `rate`, which selects the early rate `early_mean` for times before `switchpoint` and the late rate `late_mean` for times after `switchpoint`. We create `rate` using the `switch` function, which returns `early_mean` when the switchpoint is larger than (or equal to) a particular year, and `late_mean` otherwise.
In [ ]:
from pymc3 import switch
with disaster_model:
    rate = switch(switchpoint >= np.arange(n_years), early_mean, late_mean)
The last step is to define the data likelihood, or sampling distribution. In this case, our measured outcome is the number of disasters in each year, `disasters`. This is a stochastic variable, but unlike `early_mean` and `late_mean` we have observed its value. To express this, we set the argument `observed` to the observed sequence of disasters. This tells PyMC that this distribution's value is fixed, and should not be changed:
In [ ]:
from pymc3 import Poisson
with disaster_model:
    disasters = Poisson('disasters', mu=rate, observed=disasters_data)
The model that we specified at the top of the page has now been fully implemented in PyMC3. Let's have a look at the model's attributes to see what we have.
The stochastic nodes in the model are identified in the `vars` (i.e. variables) attribute:
In [ ]:
disaster_model.vars
The last two variables are the log-transformed versions of the early and late rate parameters. The original variables have become deterministic nodes in the model, since they only represent values that have been back-transformed from the transformed variable, which has been subject to fitting or sampling.
In [ ]:
disaster_model.deterministics
You might wonder why `rate`, which is a deterministic component of the model, is not in this list. This is because, unlike the other components of the model, `rate` has not been given a name or a formal PyMC data structure. It is essentially an intermediate calculation in the model, implying that we are not interested in its value when it comes to summarizing the output from the model. Most PyMC objects have a name assigned; these names are used for storage and post-processing.
If we wish to include `rate` in our output, we need to make it a `Deterministic` object, and give it a name:
In [ ]:
from pymc3 import Deterministic
with disaster_model:
    rate = Deterministic('rate', switch(switchpoint >= np.arange(n_years), early_mean, late_mean))
Now, `rate` is included in the `Model`'s list of deterministics, and its samples will be retained during MCMC sampling.
In [ ]:
disaster_model.deterministics
Why are data and unknown variables represented by the same object?

Since it is represented by a PyMC random variable object, `disasters` is defined by its dependence on its parent `rate`, even though its value is fixed. This isn't just a quirk of PyMC's syntax; Bayesian hierarchical notation itself makes no distinction between random variables and data. The reason is simple: to use Bayes' theorem to compute the posterior, we require the likelihood. Even though the value of `disasters` is known and fixed, we need to formally assign it a probability distribution as if it were a random variable. Remember, the likelihood and the probability function are essentially the same, except that the former is regarded as a function of the parameters and the latter as a function of the data. This point can be counterintuitive at first, as many people's instinct is to regard data as fixed a priori and unknown variables as dependent on the data.

One way to understand this is to think of statistical models as predictive models for data, or as models of the processes that gave rise to data. Before observing the value of `disasters`, we could have sampled from its prior predictive distribution $p(y)$ (i.e. the marginal distribution of the data) as follows (see the code sketch below):

- Sample `early_mean`, `switchpoint` and `late_mean` from their priors.
- Sample `disasters` conditional on these values.

Even after we observe the value of `disasters`, we need to use this process model to make inferences about `early_mean`, `switchpoint` and `late_mean`, because it's the only information we have about how the variables are related. We will see later that we can sample from this fixed stochastic random variable, to obtain predictions after having observed our data.
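As a rough illustration of that two-step process, here is a minimal sketch that uses standalone `dist()` objects and NumPy rather than `disaster_model` itself; the names `tau`, `lam1`, `lam2` and `y_sim` are made up for this example.

from pymc3 import DiscreteUniform, Exponential
import numpy as np

# Step 1: draw the unknowns from their priors
tau = DiscreteUniform.dist(lower=0, upper=n_years).random()
lam1 = Exponential.dist(1).random()
lam2 = Exponential.dist(1).random()

# Step 2: draw a simulated series of disaster counts conditional on those draws
r = np.where(tau >= np.arange(n_years), lam1, lam2)
y_sim = np.random.poisson(r)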
Each of the built-in statistical variables is a subclass of the generic `Distribution` class in PyMC3. The `Distribution` class carries relevant attributes about the probability distribution, such as the data type (called `dtype`), any relevant transformations (`transform`, see below), and initial values (`init_value`).
In [ ]:
disasters.dtype
In [ ]:
early_mean.init_value
PyMC's built-in distribution variables can also be used to generate random values from that variable. For example, the `switchpoint`, which is a discrete uniform random variable, can generate random draws:
In [ ]:
plt.hist(switchpoint.random(size=1000))
As we noted earlier, some variables have undergone transformations prior to sampling. Such variables have a `transformed` attribute that points to the variable they have been transformed into.
In [ ]:
early_mean.transformed
Variables will usually have an associated distribution, as determined by the constructor used to create them. For example, the `switchpoint` variable was created by calling `DiscreteUniform()`. Hence, its distribution is `DiscreteUniform`:
In [ ]:
switchpoint.distribution
As with all Python objects, the underlying type of a variable can be exposed with the `type()` function:
In [ ]:
type(switchpoint)
In [ ]:
type(disasters)
We will learn more about these types in an upcoming section. Variables can also be evaluated for their log-probability at a given point in the parameter space, via their `logp` method, which takes a dictionary of variable values:
In [ ]:
switchpoint.logp({'switchpoint':55, 'early_mean_log':1, 'late_mean_log':1})
For vector-valued variables like `disasters`, the `logp` method returns the sum of the log-probabilities (or log-densities) of all elements of the value, i.e. the joint log-probability.
In [ ]:
disasters.logp({'switchpoint':55, 'early_mean_log':1, 'late_mean_log':1})
Though we created the variables in `disaster_model` using well-known probability distributions that are available in PyMC3, it is possible to create custom distributions by wrapping functions that compute an arbitrary log-probability using the `DensityDist` function. For example, our initial example showed an exponential survival function, which accounts for censored data. If we pass this function as the `logp` argument for `DensityDist`, we can use it as the data likelihood in a survival model:
def logp(failure, value):
    return (failure * log(lam) - lam * value).sum()

x = DensityDist('x', logp, observed={'failure':failure, 'value':t})
Users are thus not limited to the set of statistical distributions provided by PyMC.
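As a further illustration, `DensityDist` can also supply an arbitrary log-density for an unobserved variable. The following sketch defines a hypothetical, unnormalized Laplace-like prior; the model name `custom_prior_model` and the data values are invented for the example.

from pymc3 import Model, DensityDist, Normal
import theano.tensor as tt

with Model() as custom_prior_model:
    # Unnormalized Laplace-like log-density used as a custom prior
    theta = DensityDist('theta', lambda value: -tt.abs_(value), testval=0)
    # Hypothetical observations, with the custom-prior parameter as their mean
    y = Normal('y', mu=theta, sd=1, observed=[-0.1, 0.2, 0.4])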
PyMC3's `sample` function will fit probability models (linked collections of variables) like ours using Markov chain Monte Carlo (MCMC) sampling. Unless we manually assign particular algorithms to variables in our model, PyMC will assign algorithms that it deems appropriate (it usually does a decent job of this):
In [ ]:
with disaster_model:
    trace = sample(2000)
This returns the Markov chain of draws from the model in a data structure called a trace.
In [ ]:
trace
The `sample()` function always takes at least one argument, `draws`, which specifies how many samples to draw. However, there are a number of additional optional arguments that are worth knowing about:
In [ ]:
help(sample)
The `step` argument is what allows users to manually override the sampling algorithms used to fit the model. For example, if we wanted to use a slice sampler to sample the `early_mean` and `late_mean` variables, we could specify it:
In [ ]:
from pymc3 import Slice
with disaster_model:
    trace = sample(1000, step=Slice(vars=[early_mean, late_mean]))
In [ ]:
trace['late_mean']
The trace can also be sliced using the NumPy array slice notation `[start:stop:step]`.
In [ ]:
trace['late_mean', -5:]
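For instance (just another application of the same indexing, with illustrative burn-in and thinning values), we could discard the first 500 draws and keep every 10th remaining sample:

trace['late_mean', 500::10]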
In [ ]:
plt.hist(trace['late_mean']);
PyMC has its own plotting functionality dedicated to plotting MCMC output. For example, we can obtain a time series plot of the trace and a histogram using `traceplot`:
In [ ]:
from pymc3 import traceplot
traceplot(trace[500:], varnames=['early_mean', 'late_mean', 'switchpoint']);
For each parameter, the left-hand pane shows a histogram or density estimate of the samples, while the right-hand pane shows the samples in the order they were drawn (the trace). The trace is useful for evaluating and diagnosing the algorithm's performance, while the histogram is useful for visualizing the posterior.
For a non-graphical summary of the posterior, simply call the `summary` function.
In [ ]:
from pymc3 import summary
summary(trace[500:], varnames=['early_mean', 'late_mean'])