In [1]:
%matplotlib inline
import numpy as np
import theano.tensor as tt
import pymc3 as pm
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_context('notebook')
In [2]:
with pm.Model() as model:
    # Model definition
    pass
We discuss RVs further below, but let's create a simple model to explore the Model class.
In [3]:
with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sd=1)
    obs = pm.Normal('obs', mu=mu, sd=1, observed=np.random.randn(100))
In [4]:
model.basic_RVs
Out[4]:
In [5]:
model.free_RVs
Out[5]:
In [6]:
model.observed_RVs
Out[6]:
In [7]:
model.logp({'mu': 0})
Out[7]:
Warning
It's worth highlighting one of the counter-intuitive design choices with logp.
The API makes logp look like an attribute, when it actually puts together a function based on the current state of the model.
The current design is easy to maintain: it performs poorly if the model state stays constant (the function is rebuilt on every call) but well when the state keeps changing. By design we assume the Model is not static; in our experience that is the common case and it avoids stale results.
If you need to use logp in an inner loop and the model is static, simply cache it with something like logp = model.logp, as below. You can see the caching effect in the speed-up below.
In [8]:
%timeit model.logp({mu: 0.1})
logp = model.logp
%timeit logp({mu: 0.1})
Every probabilistic program consists of observed and unobserved Random Variables (RVs). Observed RVs are defined via likelihood distributions, while unobserved RVs are defined via prior distributions. In PyMC3, probability distributions are available from the main module space:
In [9]:
help(pm.Normal)
In the PyMC3 module, the structure for probability distributions looks like this:
In [10]:
dir(pm.distributions.mixture)
Out[10]:
Every unobserved RV has the following calling signature: name (str), parameter keyword arguments. Thus, a normal prior can be defined in a model context like this:
In [11]:
with pm.Model():
    x = pm.Normal('x', mu=0, sd=1)
As with the model, we can evaluate its logp:
In [12]:
x.logp({'x': 0})
Out[12]:
Observed RVs are defined just like unobserved RVs but require data to be passed into the observed keyword argument:
In [13]:
with pm.Model():
    obs = pm.Normal('x', mu=0, sd=1, observed=np.random.randn(100))
observed supports lists, numpy.ndarray, theano and pandas data structures.
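For example, a pandas Series or a theano shared variable can be passed directly as observed data; a brief sketch with made-up data:
import pandas as pd
import theano

df = pd.DataFrame({'y': np.random.randn(100)})

with pm.Model():
    # pandas objects are accepted directly
    obs_pd = pm.Normal('obs_pd', mu=0, sd=1, observed=df['y'])
    # so are theano shared variables, whose values can be swapped out later
    obs_tt = pm.Normal('obs_tt', mu=0, sd=1, observed=theano.shared(df['y'].values))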
PyMC3 allows you to freely do algebra with RVs in all kinds of ways:
In [14]:
with pm.Model():
    x = pm.Normal('x', mu=0, sd=1)
    y = pm.Gamma('y', alpha=1, beta=1)
    plus_2 = x + 2
    summed = x + y
    squared = x**2
    sined = pm.math.sin(x)
While these transformations work seamlessly, their results are not stored automatically. Thus, if you want to keep track of a transformed variable, you have to use pm.Deterministic:
In [15]:
with pm.Model():
    x = pm.Normal('x', mu=0, sd=1)
    plus_2 = pm.Deterministic('x plus 2', x + 2)
Note that plus_2 can be used in exactly the same way as above; we are only telling PyMC3 to keep track of this RV for us.
In order to sample models more efficiently, PyMC3 automatically transforms bounded RVs to be unbounded.
In [16]:
with pm.Model() as model:
    x = pm.Uniform('x', lower=0, upper=1)
When we look at the RVs of the model, we would expect to find x there, however:
In [17]:
model.free_RVs
Out[17]:
x_interval__ represents x transformed to accept parameter values between -inf and +inf. In the case of an upper and a lower bound, a LogOdds transform is applied. Sampling in this transformed space makes it easier for the sampler. PyMC3 also keeps track of the non-transformed, bounded parameters. These are common deterministics (see above):
In [18]:
model.deterministics
Out[18]:
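As a rough sketch of what the interval (log-odds) transform does for the Uniform(0, 1) variable above (illustrative only, not PyMC3's internal code):
lower, upper = 0., 1.
x_val = 0.3
x_interval = np.log((x_val - lower) / (upper - x_val))        # maps (lower, upper) to (-inf, inf)
x_back = lower + (upper - lower) / (1 + np.exp(-x_interval))  # inverse mapping recovers x_val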
When displaying results, PyMC3 will usually hide transformed parameters. You can pass the include_transformed=True parameter to many functions to see the transformed parameters that are used for sampling.
You can also turn transforms off:
In [19]:
with pm.Model() as model:
    x = pm.Uniform('x', lower=0, upper=1, transform=None)
    print(model.free_RVs)
Above we have defined scalar RVs. In many models you want multiple RVs, and it is tempting to create them in a list:
In [20]:
with pm.Model():
    x = [pm.Normal('x_{}'.format(i), mu=0, sd=1) for i in range(10)] # bad
However, even though this works, it is quite slow and not recommended. Instead, use the shape kwarg:
In [21]:
with pm.Model() as model:
    x = pm.Normal('x', mu=0, sd=1, shape=10) # good
x is now a random vector of length 10. We can index into it or do linear algebra operations on it:
In [22]:
with model:
    y = x[0] * x[1] # full indexing is supported
    x.dot(x.T) # Linear algebra is supported
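As a further sketch of how vector-valued RVs combine with data, a coefficient vector declared with shape can enter a regression through pm.math.dot (X_data and y_data below are made-up placeholder arrays):
X_data = np.random.randn(100, 10)  # hypothetical predictor matrix
y_data = np.random.randn(100)      # hypothetical outcomes

with pm.Model():
    beta = pm.Normal('beta', mu=0, sd=1, shape=10)
    mu_y = pm.math.dot(X_data, beta)  # matrix-vector product of data and RV vector
    pm.Normal('y', mu=mu_y, sd=1, observed=y_data)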
Every RV is initialized with a test value, which can be inspected via its tag and overridden with the testval kwarg:
In [23]:
with pm.Model():
    x = pm.Normal('x', mu=0, sd=1, shape=5)
x.tag.test_value
Out[23]:
In [24]:
with pm.Model():
    x = pm.Normal('x', mu=0, sd=1, shape=5, testval=np.random.randn(5))
x.tag.test_value
Out[24]:
This technique is quite useful to identify problems with model specification or initialization.
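For instance, the log-probability of every variable evaluated at the test point can be inspected, which helps spot -inf or nan contributions. A brief sketch, assuming a PyMC3 version that provides Model.check_test_point():
with pm.Model() as model:
    x = pm.Normal('x', mu=0, sd=1, shape=5, testval=np.random.randn(5))

model.check_test_point()  # per-variable logp evaluated at the test values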
Once we have defined our model, we have to perform inference to approximate the posterior distribution. PyMC3 supports two broad classes of inference: sampling and variational inference.
The main entry point to MCMC sampling algorithms is via the pm.sample()
function. By default, this function tries to auto-assign the right sampler(s) and auto-initialize if you don't pass anything.
In [25]:
with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sd=1)
    obs = pm.Normal('obs', mu=mu, sd=1, observed=np.random.randn(100))
    trace = pm.sample(1000, tune=500)
As you can see, on a continuous model, PyMC3 assigns the NUTS sampler, which is very efficient even for complex models. PyMC3 also runs variational inference (i.e. ADVI) to find good starting parameters for the sampler. Here we draw 1000 samples from the posterior and allow the sampler to adjust its parameters in an additional 500 iterations. These 500 samples are discarded by default:
In [26]:
len(trace)
Out[26]:
You can also run multiple chains in parallel using the njobs kwarg:
In [27]:
with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sd=1)
    obs = pm.Normal('obs', mu=mu, sd=1, observed=np.random.randn(100))
    trace = pm.sample(njobs=4)
Note that we are now drawing 2000 samples: 500 samples in each of 4 chains. The 500 tuning samples are discarded by default.
In [28]:
trace['mu'].shape
Out[28]:
In [29]:
trace.nchains
Out[29]:
In [30]:
trace.get_values('mu', chains=1).shape # get values of a single chain
Out[30]:
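The per-chain values can also be kept separate rather than concatenated; a brief sketch:
chain_values = trace.get_values('mu', combine=False)  # one array per chain
[v.shape for v in chain_values]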
PyMC3 offers a variety of other samplers, found in pm.step_methods.
In [31]:
list(filter(lambda x: x[0].isupper(), dir(pm.step_methods)))
Out[31]:
Commonly used step methods besides NUTS are Metropolis and Slice. For almost all continuous models, NUTS should be preferred. There are hard-to-sample models for which NUTS will be very slow, causing many users to use Metropolis instead. This practice, however, is rarely successful. NUTS is fast on simple models but can be slow if the model is very complex or badly initialized. In the case of a complex model that is hard for NUTS, Metropolis, while faster, will have a very low effective sample size or will not converge properly at all. A better approach is to improve the initialization of NUTS, or to reparameterize the model.
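As an illustration of reparameterization, a hierarchical scale parameter can be rewritten in non-centered form, which often samples much better with NUTS. A sketch with made-up names, not code from this notebook:
with pm.Model():
    sigma = pm.HalfNormal('sigma', sd=1)
    # Centered form (often hard for NUTS when sigma gets small):
    #     theta = pm.Normal('theta', mu=0, sd=sigma, shape=10)
    # Non-centered form: sample a standard normal and scale it deterministically.
    theta_raw = pm.Normal('theta_raw', mu=0, sd=1, shape=10)
    theta = pm.Deterministic('theta', sigma * theta_raw)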
For completeness, other sampling methods can be passed to sample:
In [32]:
with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sd=1)
    obs = pm.Normal('obs', mu=mu, sd=1, observed=np.random.randn(100))
    step = pm.Metropolis()
    trace = pm.sample(1000, step=step)
You can also assign variables to different step methods.
In [33]:
with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sd=1)
    sd = pm.HalfNormal('sd', sd=1)
    obs = pm.Normal('obs', mu=mu, sd=sd, observed=np.random.randn(100))
    step1 = pm.Metropolis(vars=[mu])
    step2 = pm.Slice(vars=[sd])
    trace = pm.sample(10000, step=[step1, step2], njobs=4)
The most commonly used plot for analyzing sampling results is the trace plot:
In [34]:
pm.traceplot(trace);
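A numerical companion to the trace plot is pm.summary(), which tabulates posterior statistics and sampler diagnostics for the same trace; a brief sketch:
pm.summary(trace)  # means, standard deviations, credible intervals and diagnostics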
Another common metric to look at is R-hat, also known as the Gelman-Rubin statistic:
In [35]:
pm.gelman_rubin(trace)
Out[35]:
These are also part of the forestplot:
In [36]:
pm.forestplot(trace);
Finally, for a plot of the posterior that is inspired by the book Doing Bayesian Data Analysis, you can use pm.plot_posterior:
In [37]:
pm.plot_posterior(trace);
For high-dimensional models it becomes cumbersome to look at all parameters' traces. When using NUTS we can look at the energy plot to assess problems of convergence:
In [38]:
with pm.Model() as model:
    x = pm.Normal('x', mu=0, sd=1, shape=100)
    trace = pm.sample(njobs=4)
pm.energyplot(trace);
For variational inference, the main entry point is pm.fit(). While these methods are usually much faster than MCMC sampling, they are often less accurate and can lead to biased inference:
In [39]:
with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sd=1)
    sd = pm.HalfNormal('sd', sd=1)
    obs = pm.Normal('obs', mu=mu, sd=sd, observed=np.random.randn(100))
    approx = pm.fit()
The returned Approximation object has various capabilities, like drawing samples from the approximated posterior, which we can analyse like a regular sampling run:
In [40]:
approx.sample(500)
Out[40]:
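Since the object returned by approx.sample() behaves like a trace, the same plotting functions can be applied to it; a brief sketch reusing the approximation fit above:
approx_trace = approx.sample(2000)
pm.plot_posterior(approx_trace);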
The variational submodule offers a lot of flexibility in which VI method to use and follows an object-oriented design. For example, full-rank ADVI estimates a full covariance matrix:
In [41]:
mu = pm.floatX([0., 0.])
cov = pm.floatX([[1, .5], [.5, 1.]])
with pm.Model() as model:
    pm.MvNormal('x', mu=mu, cov=cov, shape=2)
    approx = pm.fit(method='fullrank_advi')
An equivalent expression using the object-oriented interface is:
In [42]:
with pm.Model() as model:
    pm.MvNormal('x', mu=mu, cov=cov, shape=2)
    approx = pm.FullRankADVI().fit()
In [43]:
plt.figure()
trace = approx.sample(10000)
sns.kdeplot(trace['x'])
Out[43]:
Stein Variational Gradient Descent (SVGD) uses particles to estimate the posterior:
In [44]:
w = pm.floatX([.2, .8])
mu = pm.floatX([-.3, .5])
sd = pm.floatX([.1, .1])
with pm.Model() as model:
    pm.NormalMixture('x', w=w, mu=mu, sd=sd)
    approx = pm.fit(method=pm.SVGD(n_particles=200, jitter=1.))
In [45]:
plt.figure()
trace = approx.sample(10000)
sns.distplot(trace['x']);
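Convergence of a variational fit can be judged from the objective (ELBO) history recorded during fitting. A sketch using the object-oriented interface; the names advi and approx are our own, and the hist attribute is assumed to be available on the inference object as in recent PyMC3 versions:
with pm.Model():
    mu = pm.Normal('mu', mu=0, sd=1)
    obs = pm.Normal('obs', mu=mu, sd=1, observed=np.random.randn(100))
    advi = pm.ADVI()
    approx = advi.fit(10000)

plt.figure()
plt.plot(advi.hist)  # objective history; it should flatten out once the fit has converged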
For more information on variational inference, see these examples.
We can also draw samples from the posterior predictive distribution of a fitted model, which is useful for model checking and prediction:
In [46]:
data = np.random.randn(100)
with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sd=1)
    sd = pm.HalfNormal('sd', sd=1)
    obs = pm.Normal('obs', mu=mu, sd=sd, observed=data)
    trace = pm.sample()
In [47]:
with model:
    post_pred = pm.sample_ppc(trace, samples=500, size=len(data))
sample_ppc() returns a dict with a key for every observed node:
In [48]:
post_pred['obs'].shape
Out[48]:
In [49]:
plt.figure()
ax = sns.distplot(post_pred['obs'].mean(axis=1), label='Posterior predictive means')
ax.axvline(data.mean(), color='r', ls='--', label='True mean')
ax.legend()
Out[49]:
In many cases you want to predict on unseen / hold-out data. This is especially relevant in Probabilistic Machine Learning and Bayesian Deep Learning. While we plan to improve the API in this regard, this can currently be achieved with a theano.shared variable. These are theano tensors whose values can be changed later. Otherwise they can be passed into PyMC3 just like any other numpy array or tensor.
In [50]:
import theano
x = np.random.randn(100)
y = x > 0
x_shared = theano.shared(x)
y_shared = theano.shared(y)
with pm.Model() as model:
    coeff = pm.Normal('x', mu=0, sd=1)
    logistic = pm.math.sigmoid(coeff * x_shared)
    pm.Bernoulli('obs', p=logistic, observed=y_shared)
    trace = pm.sample()
Now assume we want to predict on unseen data. For this we have to change the values of x_shared and y_shared. Theoretically we don't need to set y_shared, as we want to predict it, but it has to match the shape of x_shared.
In [51]:
x_shared.set_value([-1, 0, 1.])
y_shared.set_value([0, 0, 0]) # dummy values
with model:
    post_pred = pm.sample_ppc(trace, samples=500)
In [52]:
post_pred['obs'].mean(axis=0)
Out[52]:
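The mean over posterior predictive samples gives, for each new x value, the predicted probability that y is 1; a brief sketch of turning these into hard class predictions:
pred_prob = post_pred['obs'].mean(axis=0)   # P(y = 1) for each of the three new x values
pred_class = (pred_prob > 0.5).astype(int)  # threshold at 0.5 for a hard 0/1 prediction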