A fairly minimal reproducible example of Model Selection using DIC and WAIC.

DIC (`stats.dic`) and WAIC (`stats.waic`) are new additions to PyMC3, so this example shows their usage in a more concrete fashion, along with usage of the new `glm` submodule. For more information on Model Selection in PyMC3, and about DIC and WAIC, you could start with:
Note: if you hit Theano compilation errors, it may help to clear the cache with

$> theano-cache clear

and rerunning the notebook.

Package Requirements (shown as a conda-env YAML):
$> less conda_env_pymc3_examples.yml
name: pymc3_examples
channels:
- defaults
dependencies:
- python=3.4
- ipython
- ipython-notebook
- ipython-qtconsole
- numpy
- scipy
- matplotlib
- pandas
- seaborn
- patsy
- pip
$> conda env create --file conda_env_pymc3_examples.yml
$> source activate pymc3_examples
$> pip install --process-dependency-links git+https://github.com/pymc-devs/pymc3
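To verify the install before proceeding (a quick sanity check, not part of the original instructions):

$> python -c "import pymc3; print(pymc3.__version__)"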
In [1]:
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
In [2]:
from collections import OrderedDict
from time import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.optimize import fmin_powell
from scipy import integrate
import pymc3 as pm
import theano as thno
import theano.tensor as T
from IPython.html.widgets import interactive, fixed
# configure some basic options
sns.set(style="darkgrid", palette="muted")
pd.set_option('display.notebook_repr_html', True)
plt.rcParams['figure.figsize'] = 12, 8
rndst = np.random.RandomState(0)
In [43]:
def generate_data(n=20, p=0, a=1, b=1, c=0, latent_sigma_y=20):
    '''
    Create a toy dataset based on a very simple model that we might
    imagine is a noisy physical process:
        1. random x values within a range
        2. latent error aka inherent noise in y
        3. optionally create labelled outliers with larger noise

    Model form: y ~ a + bx + cx^2 + e

    NOTE: latent_sigma_y is used to create a normally distributed,
    'latent error' aka 'inherent noise' in the 'physical process'
    generating these values, rather than experimental measurement error.
    Please don't use the returned `latent_error` values in inferential
    models; they're returned in the dataframe for interest only.
    '''
    df = pd.DataFrame({'x': rndst.choice(np.arange(100), n, replace=False)})

    ## create linear or quadratic model
    df['y'] = a + b*(df['x']) + c*(df['x'])**2

    ## create latent noise and marked outliers
    df['latent_error'] = rndst.normal(0, latent_sigma_y, n)
    df['outlier_error'] = rndst.normal(0, latent_sigma_y*10, n)
    df['outlier'] = rndst.binomial(1, p, n)

    ## add noise, with extreme noise for marked outliers
    df['y'] += ((1-df['outlier']) * df['latent_error'])
    df['y'] += (df['outlier'] * df['outlier_error'])

    ## round
    for col in ['y', 'latent_error', 'outlier_error', 'x']:
        df[col] = np.round(df[col], 3)

    ## add label
    df['source'] = 'linear' if c == 0 else 'quadratic'

    ## create simple linspace for plotting true model
    plotx = np.linspace(df['x'].min() - np.ptp(df['x'])*.1,
                        df['x'].max() + np.ptp(df['x'])*.1, 100)
    ploty = a + b*plotx + c*plotx**2
    dfp = pd.DataFrame({'x': plotx, 'y': ploty})

    return df, dfp
def interact_dataset(n=20, p=0, a=-30, b=5, c=0, latent_sigma_y=20):
    '''
    Convenience function:
    Interactively generate dataset and plot
    '''
    df, dfp = generate_data(n, p, a, b, c, latent_sigma_y)

    g = sns.FacetGrid(df, size=8, hue='outlier', hue_order=[True, False],
                      palette=sns.color_palette('Set1'), legend_out=False)
    _ = g.map(plt.errorbar, 'x', 'y', 'latent_error', marker="o",
              ms=10, mec='w', mew=2, ls='', elinewidth=0.7).add_legend()
    _ = plt.plot(dfp['x'], dfp['y'], '--', alpha=0.8)

    plt.subplots_adjust(top=0.92)
    _ = g.fig.suptitle('Sketch of Data Generation ({})'.format(df['source'][0]),
                       fontsize=16)
def plot_datasets(df_lin, df_quad, dfp_lin, dfp_quad):
    '''
    Convenience function:
    Plot the two generated datasets in facets with generative model
    '''
    df = pd.concat((df_lin, df_quad), axis=0)

    g = sns.FacetGrid(col='source', hue='source', data=df, size=6,
                      sharey=False, legend_out=False)
    _ = g.map(plt.scatter, 'x', 'y', alpha=0.7, s=100, lw=2, edgecolor='w')
    _ = g.axes[0][0].plot(dfp_lin['x'], dfp_lin['y'], '--', alpha=0.6)
    _ = g.axes[0][1].plot(dfp_quad['x'], dfp_quad['y'], '--', alpha=0.6)
def plot_traces(traces, retain=1000):
    '''
    Convenience function:
    Plot traces with overlaid means and values
    '''
    ax = pm.traceplot(traces[-retain:], figsize=(12, len(traces.varnames)*1.5),
                      lines={k: v['mean'] for k, v in
                             pm.df_summary(traces[-retain:]).iterrows()})

    for i, mn in enumerate(pm.df_summary(traces[-retain:])['mean']):
        ax[i, 0].annotate('{:.2f}'.format(mn), xy=(mn, 0), xycoords='data',
                          xytext=(5, 10), textcoords='offset points', rotation=90,
                          va='bottom', fontsize='large', color='#AA0022')
def create_poly_modelspec(k=1):
    '''
    Convenience function:
    Create a polynomial modelspec string for patsy
    '''
    return ('y ~ 1 + x ' + ' '.join(['+ np.power(x,{})'.format(j)
                                     for j in range(2, k+1)])).strip()
def run_models(df, upper_order=5):
    '''
    Convenience function:
    Fit a range of pymc3 models of increasing polynomial complexity.
    Suggest limit to max order 5 since calculation time is exponential.
    '''
    models, traces = OrderedDict(), OrderedDict()

    for k in range(1, upper_order+1):

        nm = 'k{}'.format(k)
        fml = create_poly_modelspec(k)

        with pm.Model() as models[nm]:
            print('\nRunning: {}'.format(nm))
            pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Normal())

            # For speed, we're using Metropolis here
            traces[nm] = pm.sample(5000, pm.Metropolis())[1000::5]

    return models, traces
def plot_posterior_cr(models, traces, rawdata, xlims,
                      datamodelnm='linear', modelnm='k1'):
    '''
    Convenience function:
    Plot posterior predictions with credible regions shown as filled areas.
    '''
    ## Get traces and calc posterior prediction for npoints in x
    npoints = 100
    mdl = models[modelnm]
    trc = pm.trace_to_dataframe(traces[modelnm][-1000:])
    trc = trc[[str(v) for v in mdl.cont_vars[:-1]]]

    ordr = int(modelnm[-1:])
    x = np.linspace(xlims[0], xlims[1], npoints).reshape((npoints, 1))
    pwrs = np.ones((npoints, ordr+1)) * np.arange(ordr+1)
    X = x ** pwrs
    cr = np.dot(X, trc.T)

    ## Calculate credible regions and plot over the datapoints
    dfp = pd.DataFrame(np.percentile(cr, [2.5, 25, 50, 75, 97.5], axis=1).T,
                       columns=['025', '250', '500', '750', '975'])
    dfp['x'] = x

    pal = sns.color_palette('Greens')
    f, ax1d = plt.subplots(1, 1, figsize=(7, 7))
    f.suptitle('Posterior Predictive Fit -- Data: {} -- Model: {}'.format(
                    datamodelnm, modelnm), fontsize=16)
    plt.subplots_adjust(top=0.95)

    ax1d.fill_between(dfp['x'], dfp['025'], dfp['975'], alpha=0.5,
                      color=pal[1], label='CR 95%')
    ax1d.fill_between(dfp['x'], dfp['250'], dfp['750'], alpha=0.5,
                      color=pal[4], label='CR 50%')
    ax1d.plot(dfp['x'], dfp['500'], alpha=0.6, color=pal[5], label='Median')

    _ = plt.legend()
    _ = ax1d.set_xlim(xlims)
    _ = sns.regplot(x='x', y='y', data=rawdata, fit_reg=False,
                    scatter_kws={'alpha': 0.7, 's': 100, 'lw': 2, 'edgecolor': 'w'},
                    ax=ax1d)
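As a quick sanity check of the model factory above (a sketch, not an original cell), the patsy modelspec string produced for k=3 would be:

create_poly_modelspec(3)
# -> 'y ~ 1 + x + np.power(x,2) + np.power(x,3)'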
Throughout the rest of the Notebook, we'll use two toy datasets created by a linear and a quadratic model respectively, so that we can better evaluate how well the model selection criteria perform.

Right now, let's use an interactive session to play around with the data generation function in this Notebook, and get a feel for the range of data we could generate.
$$y_{i} = a + bx_{i} + cx_{i}^{2} + \epsilon_{i}$$

where:

$i \in n$ datapoints

$\epsilon \sim \mathcal{N}(0, latent\_sigma\_y)$
NOTE on outliers:

+ We can use the value `p` to set the (approximate) proportion of 'outliers' under a Bernoulli distribution.
+ These outliers have a 10x larger `latent_sigma_y` (see `outlier_error` in `generate_data` above).
+ The outliers are labelled in the returned dataset, which may be useful for other example Notebooks dealing with outlier detection, e.g. GLM-robust-with-outlier-detection.ipynb.
In [4]:
interactive(interact_dataset, n=[5,50,5], p=[0,.5,.05], a=[-50,50],
            b=[-10,10], c=[-3,3], latent_sigma_y=[0,1000,50])
Observe:

+ The plot above shows the `latent_error` as errorbars, but this is for interest only, since it shows the inherent noise in whatever 'physical process' we imagine created the data.

We can use the above interactive plot to get a feel for the effect of the params. Now we'll create 2 fixed datasets to use for the remainder of the Notebook.
In [5]:
n = 12
df_lin, dfp_lin = generate_data(n=n, p=0, a=-30, b=5, c=0, latent_sigma_y=40)
df_quad, dfp_quad = generate_data(n=n, p=0, a=-200, b=2, c=3, latent_sigma_y=500)
In [6]:
plot_datasets(df_lin, df_quad, dfp_lin, dfp_quad)
Observe:

+ We now have two datasets, `df_lin` and `df_quad`, created by a linear model and a quadratic model respectively.
In [7]:
## standardize x (mean 0, sd 1) for each dataset
dfs_lin = df_lin.copy()
dfs_lin['x'] = (df_lin['x'] - df_lin['x'].mean()) / df_lin['x'].std()
dfs_quad = df_quad.copy()
dfs_quad['x'] = (df_quad['x'] - df_quad['x'].mean()) / df_quad['x'].std()
In [8]:
## create shared axis limits (data range padded by 10%) for the later plots
dfs_lin_xlims = (dfs_lin['x'].min() - np.ptp(dfs_lin['x'])/10,
                 dfs_lin['x'].max() + np.ptp(dfs_lin['x'])/10)
dfs_lin_ylims = (dfs_lin['y'].min() - np.ptp(dfs_lin['y'])/10,
                 dfs_lin['y'].max() + np.ptp(dfs_lin['y'])/10)
dfs_quad_ylims = (dfs_quad['y'].min() - np.ptp(dfs_quad['y'])/10,
                  dfs_quad['y'].max() + np.ptp(dfs_quad['y'])/10)
This linear model is really simple and conventional, an OLS with L2 constraints (Ridge Regression):
$$y = a + bx + \epsilon$$
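As a side note (a standard result, not spelled out in the original): placing a Normal prior $\mathcal{N}(0, \sigma_{b}^{2})$ on each coefficient makes the MAP estimate coincide with the ridge solution,

$$\hat{\beta}_{MAP} = \arg\min_{\beta}\left\{\sum_{i}(y_{i} - \beta_{0} - \beta_{1}x_{i})^{2} + \lambda\lVert\beta\rVert_{2}^{2}\right\}, \qquad \lambda = \frac{\sigma_{y}^{2}}{\sigma_{b}^{2}}$$

so the wide $\mathcal{N}(0, 100^{2})$ priors below act as a very mild L2 penalty.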
In [9]:
with pm.Model() as mdl_ols:

    ## define Normal priors to give Ridge regression
    b0 = pm.Normal('b0', mu=0, sd=100)
    b1 = pm.Normal('b1', mu=0, sd=100)

    ## define Linear model
    yest = b0 + b1 * df_lin['x']

    ## define Normal likelihood with HalfCauchy noise (fat tails, equiv to HalfT 1DoF)
    sigma_y = pm.HalfCauchy('sigma_y', beta=10)
    likelihood = pm.Normal('likelihood', mu=yest, sd=sigma_y, observed=df_lin['y'])

    ## sample using NUTS
    traces_ols = pm.sample(2000)
In [10]:
plot_traces(traces_ols, retain=1000)
Observe:
PyMC3 has a quite recently developed method, `glm`, for defining models using a `patsy`-style formula syntax. This seems really useful, especially for defining simple regression models in fewer lines of code.

I couldn't find a direct comparison in the examples, so before I launch into using `glm` for the rest of the Notebook, here's the same OLS model as above, defined using `glm`.
In [11]:
with pm.Model() as mdl_ols_glm:
    # setup model with Normal likelihood (which uses HalfCauchy for error prior)
    pm.glm.GLM.from_formula('y ~ 1 + x', df_lin, family=pm.glm.families.Normal())
    traces_ols_glm = pm.sample(2000)
In [12]:
plot_traces(traces_ols_glm, retain=1000)
Observe:

+ The output parameters are of course named differently to the custom naming before. Now we have:

    b0 == Intercept
    b1 == x
    sigma_y_log == sd_log
    sigma_y == sd

+ The `glm`-defined model appears to behave in a very similar way, and finds the same parameter values as the conventionally-defined model - any differences are due to the random nature of the sampling.
+ We can quite happily use the `glm` syntax for further models below, since it allows us to create a small model factory very easily.

Back to the real purpose of this Notebook: demonstrate model selection.
First, let's create and run a set of polynomial models on each of our toy datasets. By default this is for models of order 1 to 5.
In [44]:
models_lin, traces_lin = run_models(dfs_lin, 5)
In [45]:
models_quad, traces_quad = run_models(dfs_quad, 5)
In [46]:
dfll = pd.DataFrame(index=['k1','k2','k3','k4','k5'], columns=['lin','quad'])
dfll.index.name = 'model'

for nm in dfll.index:
    dfll.loc[nm, 'lin'] = -models_lin[nm].logp(
        pm.df_summary(traces_lin[nm], varnames=traces_lin[nm].varnames)['mean'].to_dict())
    dfll.loc[nm, 'quad'] = -models_quad[nm].logp(
        pm.df_summary(traces_quad[nm], varnames=traces_quad[nm].varnames)['mean'].to_dict())

dfll = pd.melt(dfll.reset_index(), id_vars=['model'], var_name='poly',
               value_name='log_likelihood')
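To be precise about what the cell above computes (a clarifying note, not in the original): `model.logp` evaluates the joint log probability of parameters and data, so the quantity plotted below is the negative joint log probability at the posterior mean $\hat{\theta}_{k}$ of each model $M_{k}$:

$$\text{log\_likelihood}_{k} = -\log p\big(y, \hat{\theta}_{k} \;\big|\; M_{k}\big)$$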
In [47]:
g = sns.factorplot(x='model', y='log_likelihood', col='poly', hue='poly',
                   data=dfll, kind='bar', size=6)
Observe:
+ Just for the linear generated data, let's take an interactive look at the posterior predictive fit for the models k1 through k5.
+ As indicated by the likelihood plots above, the higher-order polynomial models exhibit some quite wild swings in the function in order to (over)fit the data.
In [30]:
interactive(plot_posterior_cr, models=fixed(models_lin), traces=fixed(traces_lin),
            rawdata=fixed(dfs_lin), xlims=fixed(dfs_lin_xlims),
            datamodelnm=fixed('linear'), modelnm=['k1','k2','k3','k4','k5'])
The Deviance Information Criterion (DIC) is a fairly unsophisticated method for comparing the deviance of likelihood across the sample traces of a model run. However, this simplicity apparently yields quite good results in a variety of cases; see the discussion (well worth reading) in (Spiegelhalter et al. 2014).
DIC has recently been added to PyMC3, so let's see what it tells us about our model fits for both datasets.
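To spell out the arithmetic used in the manual calculation below: writing the deviance as $D(\theta) = -2 \log p(y \,|\, \theta)$, DIC combines the mean deviance over the trace, $\bar{D}$, with the deviance at the posterior mean, $D(\bar{\theta})$:

$$\mathrm{DIC} = \bar{D} + p_{D}, \qquad p_{D} = \bar{D} - D(\bar{\theta})$$

so that $\mathrm{DIC} = 2\bar{D} - D(\bar{\theta})$, where the effective number of parameters $p_{D}$ penalizes model complexity.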
In [48]:
## mean deviance: average the joint log probability over the trace, times -2
dftrc_lin = pm.trace_to_dataframe(traces_lin['k1'], include_transformed=True)
trc_lin_logp = dftrc_lin.apply(lambda x: models_lin['k1'].logp(x.to_dict()), axis=1)
mean_deviance = -2 * trc_lin_logp.mean(0)
mean_deviance
Out[48]:
In [49]:
## deviance evaluated at the posterior mean of the parameters
deviance_at_mean = -2 * models_lin['k1'].logp(dftrc_lin.mean(0).to_dict())
deviance_at_mean
Out[49]:
In [50]:
## DIC = 2 * mean deviance - deviance at the mean (see formula above)
dic_k1 = 2 * mean_deviance - deviance_at_mean
dic_k1
Out[50]:
In [51]:
pm.stats.dic(model=models_lin['k1'], trace=traces_lin['k1'])
Out[51]:
Observe:
In [52]:
dfdic = pd.DataFrame(index=['k1','k2','k3','k4','k5'], columns=['lin','quad'])
dfdic.index.name = 'model'

for nm in dfdic.index:
    dfdic.loc[nm, 'lin'] = pm.stats.dic(traces_lin[nm], models_lin[nm])
    dfdic.loc[nm, 'quad'] = pm.stats.dic(traces_quad[nm], models_quad[nm])

dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly',
                value_name='dic')

g = sns.factorplot(x='model', y='dic', col='poly', hue='poly', data=dfdic,
                   kind='bar', size=6)
Observe:
The Widely Applicable Information Criterion (WAIC), a.k.a. the Watanabe-Akaike Information Criterion, is another simple option for calculating the goodness-of-fit of a model using numerical techniques. See (Watanabe 2013) for details.

WAIC has also recently been added to PyMC3, so let's see what it tells us about our model fits for both datasets.
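For reference (the standard definition; the PyMC3 internals may differ in detail): WAIC starts from the log pointwise predictive density over posterior samples $\theta^{s}$, $s = 1 \dots S$, and penalizes it by an effective number of parameters computed from the posterior variance of the pointwise log predictive density:

$$\mathrm{lppd} = \sum_{i=1}^{n}\log\left(\frac{1}{S}\sum_{s=1}^{S}p(y_{i}\,|\,\theta^{s})\right)$$

$$\mathrm{WAIC} = -2\left(\mathrm{lppd} - p_{\mathrm{WAIC}}\right), \qquad p_{\mathrm{WAIC}} = \sum_{i=1}^{n}\mathrm{Var}_{\theta \sim post}\left[\log p(y_{i}\,|\,\theta)\right]$$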
In [53]:
pm.stats.waic(model=models_lin['k1'], trace=traces_lin['k1'])
Out[53]:
Observe:
In [54]:
dfwaic = pd.DataFrame(index=['k1','k2','k3','k4','k5'], columns=['lin','quad'])
dfwaic.index.name = 'model'

for nm in dfwaic.index:
    dfwaic.loc[nm, 'lin'] = pm.stats.waic(traces_lin[nm], models_lin[nm])[0]
    dfwaic.loc[nm, 'quad'] = pm.stats.waic(traces_quad[nm], models_quad[nm])[0]

dfwaic = pd.melt(dfwaic.reset_index(), id_vars=['model'], var_name='poly',
                 value_name='waic')

g = sns.factorplot(x='model', y='waic', col='poly', hue='poly', data=dfwaic,
                   kind='bar', size=6)
Observe:

+ For these particular models and data, I would prefer to use the DIC scores in order to choose models.
The following text is lifted directly from the JakeVDP blogpost:
The Bayesian approach proceeds very differently. Recall that the Bayesian model involves computing the odds ratio between two models:
$$O_{21}=\frac{P(M_{2} \;|\; D)}{P(M_{1} \;|\; D)}=\frac{P(D \;|\; M_{2})}{P(D \;|\; M_{1})}\frac{P(M_{2})}{P(M_{1})}$$

Here the ratio $\frac{P(M_{2})}{P(M_{1})}$ is the prior odds ratio, and is often assumed to be equal to 1 if no compelling prior evidence favors one model over another. The ratio $\frac{P(D \;|\; M_{2})}{P(D \;|\; M_{1})}$ is the Bayes factor, and is the key to Bayesian model selection.
The Bayes factor can be computed by evaluating the integral over the parameter likelihood:
$$P(D \;|\; M)=\int_{\Omega}P(D \;|\; \theta, M) \; P(\theta \;|\; M) \; d\theta$$

This integral is over the entire parameter space of the model, and thus can be extremely computationally intensive, especially as the dimension of the model grows beyond a few.
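As a toy illustration of why this integral quickly becomes expensive (a sketch using a hypothetical one-parameter model; the data and names here are invented, not from the notebook): even brute-force quadrature over a single parameter requires many likelihood evaluations, and a grid over $d$ parameters scales exponentially in $d$.

import numpy as np
from scipy import stats, integrate

## hypothetical model M: y ~ N(mu, 1) with prior mu ~ N(0, 10)
y_obs = np.array([1.2, 0.8, 1.5, 0.9])

def integrand(mu):
    ## likelihood of all datapoints at mu, times the prior density at mu
    return np.prod(stats.norm.pdf(y_obs, loc=mu, scale=1.)) * stats.norm.pdf(mu, 0., 10.)

## evidence P(D|M) by 1D quadrature -- feasible here, hopeless in high dimensions
evidence, abserr = integrate.quad(integrand, -50, 50)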
Example originally contributed by Jonathan Sedar 2016-01-09 github.com/jonsedar