Bayesian data analysis - PyStan demos

Author: Aki Vehtari, Tuomas Sivula, Co-authors: Lassi Meronen, Pellervo Ruponen

Last modified 2018-Dec.

License: CC-BY

This notebook contains several examples of how to use Stan in python with PyStan. This notebook assumes basic knowledge of Bayesian inference and MCMC. The Stan models are stored in separate .stan-files. The examples are related to Bayesian data analysis course by Aki Vehtari.

Setting up the PyStan environment

We begin by importing the PyStan module as well at the matplotlib module for basic graphics facilities.

We also import some utilities by Michael Betancourt and introduced in PyStan workflow case study in module stan_utility.


In [73]:
%matplotlib inline

import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt

import pystan
import arviz as az # For visualization and loo

In [74]:
# add utilities directory to path
import os, sys
util_path = os.path.abspath(os.path.join(os.path.pardir, 'utilities_and_data'))
if util_path not in sys.path and os.path.exists(util_path):
    sys.path.insert(0, util_path)

# import from utilities
import stan_utility

In [75]:
# edit default plot settings
plt.rc('font', size=12)
az.style.use('seaborn-darkgrid')
mpl.rcParams['figure.figsize'] = [7.0, 5.4]

Bernoulli model

Toy data with sequence of failures (0) and successes (1). We would like to learn about the unknown probability of success. In following, we denote this probability with theta.


In [76]:
data = dict(N=10, y=[0,1,0,0,1,1,1,0,1,0])

Bernoulli model with a Beta(1,1) (uniform) prior


In [77]:
with open('bern.stan') as file:
    print(file.read())


// Bernoulli model
data {
  int<lower=0> N;
  int<lower=0,upper=1> y[N];
}
parameters {
  real<lower=0,upper=1> theta;
}
model {
  theta ~ beta(1,1);
  y ~ bernoulli(theta);
}

Given the Stan program we then use the compile_model method of stan_utility module to compile the Stan program into a C++ executable. This utility function automatically saves a cached version of the compiled model to the disk for possible future use.


In [78]:
model = stan_utility.compile_model('bern.stan')


Using cached StanModel

Getting the model again with the utility function uses the cached version automatically.


In [79]:
del model
model = stan_utility.compile_model('bern.stan')


Using cached StanModel

Sample from the posterior, show the summary and plot the histogram of the posterior draws. In addition, plot estimated posterior density with arviz posterior plot and 95% credible interval. We recommend explicitly specifying the seed of Stan's random number generator, as we have done here, so that we can reproduce these exactly results in the future, at least when using the same machine, operating system, and interface. This is especially helpful for the more subtle pathologies that may not always be found, which results in seemingly stochastic behavior.


In [80]:
fit = model.sampling(data=data, seed=194838)
print(fit)
samples = fit.extract(permuted=True)

plt.hist(samples['theta'], 33, color='lightblue', edgecolor='grey')
plt.xlabel(r'Probability of success (theta)')
plt.ylabel('Amount of draws')

# Plot with arviz plot_posterior.
#    - round_to: sets accuracy of values in plot
az.plot_posterior(fit, var_names=['theta'], credible_interval=0.95, round_to=2)
plt.xlabel(r'Probability of success (theta)')
plt.ylabel('Relative density');


Inference for Stan model: anon_model_4586f2dc76604848221fafe6413762a9.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

        mean se_mean     sd   2.5%    25%    50%    75%  97.5%  n_eff   Rhat
theta    0.5  3.7e-3   0.14   0.24    0.4    0.5    0.6   0.76   1420    1.0
lp__   -8.84    0.02   0.71 -10.87  -9.04  -8.57  -8.37  -8.32   1690    1.0

Samples were drawn using NUTS at Wed Dec 19 11:19:21 2018.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

Binomial model

Instead of sequence of 0's and 1's, we can summarize the data with the number of experiments and the number successes:


In [81]:
data = dict(N=10, y=7)

And then we use Binomial model with Beta(1,1) prior for the probability of success.


In [82]:
with open('binom.stan') as file:
    print(file.read())


// Binomial model with beta(1,1,) prior
data {
  int<lower=0> N;
  int<lower=0> y;
}
parameters {
  real<lower=0,upper=1> theta;
}
model {
  theta ~ beta(1,1);
  y ~ binomial(N,theta);
}

Sample from the posterior and plot the posterior. The histogram should look similar as in the Bernoulli case, except that now the number of successes is 7 instead of 5.


In [83]:
model = stan_utility.compile_model('binom.stan')
fit = model.sampling(data=data, seed=194838)
samples = fit.extract(permuted=True)
plt.hist(samples['theta'], 33, color='lightblue', edgecolor='grey')
plt.xlabel(r'Probability of success (theta)')
plt.ylabel('Amount of draws')

az.plot_posterior(fit, var_names=['theta'], credible_interval=0.95, round_to=2)
plt.xlabel(r'Probability of success (theta)')
plt.ylabel('Relative density');


Using cached StanModel

Now we re-run the model with a new data (now number of successes being 70 out of 100). The previously compiled Stan program is re-used making the re-use faster.


In [84]:
data = dict(N=100, y=70)
fit = model.sampling(data=data, seed=194838)
samples = fit.extract(permuted=True)
plt.hist(samples['theta'], 33, color='lightblue', edgecolor='grey') # histogram
plt.xlabel(r'Probability of success (theta)')
plt.ylabel('Amount of draws')

az.plot_posterior(fit, var_names=['theta'], credible_interval=0.95, round_to=2)
plt.xlabel(r'Probability of success (theta)')
plt.ylabel('Relative density');


Explicit transformation of variables

In the above examples the probability of success θ was declared as:

real<lower=0,upper=1> theta;

Stan makes automatic transformation of the variable to the unconstrained space using logit transofrmation for interval constrained and log transformation for half constraints.

The following example shows how we can also make an explicit transformation and use binomial_logit function which takes the unconstrained parameter as an argument and uses logit transformation internally. This form can be useful for better numerical stability.


In [85]:
with open('binomb.stan') as file:
    print(file.read())


// Binomial model with a roughly uniform prior for
// the probability of success (theta)
data {
  int<lower=0> N;
  int<lower=0> y;
}
parameters {
  real alpha;
}
transformed parameters {
  real theta;
  theta = inv_logit(alpha);
}
model {
  // roughly auniform prior for the number of successes
  alpha ~ normal(0,1.5);
  y ~ binomial_logit(N,alpha);
}


In [86]:
model = stan_utility.compile_model('binomb.stan')


Using cached StanModel

Here we have used Gaussian prior in the unconstrained space, which produces close to uniform prior for theta.

Sample from the posterior and plot the posterior. The histogram should look similar as with the previous model


In [87]:
data = dict(N=100, y=70)
fit = model.sampling(data=data, seed=194838)
samples = fit.extract(permuted=True)
plt.hist(samples['theta'], 33, color='lightblue', edgecolor='grey')
plt.xlabel(r'Probability of success (theta)')
plt.ylabel('Amount of draws')

az.plot_posterior(fit, var_names=['theta'], credible_interval=0.95, round_to=2)
plt.xlabel(r'Probability of success (theta)')
plt.ylabel('Relative density');


Comparison of two groups with Binomial

An experiment was performed to estimate the effect of beta-blockers on mortality of cardiac patients. A group of patients were randomly assigned to treatment and control groups:

  • out of 674 patients receiving the control, 39 died
  • out of 680 receiving the treatment, 22 died

Data:


In [88]:
data = dict(N1=674, y1=39, N2=680, y2=22)

To analyse whether the treatment is useful, we can use Binomial model for both groups and compute odds-ratio. If the odds-ratio is less than 1, it means that the treatment is useful.


In [89]:
with open('binom2.stan') as file:
    print(file.read())


//  Comparison of two groups with Binomial
data {
  int<lower=0> N1;
  int<lower=0> y1;
  int<lower=0> N2;
  int<lower=0> y2;
}
parameters {
  real<lower=0,upper=1> theta1;
  real<lower=0,upper=1> theta2;
}
model {
  theta1 ~ beta(1,1);
  theta2 ~ beta(1,1);
  y1 ~ binomial(N1,theta1);
  y2 ~ binomial(N2,theta2);
}
generated quantities {
  real oddsratio;
  oddsratio = (theta2/(1-theta2))/(theta1/(1-theta1));
}

Sample from the posterior and plot the posterior for the odds-ratio. We plot the posterior both as a histogram and and as an estimated probability density function. Using ArviZ, we can easily see the probability of the odds-ratio being less than 1.


In [90]:
model = stan_utility.compile_model('binom2.stan')
fit = model.sampling(data=data, seed=194838)
samples = fit.extract(permuted=True)
plt.hist(samples['oddsratio'], 33, color='lightblue', edgecolor='grey')
plt.xlabel('oddsratio')
plt.ylabel('Amount of draws')

# With az.plot_posterior we use parameter ref_val = 1
# since treatment is beneficial if oddsratio is smaller than 1.
az.plot_posterior(fit, var_names=['oddsratio'], credible_interval=0.95, round_to=2, ref_val=1)
plt.xlabel('oddsratio')
plt.ylabel('Relative density')
plt.savefig('odd-ratio.pdf')


Using cached StanModel

Linear Gaussian model

The following file has Kilpisjärvi summer month temperatures 1952-2013:


In [91]:
data_path = os.path.abspath(
    os.path.join(
        os.path.pardir,
        'utilities_and_data',
        'kilpisjarvi-summer-temp.csv'
    )
)
d = np.loadtxt(data_path, dtype=np.double, delimiter=';', skiprows=1)
x = d[:, 0]
y = d[:, 4]
N = len(x)
xpred = 2016

Plot the data


In [92]:
# make slightly wider figure
wide_figsize = plt.rcParams['figure.figsize'].copy()
wide_figsize[0] *= 1.4
plt.figure(figsize=wide_figsize)
plt.scatter(x, y)
plt.xlabel('year')
plt.ylabel('summer temperature at Kilpisjarvi');


To analyse whether the average summer month temperature is rising, we use a linear model with Gaussian model for the unexplained variation.

Gaussian linear model with adjustable priors

The following Stan code allows also setting hyperparameter values as data allowing easier way to use different priors in different analyses:


In [93]:
with open('lin.stan') as file:
    print(file.read())


// Gaussian linear model with adjustable priors
data {
  int<lower=0> N; // number of data points
  vector[N] x; //
  vector[N] y; //
  real xpred; // input location for prediction
  real pmualpha; // prior mean for alpha
  real psalpha;  // prior std for alpha
  real pmubeta;  // prior mean for beta
  real psbeta;   // prior std for beta
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;
}
transformed parameters {
  vector[N] mu;
  mu = alpha + beta*x;
}
model {
  alpha ~ normal(pmualpha, psalpha);
  beta ~ normal(pmubeta, psbeta);
  y ~ normal(mu, sigma);
}
generated quantities {
  real ypred;
  vector[N] log_lik;
  ypred = normal_rng(alpha + beta*xpred, sigma);
  for (i in 1:N)
    log_lik[i] = normal_lpdf(y[i] | mu[i], sigma);
}

Create a list with data and priors:


In [94]:
data = dict(
    N = N,
    x = x,
    y = y,
    xpred = xpred,
    pmualpha = y.mean(),     # centered
    psalpha  = 100,          # weakly informative prior
    pmubeta  = 0,            # a priori increase and decrese as likely
    psbeta   = (.1--.1)/6.0  # avg temp probably does not increase more than 1
                             # degree per 10 years
)

Run Stan


In [95]:
model = stan_utility.compile_model('lin.stan')
fit_lin = model.sampling(data=data, seed=194838)
samples = fit_lin.extract(permuted=True)


Using cached StanModel
WARNING:pystan:178 of 4000 iterations saturated the maximum tree depth of 10 (4.45%)
WARNING:pystan:Run again with max_treedepth larger than 10 to avoid saturation

Check the n_eff and Rhat


In [96]:
print(fit_lin)


Inference for Stan model: anon_model_6d95d5cbf9b9fb7d99a793b066a7bb72.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

              mean se_mean     sd   2.5%    25%    50%    75%  97.5%  n_eff   Rhat
alpha       -28.09    0.47  16.09 -59.57  -39.0 -27.89  -17.1   3.04   1172    1.0
beta          0.02  2.4e-4 8.1e-3 3.2e-3   0.01   0.02   0.02   0.03   1172    1.0
sigma         1.13  3.0e-3   0.11   0.94   1.05   1.13    1.2   1.38   1452    1.0
mu[1]         8.74  7.7e-3   0.29   8.17   8.54   8.74   8.93    9.3   1417    1.0
mu[2]         8.75  7.4e-3   0.28   8.21   8.56   8.76   8.95    9.3   1432    1.0
mu[3]         8.77  7.2e-3   0.27   8.24   8.58   8.77   8.96   9.31   1448    1.0
mu[4]         8.79  7.0e-3   0.27   8.27   8.61   8.79   8.97   9.31   1466    1.0
mu[5]         8.81  6.8e-3   0.26   8.31   8.63   8.81   8.99   9.32   1485    1.0
mu[6]         8.83  6.6e-3   0.25   8.34   8.66   8.83    9.0   9.32   1509    1.0
mu[7]         8.85  6.3e-3   0.25   8.37   8.68   8.85   9.02   9.32   1533    1.0
mu[8]         8.87  6.1e-3   0.24    8.4    8.7   8.87   9.03   9.33   1559    1.0
mu[9]         8.89  5.9e-3   0.24   8.43   8.73   8.89   9.05   9.34   1588    1.0
mu[10]        8.91  5.7e-3   0.23   8.46   8.75   8.91   9.06   9.35   1620    1.0
mu[11]        8.92  5.5e-3   0.22   8.49   8.77   8.93   9.07   9.36   1655    1.0
mu[12]        8.94  5.3e-3   0.22   8.52    8.8   8.94   9.09   9.36   1695    1.0
mu[13]        8.96  5.1e-3   0.21   8.56   8.82   8.96    9.1   9.37   1740    1.0
mu[14]        8.98  4.8e-3    0.2   8.58   8.84   8.98   9.12   9.38   1790    1.0
mu[15]         9.0  4.6e-3    0.2   8.61   8.86    9.0   9.13   9.39   1846    1.0
mu[16]        9.02  4.4e-3   0.19   8.64   8.89   9.02   9.15    9.4   1910    1.0
mu[17]        9.04  4.2e-3   0.19   8.67   8.91   9.04   9.16   9.41   1986    1.0
mu[18]        9.06  4.0e-3   0.18    8.7   8.93   9.06   9.18   9.42   2076    1.0
mu[19]        9.07  3.8e-3   0.18   8.73   8.95   9.08    9.2   9.43   2178    1.0
mu[20]        9.09  3.6e-3   0.17   8.75   8.98   9.09   9.21   9.44   2294    1.0
mu[21]        9.11  3.5e-3   0.17   8.78    9.0   9.11   9.23   9.45   2425    1.0
mu[22]        9.13  3.3e-3   0.17    8.8   9.02   9.13   9.24   9.46   2573    1.0
mu[23]        9.15  3.1e-3   0.16   8.83   9.04   9.15   9.26   9.47   2739    1.0
mu[24]        9.17  2.9e-3   0.16   8.85   9.06   9.17   9.28   9.48   2921    1.0
mu[25]        9.19  2.8e-3   0.16   8.88   9.08   9.19   9.29   9.49   3123    1.0
mu[26]        9.21  2.7e-3   0.15    8.9    9.1   9.21   9.31   9.51   3296    1.0
mu[27]        9.23  2.6e-3   0.15   8.92   9.12   9.23   9.33   9.52   3462    1.0
mu[28]        9.24  2.5e-3   0.15   8.95   9.14   9.24   9.34   9.54   3617    1.0
mu[29]        9.26  2.4e-3   0.15   8.97   9.16   9.26   9.36   9.55   3753    1.0
mu[30]        9.28  2.4e-3   0.15    9.0   9.18   9.28   9.38   9.57   3856    1.0
mu[31]         9.3  2.3e-3   0.15   9.01    9.2    9.3    9.4   9.59   3919    1.0
mu[32]        9.32  2.3e-3   0.15   9.03   9.22   9.32   9.42   9.61   3934    1.0
mu[33]        9.34  2.3e-3   0.15   9.05   9.24   9.34   9.44   9.63   3900    1.0
mu[34]        9.36  2.4e-3   0.15   9.07   9.26   9.36   9.46   9.65   3820    1.0
mu[35]        9.38  2.4e-3   0.15   9.08   9.28   9.37   9.48   9.67   3704    1.0
mu[36]         9.4  2.5e-3   0.15    9.1    9.3   9.39    9.5   9.69   3560    1.0
mu[37]        9.41  2.6e-3   0.15   9.11   9.32   9.41   9.52   9.71   3377    1.0
mu[38]        9.43  2.7e-3   0.16   9.12   9.33   9.43   9.54   9.74   3189    1.0
mu[39]        9.45  2.9e-3   0.16   9.13   9.35   9.45   9.56   9.76   3006    1.0
mu[40]        9.47  3.0e-3   0.16   9.15   9.37   9.47   9.58   9.79   2834    1.0
mu[41]        9.49  3.2e-3   0.16   9.16   9.38   9.49    9.6   9.81   2675    1.0
mu[42]        9.51  3.4e-3   0.17   9.17    9.4   9.51   9.62   9.84   2530    1.0
mu[43]        9.53  3.5e-3   0.17   9.18   9.42   9.53   9.64   9.86   2400    1.0
mu[44]        9.55  3.7e-3   0.18   9.19   9.43   9.55   9.66   9.89   2284    1.0
mu[45]        9.57  3.9e-3   0.18    9.2   9.45   9.57   9.68   9.92   2181    1.0
mu[46]        9.58  4.1e-3   0.19   9.21   9.46   9.58    9.7   9.95   2089    1.0
mu[47]         9.6  4.3e-3   0.19   9.22   9.48    9.6   9.73   9.97   2005    1.0
mu[48]        9.62  4.5e-3    0.2   9.23   9.49   9.62   9.75  10.01   1932    1.0
mu[49]        9.64  4.7e-3    0.2   9.24   9.51   9.64   9.77  10.04   1869    1.0
mu[50]        9.66  4.9e-3   0.21   9.25   9.52   9.66    9.8  10.07   1811    1.0
mu[51]        9.68  5.1e-3   0.21   9.25   9.54   9.68   9.82   10.1   1760    1.0
mu[52]         9.7  5.3e-3   0.22   9.26   9.55    9.7   9.84  10.13   1714    1.0
mu[53]        9.72  5.5e-3   0.23   9.27   9.57   9.72   9.87  10.16   1673    1.0
mu[54]        9.74  5.8e-3   0.23   9.28   9.58   9.74   9.89  10.19   1636    1.0
mu[55]        9.75  6.0e-3   0.24   9.29    9.6   9.75   9.91  10.22   1603    1.0
mu[56]        9.77  6.2e-3   0.25    9.3   9.61   9.77   9.94  10.25   1573    1.0
mu[57]        9.79  6.4e-3   0.25    9.3   9.62   9.79   9.96  10.28   1546    1.0
mu[58]        9.81  6.6e-3   0.26   9.31   9.64   9.81   9.98  10.31   1521    1.0
mu[59]        9.83  6.9e-3   0.27   9.31   9.65   9.83  10.01  10.35   1499    1.0
mu[60]        9.85  7.1e-3   0.27   9.32   9.66   9.85  10.03  10.38   1479    1.0
mu[61]        9.87  7.3e-3   0.28   9.33   9.68   9.87  10.05  10.41   1460    1.0
mu[62]        9.89  7.5e-3   0.29   9.33   9.69   9.89  10.08  10.45   1444    1.0
ypred         9.96    0.02   1.17   7.64   9.18   9.96  10.73  12.25   3522    1.0
log_lik[1]   -1.15  3.7e-3   0.13  -1.45  -1.23  -1.14  -1.05  -0.92   1334    1.0
log_lik[2]   -2.91    0.01   0.54  -4.13  -3.24  -2.85  -2.52   -2.0   1447    1.0
log_lik[3]   -1.23  4.1e-3   0.16  -1.58  -1.32  -1.21  -1.11  -0.96   1489    1.0
log_lik[4]   -1.26  4.3e-3   0.16  -1.62  -1.36  -1.25  -1.14  -0.99   1415    1.0
log_lik[5]   -1.27  4.3e-3   0.16  -1.63  -1.37  -1.26  -1.15   -1.0   1437    1.0
log_lik[6]   -1.58  6.0e-3   0.23  -2.09  -1.72  -1.55  -1.41  -1.19   1522    1.0
log_lik[7]   -1.09  2.9e-3   0.11  -1.32  -1.16  -1.09  -1.01  -0.89   1412    1.0
log_lik[8]   -1.08  2.9e-3   0.11   -1.3  -1.15  -1.08  -1.01  -0.89   1451    1.0
log_lik[9]   -2.85    0.01   0.46  -3.89  -3.13   -2.8  -2.51  -2.06   1547    1.0
log_lik[10]  -1.63  5.4e-3   0.22  -2.13  -1.77  -1.61  -1.47  -1.27   1640    1.0
log_lik[11]  -1.76  5.9e-3   0.24  -2.29  -1.91  -1.74  -1.59  -1.36   1690    1.0
log_lik[12]  -1.07  2.7e-3    0.1  -1.27  -1.13  -1.06   -1.0  -0.88   1444    1.0
log_lik[13]  -1.23  3.2e-3   0.13  -1.51  -1.31  -1.22  -1.14  -1.01   1628    1.0
log_lik[14]  -2.33  8.0e-3   0.33  -3.06  -2.53  -2.29  -2.09  -1.79   1736    1.0
log_lik[15]  -1.09  2.7e-3   0.11  -1.31  -1.16  -1.09  -1.02   -0.9   1501    1.0
log_lik[16]  -1.07  2.6e-3    0.1  -1.28  -1.14  -1.07   -1.0  -0.89   1487    1.0
log_lik[17]  -1.88  5.1e-3   0.23  -2.37  -2.02  -1.86  -1.72   -1.5   1968    1.0
log_lik[18]  -1.89  4.9e-3   0.22  -2.37  -2.02  -1.87  -1.73  -1.51   1964    1.0
log_lik[19]  -2.53  7.6e-3   0.33  -3.25  -2.73   -2.5   -2.3  -1.97   1864    1.0
log_lik[20]  -1.07  2.6e-3    0.1  -1.27  -1.13  -1.07   -1.0  -0.89   1491    1.0
log_lik[21]  -2.96  9.2e-3    0.4   -3.8  -3.21  -2.93  -2.68  -2.28   1858    1.0
log_lik[22]  -1.35  2.6e-3   0.12  -1.61  -1.43  -1.35  -1.27  -1.13   2279    1.0
log_lik[23]  -1.41  2.5e-3   0.13  -1.68  -1.49  -1.41  -1.32  -1.18   2503    1.0
log_lik[24]  -4.12    0.01   0.62  -5.47   -4.5  -4.05  -3.68  -3.06   1712    1.0
log_lik[25]  -1.44  2.3e-3   0.12  -1.71  -1.52  -1.43  -1.36  -1.22   3027    1.0
log_lik[26]  -1.31  2.1e-3   0.11  -1.54  -1.38  -1.31  -1.24  -1.11   2745    1.0
log_lik[27]  -1.08  2.6e-3    0.1  -1.29  -1.14  -1.08  -1.01  -0.89   1510    1.0
log_lik[28]  -1.22  2.3e-3    0.1  -1.43  -1.29  -1.22  -1.15  -1.02   2079    1.0
log_lik[29]  -1.76  2.7e-3   0.16   -2.1  -1.86  -1.75  -1.65  -1.48   3497    1.0
log_lik[30]  -2.18  4.7e-3   0.24   -2.7  -2.33  -2.16  -2.01  -1.78   2529    1.0
log_lik[31]  -2.07  4.1e-3   0.22  -2.56  -2.21  -2.06  -1.92  -1.71   2702    1.0
log_lik[32]  -1.64  2.3e-3   0.14  -1.95  -1.73  -1.63  -1.54  -1.39   3754    1.0
log_lik[33]   -1.4  1.9e-3   0.11  -1.64  -1.47  -1.39  -1.32   -1.2   3612    1.0
log_lik[34]   -1.1  2.5e-3    0.1   -1.3  -1.16  -1.09  -1.03  -0.91   1559    1.0
log_lik[35]  -1.05  2.6e-3    0.1  -1.26  -1.12  -1.05  -0.99  -0.87   1442    1.0
log_lik[36]  -2.81  7.9e-3   0.35  -3.61  -3.02  -2.77  -2.56   -2.2   2010    1.0
log_lik[37]  -1.36  2.2e-3   0.11   -1.6  -1.43  -1.36  -1.28  -1.15   2709    1.0
log_lik[38]  -1.06  2.6e-3    0.1  -1.26  -1.12  -1.06  -0.99  -0.88   1450    1.0
log_lik[39]  -1.34  2.3e-3   0.12  -1.58  -1.41  -1.33  -1.26  -1.12   2430    1.0
log_lik[40]  -1.09  2.6e-3    0.1   -1.3  -1.16  -1.09  -1.02  -0.91   1529    1.0
log_lik[41]  -1.15  2.4e-3    0.1  -1.35  -1.21  -1.14  -1.08  -0.96   1792    1.0
log_lik[42]  -1.12  2.5e-3    0.1  -1.32  -1.19  -1.11  -1.05  -0.93   1667    1.0
log_lik[43]  -1.05  2.6e-3    0.1  -1.26  -1.11  -1.05  -0.98  -0.87   1427    1.0
log_lik[44]  -1.34  2.7e-3   0.13  -1.61  -1.42  -1.33  -1.25  -1.11   2177    1.0
log_lik[45]   -1.1  2.7e-3    0.1  -1.31  -1.16  -1.09  -1.03   -0.9   1505    1.0
log_lik[46]  -1.39  3.2e-3   0.14  -1.69  -1.47  -1.38  -1.29  -1.14   1976    1.0
log_lik[47]  -1.07  2.6e-3    0.1  -1.28  -1.14  -1.07   -1.0  -0.88   1471    1.0
log_lik[48]  -1.21  2.8e-3   0.12  -1.46  -1.29   -1.2  -1.13   -1.0   1754    1.0
log_lik[49]  -1.22  2.9e-3   0.12  -1.48   -1.3  -1.21  -1.13   -1.0   1733    1.0
log_lik[50]  -1.06  2.7e-3    0.1  -1.27  -1.12  -1.06  -0.99  -0.87   1404    1.0
log_lik[51]  -2.24  7.6e-3   0.32  -2.94  -2.44  -2.22  -2.01   -1.7   1750    1.0
log_lik[52]  -1.46  4.3e-3   0.18  -1.85  -1.56  -1.44  -1.34  -1.16   1699    1.0
log_lik[53]  -1.12  3.0e-3   0.11  -1.36  -1.19  -1.11  -1.04  -0.91   1457    1.0
log_lik[54]  -1.51  4.9e-3    0.2  -1.96  -1.63   -1.5  -1.38  -1.18   1636    1.0
log_lik[55]  -1.23  3.7e-3   0.14  -1.55  -1.32  -1.22  -1.13  -0.98   1508    1.0
log_lik[56]  -1.17  3.5e-3   0.13  -1.46  -1.25  -1.16  -1.08  -0.94   1456    1.0
log_lik[57]  -1.46  5.1e-3    0.2  -1.92  -1.58  -1.44  -1.32  -1.13   1554    1.0
log_lik[58]  -1.07  2.8e-3    0.1  -1.29  -1.13  -1.06  -0.99  -0.88   1366    1.0
log_lik[59]  -1.49  5.7e-3   0.22   -2.0  -1.62  -1.47  -1.34  -1.13   1507    1.0
log_lik[60]  -1.43  5.5e-3   0.21  -1.92  -1.55  -1.41  -1.28  -1.09   1468    1.0
log_lik[61]  -1.71  7.4e-3   0.28  -2.36  -1.88  -1.68  -1.51  -1.25   1467    1.0
log_lik[62]  -1.66  7.3e-3   0.28  -2.29  -1.82  -1.63  -1.46  -1.21   1460    1.0
lp__        -38.17    0.05   1.35 -41.73 -38.77 -37.82 -37.19 -36.64    902   1.01

Samples were drawn using NUTS at Wed Dec 19 11:19:39 2018.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

Check the treedepth, E-BFMI, and divergences


In [97]:
stan_utility.check_treedepth(fit_lin)
stan_utility.check_energy(fit_lin)
stan_utility.check_div(fit_lin)


178 of 4000 iterations saturated the maximum tree depth of 10 (4.45%)
Run again with max_depth set to a larger value to avoid saturation
0.0 of 4000 iterations ended with a divergence (0.0%)

We get a warning that several iterations saturated the maximum treedepth. The reason for this is a very strong posterior correlation between alpha and beta as shown in the next plot. This doesn't invalidate the results, but leads to suboptimal performance. We'll later look at alternative which reduces the posterior correlation. The following figure shows the strong posterior correlation in the current case.


In [98]:
samples = fit_lin.extract(permuted=True)
# preview 500 posterior samples of (alpha, beta)
plt.scatter(samples['alpha'][:500], samples['beta'][:500], 50, marker='.', alpha=0.25)
plt.xlabel('alpha')
plt.ylabel('beta');


Compute the probability that the summer temperature is increasing. The beta-parameter represents the slope of the temperature change. Hence, by checking the probability that beta > 0, we get the probability that the temperature is rising.


In [99]:
print('Pr(beta > 0) = {}'.format(np.mean(samples['beta'] > 0)))


Pr(beta > 0) = 0.9895

Plot the data, the model fit and prediction for year 2016.


In [100]:
# make slightly wider figure of 3 plots
figsize = plt.rcParams['figure.figsize'].copy()
figsize[0] *= 2.2  # width
figsize[1] *= 1.2  # width
fig, ax = plt.subplots(1, 1, figsize=figsize)

# plot 1: scatterplot and lines
color_scatter = 'C0'  # 'C0' for default color #0
color_line = 'C3'     # 'C1' for default color #1

ax.fill_between(
    x,
    np.percentile(samples['mu'], 5, axis=0),
    np.percentile(samples['mu'], 95, axis=0),
    # lighten color_line
    color=1 - 0.4*(1 - np.array(mpl.colors.to_rgb(color_line)))
)
ax.plot(
    x,
    np.percentile(samples['mu'], 50, axis=0),
    color=color_line,
    linewidth=2,
    alpha=0.8
)
ax.scatter(x, y, 10, color=color_scatter)
ax.set_xlabel('year')
ax.set_ylabel('summer temp. @Kilpisjarvi')
ax.set_xlim((1951.5, 2013.5))


az.plot_posterior(fit_lin, var_names=['beta','sigma','ypred'], credible_interval=0.95, round_to=2, kind='hist', bins=35)
plt.xlabel('y-prediction for x={}'.format(xpred));


For the beta parameter (the slope), let's also plot the relative density with reference value of 0. Now we can easily evaluate the probability for slope being positive.


In [101]:
az.plot_posterior(fit_lin, var_names=['beta'], credible_interval=0.95, round_to=2, ref_val=0, kind='kde')
plt.xlabel('beta')
plt.ylabel('Relative density');


Gaussian linear model with standardized data

In the above we used the unnormalized data and as x values are far away from zero, this will cause very strong posterior dependency between alpha and beta. The strong posterior dependency can be removed by normalizing the data to have zero mean. The following Stan code makes it in Stan. In generated quantities we do correspnding transformation back to the original scale.


In [102]:
with open('lin_std.stan') as file:
    print(file.read())


// Gaussian linear model with standardized data
data {
  int<lower=0> N; // number of data points
  vector[N] x; //
  vector[N] y; //
  real xpred; // input location for prediction
}
transformed data {
  vector[N] x_std;
  vector[N] y_std;
  real xpred_std;
  x_std = (x - mean(x)) / sd(x);
  y_std = (y - mean(y)) / sd(y);
  xpred_std = (xpred - mean(x)) / sd(x);
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma_std;
}
transformed parameters {
  vector[N] mu_std;
  mu_std = alpha + beta*x_std;
}
model {
  alpha ~ normal(0, 1);
  beta ~ normal(0, 1);
  y_std ~ normal(mu_std, sigma_std);
}
generated quantities {
  vector[N] mu;
  real<lower=0> sigma;
  real ypred;
  vector[N] log_lik;
  mu = mu_std*sd(y) + mean(y);
  sigma = sigma_std*sd(y);
  ypred = normal_rng((alpha + beta*xpred_std)*sd(y)+mean(y), sigma*sd(y));
  for (i in 1:N)
    log_lik[i] = normal_lpdf(y[i] | mu[i], sigma);
}

Run Stan


In [103]:
model = stan_utility.compile_model('lin_std.stan')
fit_lin_std = model.sampling(data=data, seed=194838)
samples = fit_lin_std.extract(permuted=True)


Using cached StanModel

Check the n_eff and Rhat


In [104]:
print(fit_lin_std)


Inference for Stan model: anon_model_161eb30961f09398a0825482beb4b070.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

              mean se_mean     sd   2.5%    25%    50%    75%  97.5%  n_eff   Rhat
alpha       6.7e-4  2.0e-3   0.12  -0.25  -0.08 1.2e-3   0.08   0.26   3733    1.0
beta          0.31  1.9e-3   0.12   0.06   0.23   0.31   0.39   0.55   3988    1.0
sigma_std     0.98  1.6e-3   0.09   0.82   0.91   0.97   1.03   1.18   3457    1.0
mu_std[1]    -0.52  3.7e-3   0.24  -0.99  -0.68  -0.53  -0.37  -0.04   4221    1.0
mu_std[2]    -0.51  3.6e-3   0.23  -0.96  -0.66  -0.51  -0.36  -0.03   4222    1.0
mu_std[3]    -0.49  3.5e-3   0.23  -0.93  -0.64  -0.49  -0.34  -0.03   4220    1.0
mu_std[4]    -0.47  3.4e-3   0.22  -0.91  -0.62  -0.48  -0.33  -0.02   4216    1.0
mu_std[5]    -0.45  3.3e-3   0.22  -0.88   -0.6  -0.46  -0.32  -0.02   4213    1.0
mu_std[6]    -0.44  3.2e-3   0.21  -0.85  -0.58  -0.44   -0.3-8.6e-3   4209    1.0
mu_std[7]    -0.42  3.2e-3   0.21  -0.82  -0.56  -0.42  -0.29-2.6e-3   4204    1.0
mu_std[8]     -0.4  3.1e-3    0.2   -0.8  -0.53  -0.41  -0.28 1.5e-3   4199    1.0
mu_std[9]    -0.39  3.0e-3   0.19  -0.77  -0.51  -0.39  -0.26 7.8e-3   4193    1.0
mu_std[10]   -0.37  2.9e-3   0.19  -0.74  -0.49  -0.37  -0.25   0.01   4186    1.0
mu_std[11]   -0.35  2.8e-3   0.18  -0.71  -0.47  -0.35  -0.23   0.02   4178    1.0
mu_std[12]   -0.33  2.8e-3   0.18  -0.69  -0.45  -0.34  -0.22   0.03   4170    1.0
mu_std[13]   -0.32  2.7e-3   0.17  -0.66  -0.43  -0.32   -0.2   0.04   4160    1.0
mu_std[14]    -0.3  2.6e-3   0.17  -0.64  -0.41   -0.3  -0.19   0.05   4150    1.0
mu_std[15]   -0.28  2.6e-3   0.17  -0.61  -0.39  -0.29  -0.18   0.05   4138    1.0
mu_std[16]   -0.27  2.5e-3   0.16  -0.58  -0.37  -0.27  -0.16   0.06   4125    1.0
mu_std[17]   -0.25  2.4e-3   0.16  -0.56  -0.35  -0.25  -0.15   0.07   4111    1.0
mu_std[18]   -0.23  2.4e-3   0.15  -0.53  -0.33  -0.23  -0.13   0.08   4096    1.0
mu_std[19]   -0.21  2.3e-3   0.15  -0.51  -0.31  -0.22  -0.12   0.08   4080    1.0
mu_std[20]    -0.2  2.3e-3   0.14  -0.48  -0.29   -0.2   -0.1   0.09   4062    1.0
mu_std[21]   -0.18  2.2e-3   0.14  -0.46  -0.27  -0.18  -0.09    0.1   4043    1.0
mu_std[22]   -0.16  2.2e-3   0.14  -0.43  -0.25  -0.16  -0.07   0.11   4023    1.0
mu_std[23]   -0.15  2.1e-3   0.14  -0.41  -0.23  -0.15  -0.06   0.13   4002    1.0
mu_std[24]   -0.13  2.1e-3   0.13  -0.39  -0.21  -0.13  -0.04   0.14   3980    1.0
mu_std[25]   -0.11  2.1e-3   0.13  -0.37   -0.2  -0.11  -0.03   0.15   3951    1.0
mu_std[26]   -0.09  2.1e-3   0.13  -0.35  -0.18   -0.1-9.4e-3   0.16   3918    1.0
mu_std[27]   -0.08  2.0e-3   0.13  -0.33  -0.16  -0.08 6.8e-3   0.18   3884    1.0
mu_std[28]   -0.06  2.0e-3   0.13  -0.31  -0.14  -0.06   0.02   0.19   3849    1.0
mu_std[29]   -0.04  2.0e-3   0.12  -0.29  -0.12  -0.04   0.04   0.21   3815    1.0
mu_std[30]   -0.03  2.0e-3   0.12  -0.27  -0.11  -0.03   0.06   0.23   3781    1.0
mu_std[31] -7.9e-3  2.0e-3   0.12  -0.26  -0.09-7.5e-3   0.07   0.25   3749    1.0
mu_std[32]  9.3e-3  2.0e-3   0.12  -0.24  -0.07 9.9e-3   0.09   0.26   3718    1.0
mu_std[33]    0.03  2.1e-3   0.13  -0.22  -0.06   0.03   0.11   0.28   3690    1.0
mu_std[34]    0.04  2.1e-3   0.13  -0.21  -0.04   0.05   0.13    0.3   3665    1.0
mu_std[35]    0.06  2.1e-3   0.13  -0.19  -0.02   0.06   0.15   0.32   3643    1.0
mu_std[36]    0.08  2.1e-3   0.13  -0.18-8.8e-3   0.08   0.16   0.33   3623    1.0
mu_std[37]     0.1  2.2e-3   0.13  -0.17 5.9e-3    0.1   0.18   0.36   3607    1.0
mu_std[38]    0.11  2.2e-3   0.13  -0.15   0.02   0.11    0.2   0.38   3593    1.0
mu_std[39]    0.13  2.3e-3   0.14  -0.14   0.04   0.13   0.22    0.4   3583    1.0
mu_std[40]    0.15  2.3e-3   0.14  -0.13   0.05   0.15   0.24   0.43   3575    1.0
mu_std[41]    0.16  2.4e-3   0.14  -0.12   0.07   0.16   0.26   0.45   3565    1.0
mu_std[42]    0.18  2.4e-3   0.15   -0.1   0.08   0.18   0.28   0.47   3558    1.0
mu_std[43]     0.2  2.5e-3   0.15   -0.1    0.1    0.2    0.3    0.5   3553    1.0
mu_std[44]    0.22  2.6e-3   0.15  -0.09   0.11   0.22   0.32   0.52   3550    1.0
mu_std[45]    0.23  2.6e-3   0.16  -0.08   0.12   0.23   0.34   0.55   3549    1.0
mu_std[46]    0.25  2.7e-3   0.16  -0.07   0.14   0.25   0.36   0.57   3550    1.0
mu_std[47]    0.27  2.8e-3   0.17  -0.06   0.16   0.27   0.38    0.6   3552    1.0
mu_std[48]    0.28  2.9e-3   0.17  -0.05   0.17   0.28    0.4   0.62   3555    1.0
mu_std[49]     0.3  2.9e-3   0.17  -0.04   0.18    0.3   0.42   0.65   3559    1.0
mu_std[50]    0.32  3.0e-3   0.18  -0.03    0.2   0.32   0.44   0.67   3563    1.0
mu_std[51]    0.34  3.1e-3   0.18  -0.03   0.21   0.33   0.46    0.7   3569    1.0
mu_std[52]    0.35  3.2e-3   0.19  -0.02   0.23   0.35   0.48   0.73   3575    1.0
mu_std[53]    0.37  3.3e-3    0.2  -0.02   0.24   0.37    0.5   0.75   3581    1.0
mu_std[54]    0.39  3.3e-3    0.2  -0.01   0.25   0.39   0.52   0.78   3587    1.0
mu_std[55]     0.4  3.4e-3   0.21-6.5e-3   0.27    0.4   0.54   0.81   3594    1.0
mu_std[56]    0.42  3.5e-3   0.21-3.5e-4   0.28   0.42   0.56   0.84   3601    1.0
mu_std[57]    0.44  3.6e-3   0.22 9.2e-3   0.29   0.44   0.58   0.87   3608    1.0
mu_std[58]    0.46  3.7e-3   0.22   0.02   0.31   0.45   0.61   0.89   3615    1.0
mu_std[59]    0.47  3.8e-3   0.23   0.02   0.32   0.47   0.63   0.92   3622    1.0
mu_std[60]    0.49  3.9e-3   0.23   0.03   0.34   0.49   0.65   0.95   3628    1.0
mu_std[61]    0.51  4.0e-3   0.24   0.03   0.35   0.51   0.67   0.98   3635    1.0
mu_std[62]    0.53  4.1e-3   0.25   0.04   0.36   0.52   0.69   1.01   3642    1.0
mu[1]         8.71  4.3e-3   0.28   8.17   8.53    8.7   8.88   9.27   4221    1.0
mu[2]         8.73  4.2e-3   0.27    8.2   8.55   8.72    8.9   9.27   4222    1.0
mu[3]         8.75  4.1e-3   0.26   8.23   8.57   8.74   8.91   9.28   4220    1.0
mu[4]         8.77  4.0e-3   0.26   8.26    8.6   8.76   8.93   9.29   4216    1.0
mu[5]         8.79  3.9e-3   0.25   8.29   8.62   8.78   8.95   9.29   4213    1.0
mu[6]         8.81  3.8e-3   0.24   8.33   8.64    8.8   8.96    9.3   4209    1.0
mu[7]         8.83  3.7e-3   0.24   8.36   8.67   8.82   8.98   9.31   4204    1.0
mu[8]         8.85  3.6e-3   0.23   8.39   8.69   8.84   8.99   9.31   4199    1.0
mu[9]         8.87  3.5e-3   0.23   8.42   8.72   8.86   9.01   9.32   4193    1.0
mu[10]        8.89  3.4e-3   0.22   8.45   8.74   8.88   9.03   9.33   4186    1.0
mu[11]        8.91  3.3e-3   0.21   8.49   8.77    8.9   9.04   9.34   4178    1.0
mu[12]        8.92  3.2e-3   0.21   8.51   8.79   8.92   9.06   9.35   4170    1.0
mu[13]        8.94  3.1e-3    0.2   8.54   8.81   8.94   9.08   9.36   4160    1.0
mu[14]        8.96  3.1e-3    0.2   8.58   8.84   8.96   9.09   9.37   4150    1.0
mu[15]        8.98  3.0e-3   0.19   8.61   8.86   8.98   9.11   9.37   4138    1.0
mu[16]         9.0  2.9e-3   0.19   8.64   8.88    9.0   9.12   9.38   4125    1.0
mu[17]        9.02  2.8e-3   0.18   8.66    8.9   9.02   9.14   9.39   4111    1.0
mu[18]        9.04  2.8e-3   0.18   8.69   8.93   9.04   9.16    9.4   4096    1.0
mu[19]        9.06  2.7e-3   0.17   8.72   8.95   9.06   9.18   9.41   4080    1.0
mu[20]        9.08  2.6e-3   0.17   8.75   8.97   9.08   9.19   9.42   4062    1.0
mu[21]         9.1  2.6e-3   0.16   8.78    9.0    9.1   9.21   9.43   4043    1.0
mu[22]        9.12  2.5e-3   0.16   8.81   9.02   9.12   9.23   9.45   4023    1.0
mu[23]        9.14  2.5e-3   0.16   8.83   9.04   9.14   9.25   9.46   4002    1.0
mu[24]        9.16  2.4e-3   0.15   8.86   9.06   9.16   9.27   9.47   3980    1.0
mu[25]        9.18  2.4e-3   0.15   8.88   9.09   9.18   9.28   9.49   3951    1.0
mu[26]         9.2  2.4e-3   0.15   8.91   9.11    9.2    9.3    9.5   3918    1.0
mu[27]        9.22  2.4e-3   0.15   8.93   9.13   9.22   9.32   9.52   3884    1.0
mu[28]        9.24  2.4e-3   0.15   8.95   9.15   9.24   9.34   9.54   3849    1.0
mu[29]        9.26  2.3e-3   0.14   8.97   9.17   9.26   9.36   9.55   3815    1.0
mu[30]        9.28  2.3e-3   0.14    9.0   9.19   9.28   9.38   9.57   3781    1.0
mu[31]         9.3  2.4e-3   0.14   9.02   9.21    9.3    9.4    9.6   3749    1.0
mu[32]        9.32  2.4e-3   0.14   9.03   9.23   9.32   9.42   9.62   3718    1.0
mu[33]        9.34  2.4e-3   0.14   9.05   9.25   9.35   9.44   9.64   3690    1.0
mu[34]        9.36  2.4e-3   0.15   9.07   9.27   9.37   9.46   9.66   3665    1.0
mu[35]        9.38  2.4e-3   0.15   9.09   9.28   9.39   9.48   9.68   3643    1.0
mu[36]         9.4  2.5e-3   0.15    9.1    9.3    9.4    9.5    9.7   3623    1.0
mu[37]        9.42  2.5e-3   0.15   9.12   9.32   9.43   9.52   9.73   3607    1.0
mu[38]        9.44  2.6e-3   0.15   9.13   9.34   9.45   9.55   9.75   3593    1.0
mu[39]        9.46  2.6e-3   0.16   9.15   9.35   9.47   9.57   9.78   3583    1.0
mu[40]        9.48  2.7e-3   0.16   9.16   9.37   9.48   9.59   9.81   3575    1.0
mu[41]         9.5  2.8e-3   0.16   9.18   9.39    9.5   9.62   9.83   3565    1.0
mu[42]        9.52  2.8e-3   0.17   9.19   9.41   9.52   9.64   9.86   3558    1.0
mu[43]        9.54  2.9e-3   0.17    9.2   9.42   9.54   9.66   9.89   3553    1.0
mu[44]        9.56  3.0e-3   0.18   9.21   9.44   9.56   9.68   9.92   3550    1.0
mu[45]        9.58  3.1e-3   0.18   9.22   9.46   9.58   9.71   9.95   3549    1.0
mu[46]         9.6  3.1e-3   0.19   9.23   9.48    9.6   9.73   9.97   3550    1.0
mu[47]        9.62  3.2e-3   0.19   9.24   9.49   9.62   9.75   10.0   3552    1.0
mu[48]        9.64  3.3e-3    0.2   9.26   9.51   9.64   9.78  10.03   3555    1.0
mu[49]        9.66  3.4e-3    0.2   9.27   9.53   9.66    9.8  10.06   3559    1.0
mu[50]        9.68  3.5e-3   0.21   9.28   9.54   9.68   9.82  10.09   3563    1.0
mu[51]         9.7  3.6e-3   0.21   9.28   9.56    9.7   9.85  10.12   3569    1.0
mu[52]        9.72  3.7e-3   0.22   9.29   9.58   9.72   9.87  10.15   3575    1.0
mu[53]        9.74  3.8e-3   0.23   9.29   9.59   9.74   9.89  10.19   3581    1.0
mu[54]        9.76  3.9e-3   0.23    9.3   9.61   9.76   9.92  10.22   3587    1.0
mu[55]        9.78  4.0e-3   0.24   9.31   9.62   9.78   9.94  10.25   3594    1.0
mu[56]         9.8  4.1e-3   0.25   9.31   9.64    9.8   9.97  10.28   3601    1.0
mu[57]        9.82  4.2e-3   0.25   9.32   9.65   9.82   9.99  10.32   3608    1.0
mu[58]        9.84  4.3e-3   0.26   9.33   9.67   9.84  10.02  10.35   3615    1.0
mu[59]        9.86  4.4e-3   0.26   9.34   9.69   9.86  10.04  10.38   3622    1.0
mu[60]        9.88  4.5e-3   0.27   9.34    9.7   9.88  10.06  10.42   3628    1.0
mu[61]         9.9  4.6e-3   0.28   9.35   9.72    9.9  10.09  10.45   3635    1.0
mu[62]        9.92  4.7e-3   0.28   9.36   9.73   9.92  10.11  10.48   3642    1.0
sigma         1.13  1.8e-3   0.11   0.95   1.06   1.12    1.2   1.37   3457    1.0
ypred         10.0    0.02   1.35   7.37   9.12  10.01   10.9  12.67   4002    1.0
log_lik[1]   -1.13  2.3e-3   0.13  -1.43  -1.21  -1.12  -1.04  -0.92   3270    1.0
log_lik[2]   -2.96  8.4e-3   0.54  -4.15  -3.29  -2.93  -2.58  -2.04   4060    1.0
log_lik[3]   -1.24  2.4e-3   0.15  -1.58  -1.33  -1.22  -1.13  -0.99   4039    1.0
log_lik[4]   -1.24  2.5e-3   0.15   -1.6  -1.33  -1.22  -1.13  -0.99   3799    1.0
log_lik[5]   -1.25  2.5e-3   0.15  -1.61  -1.34  -1.23  -1.14   -1.0   3822    1.0
log_lik[6]   -1.55  3.3e-3   0.22  -2.06  -1.68  -1.52   -1.4  -1.19   4295    1.0
log_lik[7]   -1.08  1.9e-3    0.1  -1.31  -1.15  -1.07  -1.01  -0.89   3055    1.0
log_lik[8]   -1.09  1.8e-3    0.1  -1.29  -1.15  -1.08  -1.01  -0.89   3158    1.0
log_lik[9]   -2.89  7.2e-3   0.46  -3.87  -3.17  -2.86  -2.56  -2.09   4009    1.0
log_lik[10]  -1.65  3.2e-3   0.21  -2.13  -1.78  -1.63   -1.5  -1.28   4328    1.0
log_lik[11]  -1.74  3.4e-3   0.22  -2.24  -1.87  -1.71  -1.58  -1.36   4429    1.0
log_lik[12]  -1.06  1.8e-3    0.1  -1.26  -1.12  -1.06  -0.99  -0.88   3066    1.0
log_lik[13]  -1.22  2.0e-3   0.12  -1.49  -1.29  -1.21  -1.14   -1.0   3736    1.0
log_lik[14]   -2.3  4.5e-3    0.3  -2.97  -2.49  -2.27  -2.09  -1.78   4487    1.0
log_lik[15]  -1.09  1.7e-3    0.1   -1.3  -1.16  -1.09  -1.02  -0.91   3332    1.0
log_lik[16]  -1.07  1.7e-3    0.1  -1.27  -1.13  -1.06   -1.0  -0.89   3201    1.0
log_lik[17]  -1.86  3.2e-3   0.21  -2.33  -1.99  -1.84  -1.71   -1.5   4433    1.0
log_lik[18]   -1.9  3.3e-3   0.21  -2.37  -2.03  -1.88  -1.75  -1.52   4138    1.0
log_lik[19]  -2.55  5.2e-3   0.32  -3.27  -2.75  -2.53  -2.32  -1.99   3959    1.0
log_lik[20]  -1.06  1.7e-3    0.1  -1.26  -1.12  -1.06   -1.0  -0.88   3248    1.0
log_lik[21]  -2.98  6.3e-3   0.39  -3.83  -3.22  -2.95  -2.71  -2.29   3891    1.0
log_lik[22]  -1.36  1.8e-3   0.12  -1.61  -1.43  -1.35  -1.27  -1.15   4167    1.0
log_lik[23]  -1.41  1.9e-3   0.12  -1.68  -1.49  -1.41  -1.33   -1.2   4228    1.0
log_lik[24]  -4.11  9.2e-3   0.57  -5.37  -4.46  -4.06   -3.7  -3.11   3836    1.0
log_lik[25]  -1.43  1.9e-3   0.12   -1.7  -1.51  -1.43  -1.35  -1.21   3968    1.0
log_lik[26]  -1.31  1.7e-3   0.11  -1.53  -1.37   -1.3  -1.23  -1.11   3795    1.0
log_lik[27]  -1.08  1.6e-3   0.09  -1.27  -1.14  -1.08  -1.01   -0.9   3331    1.0
log_lik[28]  -1.22  1.6e-3    0.1  -1.42  -1.28  -1.22  -1.15  -1.04   3637    1.0
log_lik[29]  -1.76  2.5e-3   0.16  -2.11  -1.86  -1.75  -1.65  -1.48   4004    1.0
log_lik[30]  -2.18  3.6e-3   0.22  -2.67  -2.33  -2.16  -2.02  -1.79   3914    1.0
log_lik[31]  -2.08  3.3e-3   0.21  -2.53  -2.21  -2.06  -1.93  -1.72   3897    1.0
log_lik[32]  -1.65  2.2e-3   0.14  -1.94  -1.73  -1.64  -1.55  -1.39   3921    1.0
log_lik[33]   -1.4  1.8e-3   0.11  -1.64  -1.47   -1.4  -1.33   -1.2   3814    1.0
log_lik[34]  -1.09  1.6e-3   0.09  -1.29  -1.15  -1.09  -1.03  -0.92   3324    1.0
log_lik[35]  -1.05  1.6e-3   0.09  -1.25  -1.11  -1.05  -0.99  -0.88   3292    1.0
log_lik[36]  -2.82  5.6e-3   0.34  -3.55  -3.04  -2.79  -2.58  -2.23   3671    1.0
log_lik[37]  -1.36  1.9e-3   0.11  -1.59  -1.43  -1.35  -1.28  -1.16   3608    1.0
log_lik[38]  -1.06  1.6e-3   0.09  -1.25  -1.12  -1.05  -0.99  -0.88   3274    1.0
log_lik[39]  -1.33  1.9e-3   0.11  -1.57   -1.4  -1.32  -1.25  -1.13   3549    1.0
log_lik[40]  -1.09  1.7e-3    0.1  -1.29  -1.15  -1.09  -1.02  -0.91   3273    1.0
log_lik[41]  -1.15  1.7e-3    0.1  -1.36  -1.21  -1.15  -1.08  -0.97   3270    1.0
log_lik[42]  -1.12  1.7e-3    0.1  -1.33  -1.18  -1.12  -1.05  -0.94   3231    1.0
log_lik[43]  -1.05  1.7e-3   0.09  -1.24  -1.11  -1.05  -0.99  -0.87   3248    1.0
log_lik[44]  -1.35  2.1e-3   0.13  -1.63  -1.43  -1.34  -1.26  -1.13   3578    1.0
log_lik[45]  -1.09  1.7e-3    0.1   -1.3  -1.16  -1.09  -1.02  -0.91   3226    1.0
log_lik[46]  -1.37  2.3e-3   0.14  -1.67  -1.46  -1.37  -1.27  -1.13   3569    1.0
log_lik[47]  -1.07  1.7e-3    0.1  -1.27  -1.13  -1.07  -1.01  -0.89   3118    1.0
log_lik[48]  -1.22  2.0e-3   0.12  -1.48  -1.29  -1.21  -1.14  -1.01   3404    1.0
log_lik[49]  -1.23  2.1e-3   0.12   -1.5  -1.31  -1.22  -1.14  -1.02   3432    1.0
log_lik[50]  -1.06  1.7e-3    0.1  -1.25  -1.12  -1.05  -0.99  -0.87   3119    1.0
log_lik[51]  -2.21  4.9e-3   0.31  -2.86   -2.4  -2.18  -1.99  -1.68   3862    1.0
log_lik[52]  -1.44  2.9e-3   0.17  -1.82  -1.54  -1.42  -1.31  -1.14   3650    1.0
log_lik[53]  -1.11  1.9e-3   0.11  -1.34  -1.18   -1.1  -1.03  -0.91   3167    1.0
log_lik[54]  -1.49  3.2e-3   0.19  -1.91  -1.61  -1.47  -1.35  -1.16   3698    1.0
log_lik[55]  -1.21  2.4e-3   0.14  -1.52   -1.3   -1.2  -1.11  -0.98   3387    1.0
log_lik[56]  -1.16  2.2e-3   0.13  -1.44  -1.24  -1.15  -1.07  -0.94   3272    1.0
log_lik[57]  -1.48  3.4e-3   0.21  -1.95  -1.61  -1.46  -1.33  -1.15   3732    1.0
log_lik[58]  -1.06  1.9e-3    0.1  -1.28  -1.13  -1.06   -1.0  -0.88   2819    1.0
log_lik[59]  -1.52  3.7e-3   0.22  -2.03  -1.65  -1.49  -1.35  -1.16   3760    1.0
log_lik[60]   -1.4  3.3e-3    0.2  -1.87  -1.52  -1.38  -1.26  -1.07   3660    1.0
log_lik[61]  -1.75  4.7e-3   0.29  -2.42  -1.93  -1.71  -1.54  -1.28   3799    1.0
log_lik[62]  -1.62  4.3e-3   0.27  -2.23  -1.79   -1.6  -1.43  -1.19   3814    1.0
lp__         -28.8    0.03   1.24 -31.96  -29.4 -28.48 -27.88 -27.39   1878    1.0

Samples were drawn using NUTS at Wed Dec 19 11:19:44 2018.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

We get now much better effective sample sizes n_eff.

Check the treedepth, E-BFMI, and divergences


In [105]:
stan_utility.check_treedepth(fit_lin_std)
stan_utility.check_energy(fit_lin_std)
stan_utility.check_div(fit_lin_std)


0 of 4000 iterations saturated the maximum tree depth of 10 (0.0%)
0.0 of 4000 iterations ended with a divergence (0.0%)

Everything is fine now. The next figure shows that with the standardized data there is not much posterior correlation:


In [106]:
samples = fit_lin_std.extract(permuted=True)
plt.figure()
# preview 2000 samples
plt.scatter(samples['alpha'][:2000], samples['beta'][:2000], 50, marker='.', alpha=0.25)
plt.xlabel('alpha')
plt.ylabel('beta');


Compute the probability that the summer temperature is increasing.


In [107]:
print('Pr(beta > 0) = {}'.format(np.mean(samples['beta'] > 0)))


Pr(beta > 0) = 0.991

Plot the data, the model fit and prediction for year 2016.


In [108]:
# make slightly wider figure of 3 plots
figsize = plt.rcParams['figure.figsize'].copy()
figsize[0] *= 2.2  # width
figsize[1] *= 1.2  # width
fig, ax = plt.subplots(1, 1, figsize=figsize)

# plot 1: scatterplot and lines
color_scatter = 'C0'  # 'C0' for default color #0
color_line = 'C3'     # 'C1' for default color #1

ax.fill_between(
    x,
    np.percentile(samples['mu'], 5, axis=0),
    np.percentile(samples['mu'], 95, axis=0),
    # lighten color_line
    color=1 - 0.4*(1 - np.array(mpl.colors.to_rgb(color_line)))
)
ax.plot(
    x,
    np.percentile(samples['mu'], 50, axis=0),
    color=color_line,
    linewidth=2,
    alpha=0.8
)
ax.scatter(x, y, 10, color=color_scatter)
ax.set_xlabel('year')
ax.set_ylabel('summer temp. @Kilpisjarvi')
ax.set_xlim((1951.5, 2013.5))

az.plot_posterior(fit_lin_std, var_names=['beta','sigma','ypred'], credible_interval=0.95, round_to=2, kind='hist', bins=35)
plt.xlabel('y-prediction for x={}'.format(xpred));


Now we can also plot ridgeplot for mu samples. Ridge plot better illustrates the fact that each year is represented by distribution for mu parameter.


In [109]:
# Make complementary ridge plot
fig, axes = az.plot_forest(fit_lin_std,
                           kind='ridgeplot',
                           var_names=['mu'],
                           combined=True,
                           r_hat = False,
                           n_eff = True,
                           ridgeplot_overlap=5,
                           ridgeplot_alpha=0.7,
                           colors='lightblue',
                           figsize=(9, 7))
# Let's adjust labels (default arviz labels are not so good for large amount of distributions)
axes[0].set_yticklabels(
    [str(year) if year % 2 == 0 else '' for year in range(2013, 1952-1, -1)])
axes[0].set_ylabel('Year')
axes[0].set_xlabel('Average temperature')
# make figure compact
fig.tight_layout();


Linear Student's t model.

The temperatures used in the above analyses are averages over three months, which makes it more likely that they are normally distributed, but there can be extreme events in the weather and we can check whether more robust Student's t observation model would give different results. For the Student's t distribution we also need the degree of freedom, nu, as an additional parameter.


In [110]:
with open('lin_t.stan') as file:
    print(file.read())


// Linear student-t model
data {
  int<lower=0> N; // number of data points
  vector[N] x; //
  vector[N] y; //
  real xpred; // input location for prediction
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;
  real<lower=1, upper=80> nu;
}
transformed parameters {
  vector[N] mu;
  mu = alpha + beta*x;
}
model {
  nu ~ gamma(2, 0.1); // Juarez and Steel(2010)
  y ~ student_t(nu, mu, sigma);
}
generated quantities {
  real ypred;
  vector[N] log_lik;
  ypred = normal_rng(alpha + beta*xpred, sigma);
  for (i in 1:N)
    log_lik[i] = student_t_lpdf(y[i] | nu, mu[i], sigma);
}


In [111]:
model = stan_utility.compile_model('lin_t.stan')
fit_lin_t = model.sampling(data=data, seed=194838)
samples = fit.extract(permuted=True)


Using cached StanModel
WARNING:pystan:108 of 4000 iterations saturated the maximum tree depth of 10 (2.7%)
WARNING:pystan:Run again with max_treedepth larger than 10 to avoid saturation

Check the n_eff and Rhat


In [112]:
print(fit_lin_t)


Inference for Stan model: anon_model_985e43a8fe28088e3317d00edd324bab.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

              mean se_mean     sd   2.5%    25%    50%    75%  97.5%  n_eff   Rhat
alpha       -32.55    0.44  16.05 -64.15 -43.72 -32.53  -21.6  -0.96   1343    1.0
beta          0.02  2.2e-4 8.1e-3 5.2e-3   0.02   0.02   0.03   0.04   1342    1.0
sigma         1.08  2.6e-3   0.11   0.88    1.0   1.08   1.15   1.32   1827    1.0
nu           23.97    0.33  13.37   5.72  14.18   21.2  30.96  57.83   1670    1.0
mu[1]         8.67  7.1e-3   0.29   8.11   8.47   8.66   8.87   9.24   1684    1.0
mu[2]         8.69  6.9e-3   0.28   8.14    8.5   8.69   8.89   9.25   1701    1.0
mu[3]         8.71  6.6e-3   0.28   8.18   8.53   8.71    8.9   9.25   1721    1.0
mu[4]         8.73  6.4e-3   0.27   8.21   8.55   8.73   8.92   9.26   1742    1.0
mu[5]         8.75  6.2e-3   0.26   8.25   8.58   8.75   8.94   9.26   1765    1.0
mu[6]         8.78  6.0e-3   0.26   8.28    8.6   8.77   8.95   9.27   1790    1.0
mu[7]          8.8  5.8e-3   0.25   8.31   8.63    8.8   8.97   9.28   1817    1.0
mu[8]         8.82  5.6e-3   0.24   8.35   8.65   8.82   8.99   9.28   1848    1.0
mu[9]         8.84  5.4e-3   0.24   8.38   8.68   8.84    9.0   9.29   1881    1.0
mu[10]        8.86  5.2e-3   0.23   8.41    8.7   8.86   9.02    9.3   1918    1.0
mu[11]        8.88  5.0e-3   0.22   8.44   8.73   8.88   9.04   9.31   1959    1.0
mu[12]         8.9  4.8e-3   0.22   8.48   8.75    8.9   9.05   9.32   2004    1.0
mu[13]        8.92  4.7e-3   0.21   8.51   8.78   8.92   9.07   9.33   2054    1.0
mu[14]        8.94  4.5e-3   0.21   8.54    8.8   8.94   9.09   9.34   2110    1.0
mu[15]        8.97  4.3e-3    0.2   8.57   8.83   8.97    9.1   9.35   2172    1.0
mu[16]        8.99  4.1e-3   0.19    8.6   8.86   8.99   9.12   9.36   2241    1.0
mu[17]        9.01  3.9e-3   0.19   8.63   8.88   9.01   9.14   9.37   2317    1.0
mu[18]        9.03  3.7e-3   0.18   8.66   8.91   9.03   9.15   9.38   2401    1.0
mu[19]        9.05  3.6e-3   0.18   8.69   8.93   9.05   9.17    9.4   2496    1.0
mu[20]        9.07  3.4e-3   0.17   8.72   8.96   9.07   9.19   9.41   2600    1.0
mu[21]        9.09  3.3e-3   0.17   8.75   8.98   9.09   9.21   9.42   2715    1.0
mu[22]        9.11  3.1e-3   0.17   8.78   9.01   9.12   9.23   9.43   2841    1.0
mu[23]        9.14  3.0e-3   0.16   8.81   9.03   9.14   9.24   9.45   2978    1.0
mu[24]        9.16  2.8e-3   0.16   8.83   9.05   9.16   9.26   9.46   3123    1.0
mu[25]        9.18  2.7e-3   0.16   8.86   9.07   9.18   9.28   9.48   3274    1.0
mu[26]         9.2  2.6e-3   0.15   8.89    9.1    9.2    9.3    9.5   3427    1.0
mu[27]        9.22  2.5e-3   0.15   8.92   9.12   9.22   9.32   9.52   3613    1.0
mu[28]        9.24  2.4e-3   0.15   8.94   9.14   9.24   9.34   9.53   3814    1.0
mu[29]        9.26  2.3e-3   0.15   8.97   9.16   9.26   9.36   9.55   3991    1.0
mu[30]        9.28  2.3e-3   0.15   8.99   9.19   9.29   9.38   9.57   4125    1.0
mu[31]         9.3  2.2e-3   0.14   9.02   9.21   9.31    9.4   9.59   4201    1.0
mu[32]        9.33  2.2e-3   0.14   9.04   9.23   9.33   9.42   9.61   4209    1.0
mu[33]        9.35  2.2e-3   0.14   9.06   9.25   9.35   9.44   9.63   4146    1.0
mu[34]        9.37  2.3e-3   0.15   9.08   9.27   9.37   9.46   9.66   4022    1.0
mu[35]        9.39  2.4e-3   0.15    9.1   9.29   9.39   9.49   9.68   3850    1.0
mu[36]        9.41  2.5e-3   0.15   9.12   9.31   9.41   9.51   9.71   3648    1.0
mu[37]        9.43  2.6e-3   0.15   9.14   9.33   9.43   9.53   9.73   3432    1.0
mu[38]        9.45  2.7e-3   0.15   9.16   9.35   9.45   9.55   9.76   3216    1.0
mu[39]        9.47  2.8e-3   0.16   9.17   9.37   9.48   9.57   9.79   3054    1.0
mu[40]        9.49  2.9e-3   0.16   9.19   9.39   9.49    9.6   9.81   2903    1.0
mu[41]        9.52  3.1e-3   0.16   9.21    9.4   9.51   9.62   9.84   2763    1.0
mu[42]        9.54  3.2e-3   0.17   9.22   9.42   9.54   9.64   9.87   2634    1.0
mu[43]        9.56  3.4e-3   0.17   9.24   9.44   9.56   9.67    9.9   2517    1.0
mu[44]        9.58  3.6e-3   0.17   9.25   9.46   9.58   9.69   9.93   2404    1.0
mu[45]         9.6  3.7e-3   0.18   9.26   9.48    9.6   9.72   9.96   2302    1.0
mu[46]        9.62  3.9e-3   0.18   9.27    9.5   9.62   9.74   9.99   2212    1.0
mu[47]        9.64  4.1e-3   0.19   9.28   9.51   9.64   9.76  10.02   2132    1.0
mu[48]        9.66  4.3e-3   0.19    9.3   9.53   9.66   9.79  10.05   2062    1.0
mu[49]        9.68  4.5e-3    0.2    9.3   9.55   9.68   9.81  10.08   1999    1.0
mu[50]        9.71  4.7e-3   0.21   9.32   9.56    9.7   9.84  10.12   1943    1.0
mu[51]        9.73  4.9e-3   0.21   9.33   9.58   9.72   9.87  10.15   1893    1.0
mu[52]        9.75  5.0e-3   0.22   9.34    9.6   9.74   9.89  10.18   1848    1.0
mu[53]        9.77  5.2e-3   0.22   9.34   9.61   9.76   9.92  10.22   1808    1.0
mu[54]        9.79  5.4e-3   0.23   9.35   9.63   9.79   9.94  10.25   1772    1.0
mu[55]        9.81  5.6e-3   0.24   9.36   9.65   9.81   9.97  10.29   1740    1.0
mu[56]        9.83  5.9e-3   0.24   9.37   9.67   9.83   9.99  10.32   1711    1.0
mu[57]        9.85  6.1e-3   0.25   9.38   9.68   9.85  10.02  10.35   1685    1.0
mu[58]        9.87  6.3e-3   0.26   9.38    9.7   9.87  10.04  10.39   1661    1.0
mu[59]         9.9  6.5e-3   0.26   9.39   9.72   9.89  10.07  10.42   1640    1.0
mu[60]        9.92  6.7e-3   0.27    9.4   9.73   9.91   10.1  10.45   1620    1.0
mu[61]        9.94  6.9e-3   0.28    9.4   9.75   9.93  10.12  10.49   1602    1.0
mu[62]        9.96  7.1e-3   0.28   9.41   9.77   9.95  10.15  10.52   1586    1.0
ypred         10.0    0.02   1.11   7.81   9.25  10.01  10.77  12.16   3687    1.0
log_lik[1]   -1.11  3.4e-3   0.14  -1.43  -1.19   -1.1  -1.01  -0.87   1758    1.0
log_lik[2]   -3.05    0.01   0.53  -4.17  -3.39   -3.0  -2.66  -2.12   1816    1.0
log_lik[3]   -1.26  4.2e-3   0.18  -1.67  -1.36  -1.23  -1.13  -0.97   1868    1.0
log_lik[4]   -1.22  4.1e-3   0.17  -1.61  -1.33  -1.21   -1.1  -0.94   1772    1.0
log_lik[5]   -1.23  4.0e-3   0.17  -1.62  -1.34  -1.22  -1.11  -0.95   1792    1.0
log_lik[6]   -1.55  5.6e-3   0.24  -2.09   -1.7  -1.53  -1.38  -1.16   1829    1.0
log_lik[7]   -1.05  2.6e-3   0.11  -1.29  -1.12  -1.05  -0.98  -0.84   1873    1.0
log_lik[8]   -1.07  2.5e-3   0.12  -1.32  -1.14  -1.06  -0.99  -0.87   2040    1.0
log_lik[9]   -2.96    0.01   0.45  -3.94  -3.26  -2.92  -2.64  -2.18   1949    1.0
log_lik[10]  -1.71  5.6e-3   0.25  -2.25  -1.87  -1.69  -1.53   -1.3   1950    1.0
log_lik[11]  -1.76  5.5e-3   0.24  -2.29  -1.91  -1.73  -1.58  -1.34   1999    1.0
log_lik[12]  -1.03  2.4e-3    0.1  -1.25   -1.1  -1.03  -0.96  -0.83   1887    1.0
log_lik[13]  -1.21  3.1e-3   0.14   -1.5  -1.29   -1.2  -1.11  -0.96   2003    1.0
log_lik[14]  -2.33  6.8e-3   0.31  -2.99  -2.53  -2.31  -2.11  -1.78   2135    1.0
log_lik[15]  -1.08  2.4e-3   0.11  -1.31  -1.15  -1.07   -1.0  -0.88   2057    1.0
log_lik[16]  -1.04  2.4e-3    0.1  -1.26  -1.11  -1.04  -0.97  -0.84   1910    1.0
log_lik[17]   -1.9  4.8e-3   0.23  -2.39  -2.04  -1.88  -1.73   -1.5   2305    1.0
log_lik[18]  -1.97  5.1e-3   0.24  -2.51  -2.12  -1.95   -1.8  -1.57   2284    1.0
log_lik[19]  -2.63  7.0e-3   0.33  -3.36  -2.84  -2.59  -2.38  -2.06   2273    1.0
log_lik[20]  -1.03  2.4e-3    0.1  -1.24   -1.1  -1.03  -0.97  -0.84   1901    1.0
log_lik[21]  -3.03  7.9e-3   0.38  -3.85  -3.27   -3.0  -2.76  -2.38   2335    1.0
log_lik[22]  -1.38  2.5e-3   0.14  -1.67  -1.46  -1.37  -1.28  -1.14   2858    1.0
log_lik[23]  -1.44  2.6e-3   0.14  -1.74  -1.53  -1.43  -1.34  -1.19   2976    1.0
log_lik[24]  -3.99  9.6e-3   0.49  -5.04  -4.31  -3.96  -3.65  -3.11   2636    1.0
log_lik[25]  -1.45  2.5e-3   0.14  -1.74  -1.54  -1.44  -1.36  -1.21   3061    1.0
log_lik[26]  -1.31  2.2e-3   0.12  -1.55  -1.39  -1.31  -1.23  -1.09   2938    1.0
log_lik[27]  -1.05  2.3e-3    0.1  -1.25  -1.12  -1.05  -0.98  -0.87   1943    1.0
log_lik[28]  -1.22  1.9e-3   0.11  -1.44  -1.28  -1.21  -1.15  -1.02   3121    1.0
log_lik[29]  -1.81  3.1e-3   0.18   -2.2  -1.92   -1.8  -1.69   -1.5   3309    1.0
log_lik[30]  -2.24  4.2e-3   0.23  -2.73  -2.39  -2.23  -2.08  -1.82   3003    1.0
log_lik[31]  -2.14  3.9e-3   0.22  -2.59  -2.27  -2.12  -1.98  -1.75   3119    1.0
log_lik[32]  -1.69  2.5e-3   0.16  -2.02  -1.78  -1.68  -1.58  -1.41   3715    1.0
log_lik[33]  -1.42  2.1e-3   0.12  -1.68   -1.5  -1.42  -1.34   -1.2   3615    1.0
log_lik[34]  -1.07  2.2e-3    0.1  -1.27  -1.13  -1.07   -1.0  -0.88   2068    1.0
log_lik[35]  -1.02  2.4e-3    0.1  -1.23  -1.09  -1.02  -0.96  -0.83   1801    1.0
log_lik[36]  -2.87  6.3e-3   0.32  -3.55  -3.08  -2.85  -2.64   -2.3   2660    1.0
log_lik[37]  -1.36  2.0e-3   0.12  -1.61  -1.44  -1.35  -1.28  -1.14   3533    1.0
log_lik[38]  -1.03  2.3e-3    0.1  -1.23  -1.09  -1.03  -0.96  -0.83   1835    1.0
log_lik[39]  -1.33  2.1e-3   0.12  -1.58  -1.41  -1.32  -1.25  -1.11   3182    1.0
log_lik[40]  -1.06  2.2e-3    0.1  -1.26  -1.13  -1.06  -0.99  -0.87   2039    1.0
log_lik[41]  -1.14  2.3e-3   0.11  -1.37  -1.21  -1.14  -1.07  -0.94   2191    1.0
log_lik[42]  -1.11  2.4e-3   0.11  -1.33  -1.18   -1.1  -1.03  -0.91   2083    1.0
log_lik[43]  -1.02  2.4e-3    0.1  -1.23  -1.09  -1.02  -0.95  -0.82   1794    1.0
log_lik[44]  -1.37  2.9e-3   0.14  -1.67  -1.46  -1.36  -1.27  -1.13   2302    1.0
log_lik[45]  -1.06  2.3e-3    0.1  -1.27  -1.13  -1.06  -0.99  -0.86   2004    1.0
log_lik[46]  -1.37  3.0e-3   0.15  -1.68  -1.47  -1.37  -1.27  -1.12   2401    1.0
log_lik[47]  -1.05  2.4e-3   0.11  -1.27  -1.12  -1.05  -0.98  -0.85   1900    1.0
log_lik[48]  -1.23  2.9e-3   0.13  -1.51  -1.31  -1.22  -1.13  -0.99   2071    1.0
log_lik[49]  -1.24  3.1e-3   0.14  -1.54  -1.32  -1.23  -1.14   -1.0   2039    1.0
log_lik[50]  -1.03  2.4e-3    0.1  -1.24  -1.09  -1.02  -0.96  -0.83   1844    1.0
log_lik[51]  -2.24  6.9e-3   0.31  -2.88  -2.43  -2.22  -2.02  -1.69   1997    1.0
log_lik[52]  -1.44  4.1e-3   0.18  -1.83  -1.55  -1.42  -1.31  -1.13   1966    1.0
log_lik[53]  -1.08  2.6e-3   0.11  -1.32  -1.15  -1.08   -1.0  -0.87   1916    1.0
log_lik[54]  -1.49  4.6e-3    0.2  -1.93  -1.62  -1.47  -1.35  -1.15   1887    1.0
log_lik[55]  -1.19  3.3e-3   0.14   -1.5  -1.28  -1.18  -1.09  -0.94   1880    1.0
log_lik[56]  -1.13  3.1e-3   0.13  -1.42  -1.22  -1.12  -1.04   -0.9   1857    1.0
log_lik[57]  -1.53  5.5e-3   0.23  -2.05  -1.68  -1.51  -1.37  -1.16   1745    1.0
log_lik[58]  -1.04  2.5e-3   0.11  -1.27  -1.11  -1.04  -0.97  -0.83   1937    1.0
log_lik[59]  -1.57  6.1e-3   0.25  -2.14  -1.73  -1.54  -1.39  -1.17   1704    1.0
log_lik[60]  -1.39  5.1e-3   0.21  -1.88  -1.52  -1.37  -1.24  -1.05   1707    1.0
log_lik[61]  -1.83  7.7e-3   0.32  -2.53  -2.02   -1.8   -1.6  -1.31   1656    1.0
log_lik[62]  -1.62  6.7e-3   0.28  -2.25  -1.79   -1.6  -1.42  -1.17   1680    1.0
lp__        -56.54    0.04   1.47 -60.15  -57.3 -56.18 -55.45 -54.71   1362    1.0

Samples were drawn using NUTS at Wed Dec 19 11:20:12 2018.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

Again without standardization we get smaller n_eff's, but large enough for practical purposes.

Check the treedepth, E-BFMI, and divergences


In [113]:
stan_utility.check_treedepth(fit_lin_t)
stan_utility.check_energy(fit_lin_t)
stan_utility.check_div(fit_lin_t)


108 of 4000 iterations saturated the maximum tree depth of 10 (2.7%)
Run again with max_depth set to a larger value to avoid saturation
0.0 of 4000 iterations ended with a divergence (0.0%)

We see again many iterations saturating the maximum tree depth, which is harmful for the efficiency but doesn't invalidate the results.

Compute the probability that the summer temperature is increasing.


In [114]:
samples = fit_lin_t.extract(permuted=True)
print('Pr(beta > 0) = {}'.format(np.mean(samples['beta'] > 0)))


Pr(beta > 0) = 0.9965

We get similar probability as with Gaussian obervation model.

Plot the data, the model fit, and the marginal posteriors for sigma and nu.


In [115]:
figsize = plt.rcParams['figure.figsize'].copy()
figsize[0] *= 2.2  # width
figsize[1] *= 1.2  # width
fig, ax = plt.subplots(1, 1, figsize=figsize)

# plot 1: scatterplot and lines
color_scatter = 'C0'  # 'C0' for default color #0
color_line = 'C3'     # 'C1' for default color #1

ax.fill_between(
    x,
    np.percentile(samples['mu'], 5, axis=0),
    np.percentile(samples['mu'], 95, axis=0),
    # lighten color_line
    color=1 - 0.4*(1 - np.array(mpl.colors.to_rgb(color_line)))
)
ax.plot(
    x,
    np.percentile(samples['mu'], 50, axis=0),
    color=color_line,
    linewidth=2,
    alpha=0.8
)
ax.scatter(x, y, 10, color=color_scatter)
ax.set_xlabel('year')
ax.set_ylabel('summer temp. @Kilpisjarvi')
ax.set_xlim((1951.5, 2013.5))

az.plot_posterior(fit_lin_t, var_names=['beta','sigma'], credible_interval=0.95, round_to=2, kind='hist', bins=35)
az.plot_posterior(fit_lin_t, var_names=['ypred', 'nu'], credible_interval=0.95, round_to=2, kind='hist', bins=35)

plt.xlabel('y-prediction for x={}'.format(xpred));
# make figure compact
fig.tight_layout();


The posterior of the degree of freedom "nu" reveals that Gaussian model is likely to be ok, as Student's t with nu > 20 is very close to Gaussian.

Pareto-smoothed importance-sampling leave-one-out cross-validation (PSIS-LOO)

We can use leave-one-out cross-validation to compare the expected predictive performance. For the following three lines to execute, the log-likelihood needs to be evaluated in the stan code. For an example, see:


In [131]:
# Convert stan objects to ArviZ inference data objects
inference_lin = az.convert_to_inference_data(fit_lin, log_likelihood='log_lik')
inference_lin_t = az.convert_to_inference_data(fit_lin_t, log_likelihood='log_lik')

# Store inference data objects to dictionary
models = dict([
         ("linear_model", inference_lin),
         ("linear_model_t", inference_lin_t)
             ])

# Compare models with loo
comparison = az.compare(models, ic='loo')
print(comparison)


                    loo     ploo      dloo weight       se       dse warning
linear_model    192.957  2.80436         0      1  10.0009         0       0
linear_model_t  193.434   2.7311  0.477703      0  10.1958  0.847232       0

Loo values are similar for models. Hence, there is no practical difference between Gaussian and Student’s t observation model for this data.

Comparison of k groups with hierarchical models

In following, for each year, we analyze the monthly average temperatures for three summer months: June, July and August. In previous example, we used the average of these three summer months as estimate for yearly average summer temperarute. Now we would like to know, if there is a difference between three summer months: June, July and August.


In [117]:
data_path = os.path.abspath(
    os.path.join(
        os.path.pardir,
        'utilities_and_data',
        'kilpisjarvi-summer-temp.csv'
    )
)
# Load data
d = np.loadtxt(data_path, dtype=np.double, delimiter=';', skiprows=1)
# summer months are numbered from 1 to 3
x = np.tile(np.arange(1, 4), d.shape[0]) # create array with repeating numbers 1,2,3 for each year
y = d[:, 1:4].ravel()
N = len(x)
data = dict(
    N = N,
    K = 3,  # 3 groups
    x = x,  # group indicators
    y = y)  # observations

Common variance (ANOVA) model


In [118]:
with open('grp_aov.stan') as file:
    print(file.read())


// Comparison of k groups with common variance (ANOVA)
data {
  int<lower=0> N; // number of data points
  int<lower=0> K; // number of groups
  int<lower=1,upper=K> x[N]; // group indicator
  vector[N] y; //
}
parameters {
  vector[K] mu;        // group means
  real<lower=0> sigma; // common std
}
model {
  y ~ normal(mu[x], sigma);
}

Fit the model


In [119]:
model = stan_utility.compile_model('grp_aov.stan')
fit = model.sampling(data=data, seed=194838)
samples = fit.extract(permuted=True)


Using cached StanModel

Check the n_eff and Rhat values for sampled model.


In [120]:
print(fit)


Inference for Stan model: anon_model_7564a28d1f0313ff396284979a4361d6.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

        mean se_mean     sd   2.5%    25%    50%    75%  97.5%  n_eff   Rhat
mu[1]   7.54  2.9e-3    0.2   7.15   7.41   7.54   7.67   7.92   4644    1.0
mu[2]  10.97  2.9e-3    0.2  10.58  10.83  10.97   11.1  11.35   4738    1.0
mu[3]   9.44  2.7e-3   0.19   9.06    9.3   9.43   9.57   9.82   4937    1.0
sigma   1.53  1.2e-3   0.08   1.38   1.47   1.53   1.58    1.7   4344    1.0
lp__  -170.9    0.03    1.4 -174.3 -171.6 -170.5 -169.8 -169.1   1957    1.0

Samples were drawn using NUTS at Wed Dec 19 11:20:18 2018.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

Check the treedepth, E-BFMI, and divergences


In [121]:
stan_utility.check_treedepth(fit)
stan_utility.check_energy(fit)
stan_utility.check_div(fit)


0 of 4000 iterations saturated the maximum tree depth of 10 (0.0%)
0.0 of 4000 iterations ended with a divergence (0.0%)

Plot group mean distributions and matrix of probabilities that one mu is larger than other.


In [122]:
# matrix of probabilities that one mu is larger than other
mu = fit.extract(permuted=True)['mu']
ps = np.zeros((3, 3))
for k1 in range(3):
    for k2 in range(k1+1, 3):
        ps[k1, k2] = np.mean(mu[:, k1] > mu[:, k2])
        ps[k2, k1] = 1 - ps[k1, k2]
print("Matrix of probabilities that one mu is larger than other:")
print(ps)

months = ['June', 'July', 'August']
fig, ax = az.plot_forest(fit,
                           kind='ridgeplot',
                           var_names=['mu'],
                           combined=True,
                           ridgeplot_overlap=1,
                           r_hat=False,
                           n_eff=False,
                           figsize=(7, 7))
ax[0].set_yticklabels(months[::-1]) # Order of months if reversed, since arviz flips the order of ticks in plot
ax[0].set_xlabel('Average temperature');


Matrix of probabilities that one mu is larger than other:
[[0. 0. 0.]
 [1. 0. 1.]
 [1. 0. 0.]]

Common variance and hierarchical prior for mean.

Results do not differ much from the previous, because there is only few groups and quite much data per group, but this works as an example of a hierarchical model.


In [123]:
with open('grp_prior_mean.stan') as file:
    print(file.read())


// Comparison of k groups with common variance and
// hierarchical prior for the mean
data {
    int<lower=0> N; // number of data points
    int<lower=0> K; // number of groups
    int<lower=1,upper=K> x[N]; // group indicator
    vector[N] y; //
}
parameters {
    real mu0;             // prior mean
    real<lower=0> sigma0; // prior std
    vector[K] mu;         // group means
    real<lower=0> sigma;  // common std
}
model {
  mu0 ~ normal(10,10);      // weakly informative prior
  sigma0 ~ cauchy(0,4);     // weakly informative prior
  mu ~ normal(mu0, sigma0); // population prior with unknown parameters
  sigma ~ cauchy(0,4);      // weakly informative prior
  y ~ normal(mu[x], sigma);
}

Fit the model


In [124]:
model = stan_utility.compile_model('grp_prior_mean.stan')
fit = model.sampling(data=data, seed=194838)
samples = fit.extract(permuted=True)


Using cached StanModel

Plot group mean distributions and matrix of probabilities that one mu is larger than other. The probabilities are practically either 1 or 0 since the distributions are not overlapping almost at all. This can be visually seen from ridge plot below.


In [125]:
# matrix of probabilities that one mu is larger than other
mu = fit.extract(permuted=True)['mu']
ps = np.zeros((3, 3))
for k1 in range(3):
    for k2 in range(k1+1, 3):
        ps[k1, k2] = np.mean(mu[:, k1] > mu[:, k2])
        ps[k2, k1] = 1 - ps[k1, k2]
print("Matrix of probabilities that one mu is larger than other:")
print(ps)

months = ['June', 'July', 'August']
fig, ax = az.plot_forest(fit,
                           kind='ridgeplot',
                           var_names=['mu'],
                           combined=True,
                           ridgeplot_overlap=1,
                           r_hat=False,
                           n_eff=False,
                           figsize=(7, 7))
ax[0].set_yticklabels(months[::-1])
ax[0].set_xlabel('Average temperature');


Matrix of probabilities that one mu is larger than other:
[[0. 0. 0.]
 [1. 0. 1.]
 [1. 0. 0.]]

Unequal variance and hierarchical prior for mean and variance

Results do not differ much from the previous, because there is only few groups and quite much data per group, but this works as an example of a hierarchical model.


In [126]:
with open('grp_prior_mean_var.stan') as file:
    print(file.read())


// Comparison of k groups with unequal variance and
// hierarchical priors for the mean and the variance
data {
  int<lower=0> N; // number of data points
  int<lower=0> K; // number of groups
  int<lower=1,upper=K> x[N]; // group indicator
  vector[N] y; //
}
parameters {
  real mu0;                 // prior mean
  real<lower=0> musigma0;   // prior std
  vector[K] mu;             // group means
  real lsigma0;             // prior mean
  real<lower=0> lsigma0s;   // prior std
  vector<lower=0>[K] sigma; // group stds
}
model {
  mu0 ~ normal(10, 10);       // weakly informative prior
  musigma0 ~ cauchy(0,10);    // weakly informative prior
  mu ~ normal(mu0, musigma0); // population prior with unknown parameters
  lsigma0 ~ normal(0,1);      // weakly informative prior
  lsigma0s ~ normal(0,1);     // weakly informative prior
  sigma ~ cauchy(lsigma0, lsigma0s); // population prior with unknown parameters
  y ~ normal(mu[x], sigma[x]);
}

Fit the model


In [127]:
model = stan_utility.compile_model('grp_prior_mean_var.stan')
fit = model.sampling(data=data, seed=194838)
samples = fit.extract(permuted=True)


Using cached StanModel
WARNING:pystan:1 of 4000 iterations ended with a divergence (0.025%).
WARNING:pystan:Try running with adapt_delta larger than 0.8 to remove the divergences.

Plot group mean distributions and matrix of probabilities that one mu is larger than other:


In [128]:
samples = fit.extract(permuted=True)
print("std(mu0): {:.2f}".format(np.std(samples['mu0'])))
mu = samples['mu']

# matrix of probabilities that one mu is larger than other
ps = np.zeros((3, 3))
for k1 in range(3):
    for k2 in range(k1+1, 3):
        ps[k1, k2] = np.mean(mu[:, k1] > mu[:, k2])
        ps[k2, k1] = 1 - ps[k1, k2]
print("Matrix of probabilities that one mu is larger than other:")
print(ps)

months = ['June', 'July', 'August']
fig, ax = az.plot_forest(fit,
                           kind='ridgeplot',
                           var_names=['mu'],
                           combined=True,
                           ridgeplot_overlap=1,
                           r_hat=False,
                           n_eff=False,
                           figsize=(7, 7))
ax[0].set_yticklabels(months[::-1])
ax[0].set_xlabel('Average temperature');


std(mu0): 2.55
Matrix of probabilities that one mu is larger than other:
[[0. 0. 0.]
 [1. 0. 1.]
 [1. 0. 0.]]