Example 2: A Predictivist Bayes Take on a Gaussian Likelihood with Known Variance


In [2]:
using Gadfly
using Distributions
using Statistics; # for mean/var on Julia ≥ 1.0

Here we have $X_i \sim N(\theta, 1)$ with prior $\theta \sim N(\mu_0, \phi_0^{-1})$ (second argument a variance, so $\phi_0$ is the prior precision). Conjugacy gives the posterior $\theta \mid x_{1:n} \sim N(\mu_n, \phi_n^{-1})$ with $\phi_n = \phi_0 + n$ and $\mu_n = (\mu_0 \phi_0 + \sum_{i=1}^n x_i)/\phi_n$, and hence the posterior predictive $X_{n+1} \mid x_{1:n} \sim N(\mu_n, 1 + \phi_n^{-1})$. In code:


In [15]:
function PosteriorPredictiveRule2(pastX::Vector{Float64})
    # prior hyperparameters: θ ~ N(μ_0, ϕ_0⁻¹)
    μ_0 = 0.0
    ϕ_0 = 1.0

    n = length(pastX)
    if n > 0
        ϕ_t = ϕ_0 + n # the precision of the posterior of θ
        μ_t = (μ_0 * ϕ_0 + sum(pastX)) / ϕ_t # the mean of the posterior of θ
    else
        ϕ_t = ϕ_0
        μ_t = μ_0
    end

    # the predictive variance is 1 + ϕ_t⁻¹ (unit observation noise plus the
    # posterior uncertainty in θ), so the standard deviation is its square root
    x_new = rand(Normal(μ_t, sqrt(1 + ϕ_t^-1)))

    return x_new
end;
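As a quick check of both branches (hypothetical calls, not cells from the original notebook): with an empty history the rule draws from the prior predictive $N(0, 2)$, while conditioning on observations near 3 pulls the draw toward 3.

# draw from the prior predictive (no conditioning data)
PosteriorPredictiveRule2(Float64[])

# draw conditioned on a few made-up observations near 3.0
PosteriorPredictiveRule2([2.9, 3.1, 3.0])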

Repeated simulation from this predictive rule should produce sample paths whose averages constitute draws from the standard posterior distribution on $\theta$ [cf. https://en.wikipedia.org/wiki/Conjugate_prior]. Since we're not actually conditioning on any observed data here, that distribution is just the prior, $N(\theta \mid \mu_0, \phi_0^{-1})$, with $\mu_0$ and $\phi_0$ as above.
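To see why, note that the posterior mean is a martingale under the predictive rule; here is a sketch of the variance calculation (my addition, not in the original notebook). Writing $\phi_n = \phi_0 + n$ and drawing $X_{n+1} = \mu_n + \sigma_n \varepsilon_{n+1}$ with $\sigma_n^2 = 1 + \phi_n^{-1} = \phi_{n+1}/\phi_n$ and $\varepsilon_{n+1} \sim N(0,1)$, the posterior-mean update is

$$\mu_{n+1} = \frac{\phi_n \mu_n + X_{n+1}}{\phi_{n+1}} = \mu_n + \frac{\sigma_n}{\phi_{n+1}}\,\varepsilon_{n+1},$$

so each step adds variance $\sigma_n^2/\phi_{n+1}^2 = 1/(\phi_n \phi_{n+1}) = \phi_n^{-1} - \phi_{n+1}^{-1}$, and the total telescopes:

$$\operatorname{Var}\!\left(\lim_{n\to\infty} \mu_n\right) = \sum_{n=0}^{\infty} \left(\phi_n^{-1} - \phi_{n+1}^{-1}\right) = \phi_0^{-1}.$$

The sample-path average shares this limit (since $\bar{X}_n$ and $\mu_n$ differ by $O(1/n)$ terms), so it is distributed as $N(\mu_0, \phi_0^{-1})$, i.e. the prior.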


In [16]:
# define the terms of the forward simulations
horizon = 1000
nits = 5000;
sims = zeros(Float64,horizon, nits);

In [17]:
# simulate out to the forecast horizon for a number of separate Monte Carlo iterations
for jj = 1:nits
    for ii = 1:horizon
        if ii > 1
            sims[ii,jj] = PosteriorPredictiveRule2(sims[1:(ii-1), jj])
        else
            sims[ii,jj] = PosteriorPredictiveRule2(Float64[])
        end
    end
end;
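A note on cost: because PosteriorPredictiveRule2 recomputes sum(pastX) on every call, the inner loop above is quadratic in the horizon. A minimal sketch of an equivalent linear-time simulation that carries a running sum instead (the function name and keyword arguments are mine, not from the notebook):

function simulate_path(horizon::Int; μ_0=0.0, ϕ_0=1.0)
    path = zeros(horizon)
    s = 0.0 # running sum of the simulated path so far
    for n in 0:(horizon - 1)
        ϕ_t = ϕ_0 + n
        μ_t = (μ_0 * ϕ_0 + s) / ϕ_t
        x = rand(Normal(μ_t, sqrt(1 + ϕ_t^-1))) # same predictive rule as above
        path[n + 1] = x
        s += x
    end
    return path
end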

# calculate the sample means of each sample path
samplemeans = vec(mean(sims, dims=1));
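As a numerical sanity check (my addition, not in the original notebook), the mean and variance of these path averages should be close to $\mu_0 = 0$ and $\phi_0^{-1} = 1$:

# compare the Monte Carlo moments against the analytical N(μ_0, ϕ_0⁻¹)
println("mean of path averages:     ", mean(samplemeans)) # ≈ 0
println("variance of path averages: ", var(samplemeans))  # ≈ 1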

In [20]:
plot(layer(x=samplemeans, Geom.histogram(density=true, bincount=50), order=1),
layer(x=-3:.1:3, y=pdf.(Normal(0, 1), -3:.1:3), Geom.line, Theme(default_color=colorant"grey"), order=2),
Guide.xlabel("θ"), Guide.ylabel("density"))


Out[20]:
[Figure: density histogram of the sample-path means with the analytical N(μ_0, ϕ_0⁻¹) density overlaid in grey; x-axis θ, y-axis density.]

The simulated path averages provide a good approximation to the analytical distribution shown in grey (here the prior, since we condition on no data). This result is alluded to in the paper, but no corresponding figure appears there. End of example the second.