In [2]:
using Gadfly
using Distributions;
Here we have $X_i \sim N(\theta,1)$ with $\theta \sim N(\mu_0, \phi_0^{-1})$, which admits predictions like
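$$X_{n+1} \mid x_{1:n} \sim N\left(\mu_n,\; 1 + \phi_n^{-1}\right),$$

where $\phi_n = \phi_0 + n$ and $\mu_n = (\phi_0 \mu_0 + \sum_{i=1}^{n} x_i)/\phi_n$ are the posterior precision and mean of $\theta$ after $n$ observations, so the predictive variance is the observation variance plus the posterior variance of $\theta$. The function below implements this rule with $\mu_0 = 0$ and $\phi_0 = 1$ hard-coded.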
In [15]:
PosteriorPredictiveRule2 = function(pastX::Array{Float64})
    μ_0 = 0.
    ϕ_0 = 1.
    n = size(pastX,1)
    if n > 0
        ϕ_t = ϕ_0 + n # the precision of the posterior of θ
        μ_t = (μ_0 * ϕ_0 + sum(pastX)) / ϕ_t # the mean of the posterior of θ
    else
        ϕ_t = ϕ_0
        μ_t = μ_0
    end
    # draw from the posterior predictive: variance is the observation variance (1) plus the posterior variance of θ
    x_new = rand(Normal(μ_t, sqrt(1 + ϕ_t^-1)))
    return x_new
end;
Repeated simulation from this predictive rule should produce sample paths whose long-run averages constitute draws from the standard posterior distribution on $\theta$ [cf. https://en.wikipedia.org/wiki/Conjugate_prior]. If we're not actually conditioning on any observed data, that distribution is simply the prior, $N(\theta \mid \mu_0, \phi_0^{-1})$, with $\mu_0$ and $\phi_0$ as above.
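The reasoning, in brief: simulating $X_1, X_2, \ldots$ from successive posterior predictives is just drawing from the joint marginal of the model, which by de Finetti's representation of this exchangeable sequence is equivalent to first drawing $\theta$ from its prior and then drawing the $X_i$ i.i.d. $N(\theta, 1)$. Hence

$$\bar{X}_T = \frac{1}{T}\sum_{t=1}^{T} X_t \;\to\; \theta \sim N(\mu_0, \phi_0^{-1}) \quad \text{almost surely as } T \to \infty,$$

and the finite-horizon averages computed below are approximate draws from that distribution.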
In [16]:
# define the terms of the forward simulations
horizon = 1000
nits = 5000;
sims = zeros(Float64,horizon, nits);
In [17]:
# simulate out to the forecast horizon for a number of separate Monte Carlo iterations
for jj = 1:nits
    for ii = 1:horizon
        if ii > 1
            sims[ii,jj] = PosteriorPredictiveRule2(sims[1:(ii-1), jj])
        else
            sims[ii,jj] = PosteriorPredictiveRule2(Float64[])
        end
    end
end;
# calculate the sample means of each sample path
samplemeans = mean(sims,1)
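As a quick sanity check, the path averages should have Monte Carlo mean close to $\mu_0 = 0$ and standard deviation close to $\phi_0^{-1/2} = 1$; something along these lines would confirm it:
# rough check of the Monte Carlo moments of the path averages against the prior
mean(samplemeans), std(samplemeans)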
In [20]:
plot(layer(x=samplemeans, Geom.histogram(density=true, bincount=50), order=1),
layer(x=-3:.1:3, y=pdf(Normal(0, 1), -3:.1:3), Geom.line, Theme(default_color=colorant"grey"), order=2),
Guide.xlabel("θ"), Guide.ylabel("density"))
Out[20]:
The simulated sample-path averages provide a good approximation to the analytical distribution (here, the prior on $\theta$, since we condition on no data) shown in grey. This result is alluded to in the paper, but there's no corresponding figure there. End of example the second.