Exercise 3 (5 points)

Consider the AR(6) process

$$ X_t = a_1 X_{t-1} + a_2 X_{t-2} + \dots + a_6 X_{t-6} + \varepsilon_t, $$

where $t \ge 0$, $X_0 \sim N(0,1)$, $a = [3.9515, -7.8885, 9.7340, -7.7435, 3.8078, -0.9472]$, and $\{\varepsilon_t\}$ is a Gaussian white noise process (zero mean, unit variance).

  1. Show that the process is stationary by establishing that it is stable based on the analysis of the companion matrix (augmented Markov representation) of the process.
  2. (Optional) Compute and plot the (truncated) impulse response function up to the first 600 time steps.
  3. Simulate the process via direct iteration of the difference equation: generate a single realization of the process and throw away the first 500 transient samples, keeping only the last N = 1024 samples. Plot the sample path.
  4. (Optional) Simulate the process via convolution of the above impulse response with the noise sequence. For comparison purposes, use reflective boundary conditions for iteration of the difference equation and zeros for the convolution. Plot the sample paths obtained via direct iteration and via convolution, using the same input noise sequence.
  5. Compute and plot the biased and unbiased autocovariance and autocorrelation functions. Why does the biased case approach zero? (See Kramer & Eden, Chapter 3.) Plot the approximate 95% confidence interval for the autocorrelation function.
  6. From the simulated sample path, estimate the AR(6) process coefficients using the Burg maximum entropy method (see example in the warm-up code).
  7. (Optional) Try different model orders AR(p), p =1, ..., 20. Determine the model order using the Bayesian Information Criterion (BIC). Plot the BIC as a function of the model order.
  8. (Optional) Plot the partial autocorrelation function up to lag 20 and the corresponding approximate 95% confidence interval.
  9. Whiteness test: compute and plot the autocorrelation function of the residual error and the corresponding approximate 95% confidence interval for the estimated AR(6).
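
For the optional model-order question, one common convention for the BIC of an AR(p) fit with N samples and residual variance σ̂² can be sketched as follows (a minimal sketch, not part of the original notebook; other conventions differ by additive constants):

```julia
# BIC for an AR(p) model: N·log(σ̂²) + p·log(N).
# The candidate order with the smallest BIC is selected.
bic(N, σ2, p) = N * log(σ2) + p * log(N)

# e.g. evaluate bic(N, P_p, p) for p = 1, ..., 20 using the per-order
# prediction-error variances P_p returned by a Burg fit, and plot them
```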

In [1]:
using PlotlyJS
using StatsBase
using LinearAlgebra       # I, eigvals (companion-matrix analysis)
using Statistics          # var
using SpecialFunctions    # erfinv (confidence intervals)
using DSP                 # xcorr (autocovariance estimates)



Question 1:

  1. Show that the process is stationary by establishing that it is stable based on the analysis of the companion matrix (augmented Markov representation) of the process.

R: The process is stationary because the spectral radius of the companion matrix is strictly less than 1, i.e. all eigenvalues lie inside the unit circle, so the augmented Markov (companion) representation is stable. See the code below.
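
Writing $Z_t = (X_t, X_{t-1}, \dots, X_{t-5})^\top$, the AR(6) recursion becomes the first-order (companion) form, and stability is equivalent to $\rho(A) < 1$:

```latex
Z_t = A Z_{t-1} + \begin{pmatrix}\varepsilon_t\\0\\\vdots\\0\end{pmatrix},
\qquad
A = \begin{pmatrix}
a_1 & a_2 & \cdots & a_5 & a_6 \\
1   & 0   & \cdots & 0   & 0   \\
0   & 1   & \cdots & 0   & 0   \\
\vdots &  & \ddots &     & \vdots \\
0   & 0   & \cdots & 1   & 0
\end{pmatrix}
```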


In [2]:
p = 6
a = [3.9515, -7.8885, 9.7340, -7.7435, 3.8078, -0.9472]
b = [Matrix{Float64}(I, p-1, p-1) zeros(p-1)]

# companion matrix: first row holds the AR coefficients,
# the subdiagonal identity shifts the state
A = [a'; b]
D = eigvals(A)

# stationary iff the spectral radius is < 1
if maximum(abs.(D)) < 1.0
    println("Process is Stationary")
else
    println("Process is NOT Stationary")
end


Process is Stationary
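
The optional Question 2 (truncated impulse response) can be sketched by feeding a unit impulse through the difference equation; this is one possible approach, not part of the original notebook:

```julia
# Truncated impulse response of the AR(6) filter: iterate the recursion
# with a unit impulse as input and record the first 600 outputs.
a = [3.9515, -7.8885, 9.7340, -7.7435, 3.8078, -0.9472]
p = length(a)

H = 600
h = zeros(Float64, H)
for t in 1:H
    for i in 1:min(p, t-1)
        h[t] += a[i] * h[t-i]
    end
    t == 1 && (h[t] += 1.0)   # unit impulse at t = 1
end
# plot(h) with PlotlyJS would show the decaying oscillatory response
```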

Question 3:

  3. Simulate the process via direct iteration of the difference equation: generate a single realization of the process and throw away the first 500 transient samples, keeping only the last N = 1024 samples. Plot the sample path.

In [3]:
T = 1524          # 500 transient samples + 1024 kept samples
σ = 1.0
X = zeros(Float64, T)
X[1] = randn()    # X_0 ~ N(0,1)
time = 501:T

for t = 7:T
    for i = 1:p
        X[t] += a[i] * X[t-i]
    end
    X[t] += σ * randn()
end

In [101]:
#Plot the path
layout = Layout(;title="AR(6)",
                     legend=attr(family="Arial, sans-serif", size=20,
                                 color="grey"))
time = 501:T
p2 = plot(X[time], layout,  line_width=0.8, marker_color="blue")


Out[101]:
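
The optional Question 4 (simulation via convolution) can be sketched as below, assuming zero boundary conditions on both sides for simplicity rather than the reflective variant the exercise describes; a naive convolution is used here so the sketch carries no extra dependencies. With identical noise and zero initial conditions, the convolution and the direct iteration produce the same sample path:

```julia
a = [3.9515, -7.8885, 9.7340, -7.7435, 3.8078, -0.9472]
p = length(a)
N = 256
ε = randn(N)

# truncated impulse response of the AR filter
h = zeros(N)
for t in 1:N
    for i in 1:min(p, t-1)
        h[t] += a[i] * h[t-i]
    end
    t == 1 && (h[t] += 1.0)
end

# naive full convolution (DSP.conv would also work)
function conv_full(u, v)
    n, m = length(u), length(v)
    w = zeros(n + m - 1)
    for i in 1:n, j in 1:m
        w[i+j-1] += u[i] * v[j]
    end
    w
end

# convolution simulation: X = h * ε, truncated to N samples
X_conv = conv_full(h, ε)[1:N]

# direct iteration with zero initial conditions, same noise, for comparison
X_iter = zeros(N)
for t in 1:N
    for i in 1:min(p, t-1)
        X_iter[t] += a[i] * X_iter[t-i]
    end
    X_iter[t] += ε[t]
end
# the two sample paths agree up to floating-point error
```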

Question 5

  5. Compute and plot the biased and unbiased autocovariance and autocorrelation functions. Why does the biased case approach zero? (See Kramer & Eden, Chapter 3.) Plot the approximate 95% confidence interval for the autocorrelation function.

R: For plots see below.

R: In the biased case, the estimated autocovariance approaches zero at large lags because the number of overlapping terms in the summation decreases with the lag while the normalizing denominator stays fixed at N, so the numerator shrinks toward zero. In the unbiased case, the denominator N − k also decreases with the lag k, which compensates for the shrinking number of terms (at the cost of higher variance at large lags).
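
A minimal numeric illustration of the two normalizations, using a toy series and lag (without mean removal, purely to show the 1/N versus 1/(N − k) scaling):

```julia
x = [1.0, 2.0, 3.0, 4.0]
N = length(x)
k = 2                            # lag

s = sum(x[1:N-k] .* x[1+k:N])    # overlapping-products sum: 1*3 + 2*4 = 11
biased   = s / N                 # denominator fixed at N       → 11/4 = 2.75
unbiased = s / (N - k)           # denominator shrinks with lag → 11/2 = 5.5
```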


In [57]:
X_trunc = X[time]
N = length(X_trunc)
acf = xcorr(X_trunc, X_trunc)[N:end]   # keep non-negative lags
acf_biased = (1.0 / N) .* acf
acorr_biased = acf_biased ./ acf_biased[1]
# alternatively
# lags = collect(1:1000)
# acf_biased = autocov(X_trunc, lags)

c = 1 ./ (N .- (0:N-1))                # unbiased normalization 1/(N - k)
acf_unbiased = c .* acf
acorr_unbiased = acf_unbiased ./ acf_unbiased[1]

# approximate 95% confidence interval for the autocorrelation: ±1.96/√N
vcrit = sqrt(2) * erfinv(0.95)
lconf = -vcrit / sqrt(N);
upconf = vcrit / sqrt(N);

In [59]:
upconf


Out[59]:
0.06124887451687682

In [99]:
#Plot acf
layout = Layout(;title="Biased Autocovariance Function - AR(6)",
                     legend=attr(family="Arial, sans-serif", size=20,
                                 color="grey"))
# time = 501:T
plot(acf_biased, layout,  line_width=0.8, marker_color="blue")


Out[99]:

In [98]:
#Plot acf
layout = Layout(;title="Unbiased Autocovariance Function - AR(6)",
                     legend=attr(family="Arial, sans-serif", size=20,
                                 color="grey"))
time = 501:T
p2 = plot(acf_unbiased, layout,  line_width=0.8, marker_color="blue")


Out[98]:

In [67]:
#Plot acf
layout = Layout(;title="Biased Autocorrelation Function - AR(6) with 95% CI",
                     legend=attr(family="Arial, sans-serif", size=20,
                                 color="grey"))
# time = 501:T
# p2 = plot(acorr_biased, layout,  line_width=0.5)

#traces
lconf_trace = PlotlyJS.scatter(; y= ones(Float64, length(acorr_biased)).*lconf, marker_color="green", line_width=0.8, name="Lower CI")
acorr_trace = PlotlyJS.scatter(;y=acorr_biased, marker_color="blue", name="ACF", line_width=0.8)
upconf_trace = PlotlyJS.scatter(; y= ones(Float64, length(acorr_biased)).*upconf, marker_color="green", line_width=0.8, name="Upper CI")

data = [lconf_trace, acorr_trace, upconf_trace]
#plot
PlotlyJS.plot(data, layout)


Out[67]:

In [102]:
#Plot acf
layout = Layout(;title="Unbiased Autocorrelation Function - AR(6)",
                     legend=attr(family="Arial, sans-serif", size=20,
                                 color="grey"))
time = 501:T
p2 = plot(acorr_unbiased, layout,  line_width=0.8, marker_color="blue")


Out[102]:

Question 6

  6. From the simulated sample path, estimate the AR(6) process coefficients using the Burg maximum entropy method (see example in the warm-up code).

R: Julia has no built-in Burg estimator, so one is implemented below (the standard Burg recursion). Note the sign convention: like MATLAB's arburg, the routine returns the coefficients of the prediction-error polynomial, i.e. the negatives of the a_i in the difference equation above. P holds the prediction-error variance after each model order; its final entry is an estimate of the noise variance σ².


In [87]:
function arburg(x, p)
    # Burg maximum entropy AR estimation.
    # Returns the prediction-error polynomial coefficients (the negatives
    # of the AR coefficients a_i) and the final prediction-error variance.
    N = length(x)

    P = zeros(p+1)
    f = zeros(p+1, N)   # forward prediction errors
    b = zeros(p+1, N)   # backward prediction errors
    a = zeros(p, p)

    # Initialisation
    f[1,:] = x
    b[1,:] = x
    P[1] = var(x)

    for m = 1:p
        numer, denom = 0.0, 0.0
        for n = 0:N-m-1
            numer += b[m,n+1] * f[m,n+2]
            denom += f[m,n+2]^2 + b[m,n+1]^2
        end
        a[m,m] = -2 * numer / denom         # reflection coefficient
        for i = 1:m-1
            a[m,i] = a[m-1,i] + a[m,m] * a[m-1,m-i]
        end
        P[m+1] = P[m] * (1 - a[m,m]^2)      # prediction-error variance, order m
        for n = 0:N-m-1
            f[m+1,n+1] = f[m,n+2] + a[m,m] * b[m,n+1]
            b[m+1,n+1] = b[m,n+1] + a[m,m] * f[m,n+2]
        end
    end
    a[p,:], P[p+1]
end

a_hat, P_hat = arburg(X_trunc, p)
print(a_hat)


[-3.94058, 7.84736, -9.66293, 7.6724, -3.76728, 0.937366]

Question 9

  • Whiteness test: compute and plot the autocorrelation function of the residual error and the corresponding approximate 95% confidence interval for the estimated AR(6).

In [88]:
T = 1524
X_hat = zeros(Float64, T)
for t=7:T
    for i=1:p
        X_hat[t] += a_hat[i]X_hat[t-i]
    end
end

In [90]:
X_hat_trunc = X_hat[time]
residuals = X_trunc - X_hat_trunc;

In [96]:
#Plot acf
layout = Layout(;title="Autocorrelation of Residuals - AR(6) with 95% CI",
                     legend=attr(family="Arial, sans-serif", size=20,
                                 color="grey"))
# time = 501:T
acorr_res = autocor(residuals, 1:1023)

# #traces
lconf_trace = PlotlyJS.scatter(; y= ones(Float64, length(acorr_res)).*lconf, marker_color="green", line_width=0.8, name="Lower CI")
acorr_trace = PlotlyJS.scatter(;y=acorr_res, marker_color="blue", name="ACF", line_width=0.8)
upconf_trace = PlotlyJS.scatter(; y= ones(Float64, length(acorr_res)).*upconf, marker_color="green", line_width=0.8, name="Upper CI")

data = [lconf_trace, acorr_trace, upconf_trace]
#plot
PlotlyJS.plot(data, layout)


Out[96]:
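
Beyond checking whether the residual ACF stays inside the CI band, whiteness can be quantified with a portmanteau statistic. A minimal Ljung-Box sketch (not part of the original notebook; under the null of white residuals the statistic is approximately χ²-distributed with h − p degrees of freedom):

```julia
# Ljung-Box statistic: Q = N(N+2) · Σ_{k=1}^{h} ρ̂_k² / (N - k).
# Large Q relative to the χ² quantile rejects whiteness.
function ljung_box(ρ::AbstractVector{<:Real}, N::Integer)
    h = length(ρ)
    N * (N + 2) * sum(ρ[k]^2 / (N - k) for k in 1:h)
end

# usage (hypothetical): ljung_box(acorr_res[1:20], 1024)
```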

In [ ]: