Solutions for http://quant-econ.net/rational_expectations.html

In [1]:

```
%matplotlib inline
```

Common imports for the solutions

In [2]:

```
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
```

We'll use the LQ class from quantecon

In [4]:

```
from quantecon import LQ
```

To map a problem into a discounted optimal linear control problem, we need to define

state vector $x_t$ and control vector $u_t$

matrices $A, B, Q, R$ that define preferences and the law of motion for the state

For the state and control vectors we choose

$$ x_t = \begin{bmatrix} y_t \\ Y_t \\ 1 \end{bmatrix}, \qquad u_t = y_{t+1} - y_{t} $$

For $A, B, Q, R$ we set

$$ A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \kappa_1 & \kappa_0 \\ 0 & 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} , \quad R = \begin{bmatrix} 0 & a_1/2 & -a_0/2 \\ a_1/2 & 0 & 0 \\ -a_0/2 & 0 & 0 \end{bmatrix}, \quad Q = \gamma / 2 $$

By multiplying out you can confirm that

$x_t' R x_t + u_t' Q u_t = - r_t$

$x_{t+1} = A x_t + B u_t$

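As a quick numerical sanity check of the first identity, recall that in the lecture the firm's one-period payoff is $r_t = (a_0 - a_1 Y_t) y_t - \frac{\gamma}{2} u_t^2$. The sketch below confirms $x_t' R x_t + u_t' Q u_t = -r_t$ at arbitrary test values of $y_t$, $Y_t$ and $u_t$ (the specific numbers are chosen purely for illustration):

```python
import numpy as np

# Parameter values from the solutions; y, Y, u are arbitrary test values
a0, a1, gamma = 100, 0.05, 10.0
y, Y, u = 2.0, 3.0, 0.5

x = np.array([y, Y, 1.0])                      # state vector [y_t, Y_t, 1]
R = np.array([[0, a1/2, -a0/2],
              [a1/2, 0, 0],
              [-a0/2, 0, 0]])
Q = gamma / 2

lhs = x @ R @ x + Q * u**2                     # x_t' R x_t + u_t' Q u_t
r = (a0 - a1 * Y) * y - (gamma / 2) * u**2     # one-period payoff r_t
print(np.isclose(lhs, -r))                     # True
```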
We'll use the module `lqcontrol.py` to solve the firm's problem at the stated parameter values

This will return an LQ policy $F$ with the interpretation $u_t = - F x_t$, or

$$ y_{t+1} - y_t = - F_0 y_t - F_1 Y_t - F_2 $$

Matching parameters with $y_{t+1} = h_0 + h_1 y_t + h_2 Y_t$ leads to

$$ h_0 = -F_2, \quad h_1 = 1 - F_0, \quad h_2 = -F_1 $$

Here's our solution

In [5]:

```
# == Model parameters == #
a0 = 100
a1 = 0.05
beta = 0.95
gamma = 10.0

# == Beliefs == #
kappa0 = 95.5
kappa1 = 0.95

# == Formulate the LQ problem == #
A = np.array([[1, 0, 0], [0, kappa1, kappa0], [0, 0, 1]])
B = np.array([[1], [0], [0]])
R = np.array([[0, a1/2, -a0/2], [a1/2, 0, 0], [-a0/2, 0, 0]])
Q = 0.5 * gamma

# == Solve for the optimal policy == #
lq = LQ(Q, R, A, B, beta=beta)
P, F, d = lq.stationary_values()
F = F.flatten()
out1 = "F = [{0:.3f}, {1:.3f}, {2:.3f}]".format(F[0], F[1], F[2])
h0, h1, h2 = -F[2], 1 - F[0], -F[1]
out2 = "(h0, h1, h2) = ({0:.3f}, {1:.3f}, {2:.3f})".format(h0, h1, h2)
print(out1)
print(out2)
```

The implication is that

$$ y_{t+1} = 96.949 + y_t - 0.046 \, Y_t $$

For the case $n > 1$, recall that $Y_t = n y_t$, which, combined with the previous equation, yields

$$ Y_{t+1} = n \left( 96.949 + \frac{Y_t}{n} - 0.046 \, Y_t \right) = n \, 96.949 + (1 - n \, 0.046) \, Y_t $$

To determine whether a $(\kappa_0, \kappa_1)$ pair forms the aggregate law of motion component of a rational expectations equilibrium, we can proceed as follows:

1. Determine the corresponding firm law of motion $y_{t+1} = h_0 + h_1 y_t + h_2 Y_t$

2. Test whether the associated aggregate law $Y_{t+1} = n h(Y_t/n, Y_t)$ evaluates to $Y_{t+1} = \kappa_0 + \kappa_1 Y_t$

In the second step, since $n = 1$ here, we can use $Y_t = n y_t = y_t$, so that $Y_{t+1} = n h(Y_t/n, Y_t)$ becomes

$$ Y_{t+1} = h(Y_t, Y_t) = h_0 + (h_1 + h_2) Y_t $$

Hence to test the second step we can test $\kappa_0 = h_0$ and $\kappa_1 = h_1 + h_2$

The following code implements this test

In [6]:

```
candidates = (
    (94.0886298678, 0.923409232937),
    (93.2119845412, 0.984323478873),
    (95.0818452486, 0.952459076301)
)

for kappa0, kappa1 in candidates:
    # == Form the associated law of motion == #
    A = np.array([[1, 0, 0], [0, kappa1, kappa0], [0, 0, 1]])
    # == Solve the LQ problem for the firm == #
    lq = LQ(Q, R, A, B, beta=beta)
    P, F, d = lq.stationary_values()
    F = F.flatten()
    h0, h1, h2 = -F[2], 1 - F[0], -F[1]
    # == Test the equilibrium condition == #
    if np.allclose((kappa0, kappa1), (h0, h1 + h2)):
        print('Equilibrium pair =', kappa0, kappa1)
        print('(h0, h1, h2) = ', h0, h1, h2)
        break
```

The output tells us that the answer is pair (iii), which implies $(h_0, h_1, h_2) = (95.0819, 1.0000, -0.0475)$

(Notice we use `np.allclose` to test near-equality of floating point numbers, since exact equality is too strict)

Regarding the iterative algorithm, one could loop from a given $(\kappa_0, \kappa_1)$ pair to the associated firm law and then to a new $(\kappa_0, \kappa_1)$ pair

This amounts to implementing the operator $\Phi$ described in the lecture

(There is in general no guarantee that this iterative process will converge to a rational expectations equilibrium)

We are asked to write the planner problem as an LQ problem

For the state and control vectors we choose

$$ x_t = \begin{bmatrix} Y_t \\ 1 \end{bmatrix}, \quad u_t = Y_{t+1} - Y_{t} $$

For the LQ matrices we set

$$ A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad R = \begin{bmatrix} a_1/2 & -a_0/2 \\ -a_0/2 & 0 \end{bmatrix}, \quad Q = \gamma / 2 $$

By multiplying out you can confirm that

$x_t' R x_t + u_t' Q u_t = - s(Y_t, Y_{t+1})$

$x_{t+1} = A x_t + B u_t$

By obtaining the optimal policy and using $u_t = - F x_t$ or

$$ Y_{t+1} - Y_t = -F_0 Y_t - F_1 $$

we can obtain the implied aggregate law of motion via $\kappa_0 = -F_1$ and $\kappa_1 = 1 - F_0$

The Python code to solve this problem is below:

In [7]:

```
# == Formulate the planner's LQ problem == #
A = np.array([[1, 0], [0, 1]])
B = np.array([[1], [0]])
R = np.array([[a1 / 2, -a0 / 2], [-a0 / 2, 0]])
Q = gamma / 2

# == Solve for the optimal policy == #
lq = LQ(Q, R, A, B, beta=beta)
P, F, d = lq.stationary_values()

# == Print the results == #
F = F.flatten()
kappa0, kappa1 = -F[1], 1 - F[0]
print(kappa0, kappa1)
```

The monopolist's LQ problem is almost identical to the planner's problem from the previous exercise, except that

$$ R = \begin{bmatrix} a_1 & -a_0/2 \\ -a_0/2 & 0 \end{bmatrix} $$

The problem can be solved as follows

In [8]:

```
A = np.array([[1, 0], [0, 1]])
B = np.array([[1], [0]])
R = np.array([[a1, -a0 / 2], [-a0 / 2, 0]])
Q = gamma / 2
lq = LQ(Q, R, A, B, beta=beta)
P, F, d = lq.stationary_values()
F = F.flatten()
m0, m1 = -F[1], 1 - F[0]
print(m0, m1)
```

We see that the law of motion for the monopolist is approximately $Y_{t+1} = 73.4729 + 0.9265 Y_t$

In the rational expectations case the law of motion was approximately $Y_{t+1} = 95.0818 + 0.9525 Y_t$

One way to compare these two laws of motion is by their fixed points, which give long run equilibrium output in each case

For laws of the form $Y_{t+1} = c_0 + c_1 Y_t$, the fixed point is $c_0 / (1 - c_1)$

If you crunch the numbers, you will see that the monopolist adopts a lower long run quantity than obtained by the competitive market, implying a higher market price
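Crunching them is a one-liner per case, using the coefficients reported above:

```python
# Long run equilibrium output: the fixed point of Y' = c0 + c1 * Y
# Coefficients taken from the laws of motion computed above
laws = {"monopolist": (73.4729, 0.9265),
        "rational expectations": (95.0818, 0.9525)}

for name, (c0, c1) in laws.items():
    print(name, c0 / (1 - c1))   # ~999.6 vs ~2001.7
```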

This is analogous to the elementary static-case results
