Modelling On-the-Job Search

The implementation draws heavily from the material provided on the Quantitative Economics website.

Model Features:

  1. Job-specific human capital accumulation combined with on-the-job search
  2. Infinite horizon dynamic programming with one state variable and two controls

Model Setup:

  1. Let $x_{t}$ denote the time-$t$ job-specific human capital of a worker employed at a given firm

  2. Let $w_{t}$ denote current wages

  3. Let $w_{t}=x_{t}(1-s_{t}-\phi_{t})$ where

    $\phi_{t}$ is investment in job-specific human capital for the current role

    $s_{t}$ is search effort, devoted to obtaining new offers from other firms

  4. If the worker remains in the current job, evolution of $\{x_{t}\}$ is given by $x_{t+1}=G(x_{t},\phi_{t})$

  5. When search effort at t is $s_{t}$, the worker receives a new job offer with probability $\pi(s_{t})\in[0,1]$

  6. The value of the offer is $U_{t+1}$, where $\{U_{t}\}$ is iid with common distribution $F$

  7. The worker has the right to reject the current offer and continue with the existing job

  8. In particular, $x_{t+1}=U_{t+1}$ if the worker accepts, and $x_{t+1}=G(x_{t},\phi_{t})$ if the worker rejects. Since the worker always keeps the better option, an offer is accepted exactly when $U_{t+1} > G(x_{t},\phi_{t})$; see the sketch below.
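
To make the law of motion concrete, here is a minimal one-period sketch in Python (the parameter values anticipate the parameterization given below; the state, control, and offer draw are arbitrary illustrative numbers):

# one-period transition under the accept/reject rule, for illustration only
A, alpha = 1.4, 0.6                       # technology parameters (see below)
G = lambda x, phi: A * (x * phi)**alpha   # evolution on the current job
x, phi = 1.0, 0.3                         # arbitrary state and investment
u = 0.7                                   # a drawn offer U_{t+1}
x_next = max(G(x, phi), u)                # the worker keeps the better option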

The Bellman Equation:

$$V(x) = \max_{s+\phi\leq 1}\left\{x(1-s-\phi)+\beta(1-\pi(s))V\left(G(x,\phi)\right)+\beta\pi(s)\int V\left(\max\{G(x,\phi),u\}\right)F(du)\right\}$$

The $\max$ inside the integral reflects the worker's option to reject: an arriving offer $u$ is taken only when it beats the continuation state $G(x,\phi)$.
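
The integral is an expectation over new offers. In the implementation below it is approximated by fixed-order Gaussian quadrature against the offer density; here is a minimal sketch of that step, with a stand-in value function and an illustrative value of $G(x,\phi)$:

import numpy as np
import scipy.stats as stats
from scipy.integrate import fixed_quad

F = stats.beta(2, 2)                              # offer distribution (see below)
V = lambda y: np.log(1 + y)                       # stand-in value function
Gxphi = 0.8                                       # illustrative value of G(x, phi)
h = lambda u: V(np.maximum(Gxphi, u)) * F.pdf(u)  # integrand V(max{G, u}) f(u)
expectation, _ = fixed_quad(h, F.ppf(0.005), F.ppf(0.995))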

Parameterizations:

$$G(x,\phi) = A(x\phi)^{\alpha}, \qquad \pi(s) = \sqrt{s}, \qquad F = \text{Beta}(2,2)$$

where $A = 1.4$, $\alpha = 0.6$, and $\beta = 0.96$.

Roadmap:

  1. Construct the Bellman operator
  2. Compute the fixed point by value function iteration

Load Resources


In [1]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
from scipy.optimize import minimize
from scipy.integrate import fixed_quad as integrate
import time

Parameterization


In [2]:
# production function
A = 1.4
alpha = 0.6
G = lambda x, phi: A*(x*phi)**alpha

# discount factor
beta = 0.96

# tolerance
epsilon = 1e-4

# minimization method
method = "COBYLA"

# probability of having a new job offer (a function of search effort)
pi = np.sqrt

# distribution of the new job offer
F = stats.beta(2,2)

# x_grid: the upper bound covers both the highest sustainable capital level
# (the fixed point of x -> A*x**alpha, i.e. G with phi = 1) and essentially
# the full support of the offer distribution
grid_size = 25
grid_max = max(A**(1/(1-alpha)), F.ppf(1-epsilon))
x_grid = np.linspace(epsilon, grid_max, grid_size)
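
The solver below stores value functions as arrays of values on x_grid and evaluates them off the grid by piecewise-linear interpolation via np.interp. A quick sketch of that mechanism, assuming the cell above has run (V_demo is an arbitrary stand-in):

V_demo = np.log(1 + x_grid)                  # arbitrary values on the grid
Vf = lambda x: np.interp(x, x_grid, V_demo)  # piecewise-linear interpolant
Vf(1.2345)                                   # evaluation between grid points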

Bellman Operator


In [3]:
def bellman_operator(V, brute_force=False, return_policies=False):
    """
    
    Parameters 
    ----------
    V: array_like(float)
       Array representing an approximate value function
    brute_force: bool, optional(default=False)
                 If True, grid search rather than the SciPy solver is used
                 at each maximization step.
    return_policies: bool, optional(default=False)
                     If True, return the greedy policies (s_policy, phi_policy)
                     computed from V instead of the updated value function TV.
    
    Returns
    -------
    new_V: array_like(float)
           The updated value function TV, as an array representing the
           values TV(x) over x in x_grid.
    s_policy, phi_policy: array_like(float)
              The greedy policies computed from V. Only returned if
              return_policies == True.
    """
    
    # set up
    Vf = lambda x: np.interp(x, x_grid, V)  # piecewise-linear interpolation
    N = len(x_grid)
    new_V, s_policy, phi_policy = np.empty(N), np.empty(N), np.empty(N)
    a, b = F.ppf(0.005), F.ppf(0.995)  # quadrature endpoints covering nearly all offer mass
    c1 = lambda z: 1 - sum(z) # used to enforce s+phi <= 1
    c2 = lambda z: z[0] - epsilon # used to enforce s >= epsilon
    c3 = lambda z: z[1] - epsilon # used to enforce phi >= epsilon
    constraints = [{"type":"ineq","fun":i} for i in [c1, c2, c3]]
    guess = (0.2, 0.2)
    
    # solve r.h.s. of Bellman equation 
    
    for i, x in enumerate(x_grid):
        
        # set up objective function
        def w(z):
            s, phi = z
            h = lambda u: Vf(np.maximum(G(x,phi),u))*F.pdf(u)
            integral, err = integrate(h,a,b)
            q = pi(s)*integral + (1-pi(s))*Vf(G(x,phi))
            # minus because we minimize
            return -x*(1-s-phi) - beta*q
        
        # either use SciPy solver
        if not brute_force:
            max_s, max_phi = minimize(w, guess, constraints=constraints, method=method)["x"]
            max_val = -w((max_s,max_phi))
        
        # or search on a grid
        else:
            search_grid = np.linspace(epsilon, 1.0, 15)
            max_val = -1.0
            for s in search_grid:
                for phi in search_grid:
                    current_val = -w((s,phi)) if s + phi <= 1.0 else -1.0
                    if current_val > max_val:
                        max_val, max_s, max_phi = current_val, s, phi
        
        # store results
        new_V[i] = max_val
        s_policy[i], phi_policy[i] = max_s, max_phi
    
    if return_policies:
        return s_policy, phi_policy
    else:
        return new_V
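
Before iterating to a fixed point, the operator can be sanity-checked by applying it once to a simple initial guess. A sketch, assuming the cells above have run (the SciPy solver path is used by default):

v0 = x_grid * 0.5            # linear initial guess
v1 = bellman_operator(v0)    # one application of the operator
np.max(np.abs(v1 - v0))      # sup-norm distance moved in one step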

Value Function Iterations


In [4]:
def compute_fixed_point(T, v, error_tol=1e-4, max_iter=50, verbose=1, print_skip=5, *args, **kwargs):
    """
    Computes and returns T^k v, an approximate fixed point
    Here T is an operator, v is an initial condition and k is the number of iterates.
    Provided that T is a contraction mapping or similar, T^k v will be an approximation to the fixed point.
    
    Parameters
    ----------
    T: callable
       function that acts on v
    v: object
       An object such that T(v) is defined
    error_tol: scalar(float), optional(default=1e-4)
               Error tolerance
    max_iter: scalar(int), optional(default=50)
              Maximum number of iterations
    verbose: bool, optional(default=1)
             If True, then print current error at each iterate.
    print_skip: scalar(int), optional(default=5)
             Number of iterations between progress reports when verbose is True.
    args, kwargs:
             Other arguments and keyword arguments that are passed directly to the 
             function T each time it is called.
    
    Returns
    -------
    v: object
       The approximate fixed point
    """
    
    iterate = 0
    error = error_tol + 1
    
    if verbose:
        start_time = time.time()
        msg = "{i:<11}{d:<10}{t:<10}".format(i="Iteration",
                                 d="Distance",
                                 t="Elapsed (seconds)") # < means left alighned
        print(msg)
        print("-"*len(msg))
    
    while iterate < max_iter and error > error_tol:
        new_v = T(v, *args, **kwargs)
        iterate += 1
        error = np.max(np.abs(new_v - v))
        
        if verbose and iterate % print_skip == 0:
            etime = time.time() - start_time
            msg = "{i:<11}{d:<10.3e}{t:<10.3e}".format(i=iterate,d=error,t=etime)
            print(msg)
        v = new_v
    return v
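
Since compute_fixed_point assumes nothing about T beyond T(v) being defined, it can be sanity-checked on a toy contraction with a known fixed point. T_demo here is a hypothetical example, not part of the model:

T_demo = lambda v: 0.5 * v + 1.0                                  # fixed point is 2
v_star = compute_fixed_point(T_demo, np.array([0.0]), verbose=0)  # approx [2.]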

Solving the Model


In [5]:
# starting value
v_init = x_grid * 0.5

# determine the fixed point using the SciPy minimizer
V = compute_fixed_point(bellman_operator, v_init)
print(V[0:5])


-c:4: RuntimeWarning: invalid value encountered in double_scalars
-c:44: RuntimeWarning: invalid value encountered in sqrt
Iteration  Distance  Elapsed (seconds)
--------------------------------------
5          3.469e-01 1.731e+01 
10         2.959e-01 2.558e+01 
15         2.389e-01 3.413e+01 
20         2.000e-01 4.159e+01 
25         1.660e-01 4.965e+01 
30         1.399e-01 5.853e+01 
35         1.124e-01 6.855e+01 
40         9.336e-02 7.878e+01 
45         8.416e-02 9.000e+01 
50         7.681e-02 9.972e+01 
[ 8.34485292  8.34485292  8.34485424  8.62836112  8.73269308]

In [6]:
# starting value
v_init = x_grid * 0.5

# determine the fixed point using grid search
V = compute_fixed_point(bellman_operator, v_init, brute_force=True)
print(V[0:5])


Iteration  Distance  Elapsed (seconds)
--------------------------------------
5          3.416e-01 1.204e+01 
10         2.783e-01 2.180e+01 
15         2.268e-01 3.150e+01 
20         1.849e-01 4.554e+01 
25         1.506e-01 5.857e+01 
30         1.228e-01 6.945e+01 
35         1.001e-01 8.295e+01 
40         8.154e-02 9.691e+01 
45         6.645e-02 1.083e+02 
50         5.416e-02 1.186e+02 
[ 8.37111027  8.37800488  8.50121498  8.64509141  8.76706845]

Plotting


In [7]:
# determine optimal policy
s_policy, phi_policy = bellman_operator(V, return_policies=True)

# === plot policies === #
fig, ax = plt.subplots()
ax.set_xlim(0, max(x_grid))
ax.set_ylim(-0.1, 1.1)
ax.plot(x_grid, phi_policy, 'b-', label='phi')
ax.plot(x_grid, s_policy, 'g-', label='s')
ax.set_xlabel("x")
ax.legend()
plt.show()
