In [3]:
# RESEARCH IN PYTHON: INSTRUMENTAL VARIABLES ESTIMATION
# by J. NATHAN MATIAS March 18, 2015
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
This section is taken from Chapter 10 of Methods Matter by Richard Murnane and John Willett. The descriptions are taken from Wikipedia, for copyright reasons.
In statistics, econometrics, epidemiology and related disciplines, the method of instrumental variables (IV) is used to estimate causal relationships when controlled experiments are not feasible or when a treatment is not successfully delivered to every unit in a randomized experiment.
In linear models, there are two main requirements for using an IV:
1. The instrument must be correlated with the endogenous explanatory variable (here, COLLEGE).
2. The instrument cannot be correlated with the error term of the outcome equation; it may affect the outcome (REGISTER) only through the explanatory variable.
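Formally (a standard formulation, not quoted from Methods Matter), for the structural model y = βx + ε with instrument z:
Cov(z, x) ≠ 0 (relevance: the instrument predicts the endogenous predictor)
Cov(z, ε) = 0 (exclusion: the instrument is unrelated to the error term)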
Can we use college attainment (COLLEGE) to predict the probability of civic engagement (REGISTER)? College attainment is not randomized, and the arrow of causality may move in the opposite direction, so all we can do with standard regression is to establish a correlation.
In this example, we use the distance between the student's school and a community college (DISTANCE) as an instrumental variable to estimate a causal relationship. This is possible only if DISTANCE is related to college attainment and NOT related to the residuals of regressing REGISTER on COLLEGE.
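As a quick informal check (my own sketch, not from the textbook; run it after dee_df has been loaded by the download cell below), we can verify the relevance condition directly and at least inspect the correlation between the instrument and the naive model's residuals; exogeneity itself is not directly testable:
# hypothetical sketch: informal instrument checks (assumes dee_df exists)
naive = smf.ols(formula="register ~ college", data=dee_df).fit()
# relevance: DISTANCE should be correlated with COLLEGE
print dee_df['distance'].corr(dee_df['college'])
# exogeneity is untestable, but a large correlation between DISTANCE and
# the naive residuals would be a warning sign
print np.corrcoef(dee_df['distance'], naive.resid)[0, 1]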
The Python code listed here roughly parallels the code in the textbook example for Methods Matter Chapter 10. If you're curious about how to do a similar example in R, check out "A Simple Instrumental Variables Problem" by Adam Hyland on R-Bloggers or Ani Katchova's "Instrumental Variables in R" video on YouTube.
In [2]:
# THINGS TO IMPORT
# This is a baseline set of libraries I import by default if I'm rushed for time.
import codecs # load UTF-8 Content
import json # load JSON files
import pandas as pd # Pandas handles dataframes
import numpy as np # Numpy handles lots of basic maths operations
import matplotlib.pyplot as plt # Matplotlib for plotting
import seaborn as sns # Seaborn for beautiful plots
from dateutil import * # I prefer dateutil for parsing dates
import math # transformations
import statsmodels.formula.api as smf # for doing statistical regression
import statsmodels.api as sm # access to the wider statsmodels library, including R datasets
from collections import Counter # Counter is useful for grouping and counting
import scipy.stats # for scipy.stats.chi2_contingency below
In [29]:
import urllib2
import os.path
if not os.path.isfile("dee.dta"):
    response = urllib2.urlopen("http://www.ats.ucla.edu/stat/stata/examples/methods_matter/chapter10/dee.dta")
    if response.getcode() == 200:
        f = open("dee.dta", "wb") # binary mode: .dta is a binary format
        f.write(response.read())
        f.close()
dee_df = pd.read_stata("dee.dta")
In [30]:
dee_df[['register','college', 'distance']].describe()
Out[30]:
In [31]:
print pd.crosstab(dee_df.register, dee_df.college)
chi2 = scipy.stats.chi2_contingency(pd.crosstab(dee_df.register, dee_df.college))
print "chi2: %(c)0.03f" % {"c":chi2[0]} # %d would truncate the statistic
print "p: %(p)0.03f" % {"p":chi2[1]}
print "df: %(df)d" % {"df":chi2[2]} # degrees of freedom are integral
print "expected:"
print chi2[3]
In [32]:
sns.corrplot(dee_df[['register','college','distance']])
Out[32]:
In [33]:
result = smf.ols(formula = "register ~ college", data = dee_df).fit()
print result.summary()
In two-stage least squares regression, we regress COLLEGE on DISTANCE and use the predictions from that model as the predictors for REGISTER.
In [34]:
print "=============================================================================="
print " FIRST STAGE"
print "=============================================================================="
result = smf.ols(formula = "college ~ distance", data = dee_df).fit()
print result.summary()
dee_df['college_fitted'] = result.predict()
print
print
print "=============================================================================="
print " SECOND STAGE"
print "=============================================================================="
result = smf.ols(formula = "register ~ college_fitted", data=dee_df).fit()
print result.summary()
^^^^^^ Not sure what's going on with the R2 statistic here (it's 0.001 here, versus 0.022 in the published example), although everything else matches the Stata output. One likely explanation: when 2SLS is run by hand like this, the second-stage R2 is computed against the first-stage fitted values rather than the observed COLLEGE values, so it isn't comparable to what a dedicated IV routine reports.
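A minimal sketch (my own check, assuming the explanation above is the source of the discrepancy) of recomputing R2 the way dedicated IV routines typically do, by applying the second-stage coefficients to the observed COLLEGE values:
# hypothetical check, not in the textbook: rebuild the residuals with the
# OBSERVED college values instead of the first-stage fitted values
b = result.params
pred = b['Intercept'] + b['college_fitted'] * dee_df['college']
resid = dee_df['register'] - pred
ss_res = (resid ** 2).sum()
ss_tot = ((dee_df['register'] - dee_df['register'].mean()) ** 2).sum()
print "re-computed R2: %0.3f" % (1 - ss_res / ss_tot)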
In the case of the covariate of race/ethnicity, we expect that there might be a relationship between race/ethnicity and distance to a community college, as well as a relationship between race/ethnicity and voter registration.
While race/ethnicity fails the test for an instrumental variable, it can still be included as a covariate in a multiple regression model. In such cases, it is essential to include the covariates at both stages of the two-stage regression.
In [35]:
sns.corrplot(dee_df[['register','college','distance', 'black','hispanic','otherrace']])
Out[35]:
In [36]:
print "=============================================================================="
print " FIRST STAGE"
print "=============================================================================="
result = smf.ols(formula = "college ~ distance + black + hispanic + otherrace", data = dee_df).fit()
print result.summary()
dee_df['college_fitted'] = result.predict()
print
print
print "=============================================================================="
print " SECOND STAGE"
print "=============================================================================="
result = smf.ols(formula = "register ~ college_fitted + black + hispanic + otherrace", data=dee_df).fit()
print result.summary()
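As an optional cross-check (my own addition, not in the textbook), statsmodels ships an experimental two-stage least-squares estimator in its sandbox; assuming its IV2SLS(endog, exog, instrument) API, the covariate model above can be fit in one step. Note that the instrument matrix must also carry the exogenous covariates:
# sketch using statsmodels' experimental sandbox estimator; treat the API
# as an assumption, since the sandbox can change between versions
from statsmodels.sandbox.regression.gmm import IV2SLS
exog = sm.add_constant(dee_df[['college', 'black', 'hispanic', 'otherrace']])
instr = sm.add_constant(dee_df[['distance', 'black', 'hispanic', 'otherrace']])
iv_result = IV2SLS(dee_df['register'], exog, instrument=instr).fit()
print iv_result.summary()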
In this case, we explore whether interactions between college and race/ethnicity are significant predictors of voter registration. Here, it's important to meet the "rank condition": that "for every endogenous predictor included in the second stage, there must be at least one instrument included in the first stage."
To do this, we need to create a series of stage-one instruments: one for the main effect and one for each interaction.
In [37]:
print "=============================================================================="
print " FIRST STAGE"
print "=============================================================================="
# generate the stage one main effect instrument
result = smf.ols(formula = "college ~ distance + black + hispanic + otherrace +" +
"distance:black + distance:hispanic + distance:otherrace", data = dee_df).fit()
dee_df['college_fitted'] = result.predict()
print result.summary()
# generate the stage one interaction instrument for distance:black
# note that we have DROPPED the irrelevant terms.
# The full form for each interaction, which gives the exact same result, is:
# result = smf.ols(formula = "college:black ~ distance + black + hispanic + otherrace +" +
# "distance:black + distance:hispanic + distance:otherrace", data = dee_df).fit()
result = smf.ols(formula = "college:black ~ distance + black + distance:black", data = dee_df).fit()
dee_df['collegeXblack'] = result.predict()
# generate the stage one interaction instrument for distance:hispanic
result = smf.ols(formula = "college:hispanic ~ distance + hispanic + distance:hispanic", data = dee_df).fit()
dee_df['collegeXhispanic'] = result.predict()
# generate the stage one interaction instrument for distance:otherrace
result = smf.ols(formula = "college:otherrace ~ distance + otherrace + distance:otherrace", data = dee_df).fit()
dee_df['collegeXotherrace'] = result.predict()
# generate the final model, that includes these interactions as predictors
result = smf.ols(formula = "register ~ college_fitted + black + hispanic + otherrace +" +
"collegeXblack + collegeXhispanic + collegeXotherrace", data = dee_df).fit()
print result.summary()
^^^ In this particular case, we find no significant interactions, so we fall back on our previous model, which simply included race/ethnicity as a covariate.
In this example, we use a logistic model with a two-stage least-squares regression. NOTE: This is not attempted in the textbook example, so I cannot be completely certain about this, unlike the above results.
In [38]:
print "=============================================================================="
print " FIRST STAGE"
print "=============================================================================="
result = smf.glm(formula = "college ~ distance + black + hispanic + otherrace",
data=dee_df,
family=sm.families.Binomial()).fit()
print result.summary()
dee_df['college_fitted'] = result.predict()
print
print
print "=============================================================================="
print " SECOND STAGE"
print "=============================================================================="
result = smf.glm(formula = "register ~ college_fitted + black + hispanic + otherrace",
data=dee_df,
family=sm.families.Binomial()).fit()
print result.summary()
In this example, we use a probit model with a two-stage least-squares regression. NOTE: This is not attempted in the textbook example, so I cannot be completely certain about this, unlike the above results.
In [39]:
import patsy
print "=============================================================================="
print " FIRST STAGE"
print "=============================================================================="
a,b = patsy.dmatrices("college ~ distance + black + hispanic + otherrace",
dee_df,return_type="dataframe")
result = sm.Probit(a,b).fit()
print result.summary()
dee_df['college_fitted'] = result.predict()
print
print
print "=============================================================================="
print " SECOND STAGE"
print "=============================================================================="
a,b = patsy.dmatrices("register ~ college_fitted + black + hispanic + otherrace",
dee_df,return_type="dataframe")
result = sm.Probit(a,b).fit()
print result.summary()