Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
In [1]:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import pandas as pd
import random
import thinkstats2
import thinkplot
In [2]:
import first
live, firsts, others = first.MakeFrames()
Here's birth weight as a function of mother's age (which we saw in the previous chapter).
In [3]:
import statsmodels.formula.api as smf
formula = 'totalwgt_lb ~ agepreg'
model = smf.ols(formula, data=live)
results = model.fit()
results.summary()
Out[3]:
We can extract the parameters.
In [4]:
inter = results.params['Intercept']
slope = results.params['agepreg']
inter, slope
Out[4]:
And the p-value of the slope estimate.
In [5]:
slope_pvalue = results.pvalues['agepreg']
slope_pvalue
Out[5]:
And the coefficient of determination.
In [6]:
results.rsquared
Out[6]:
The difference in birth weight between first babies and others.
In [7]:
diff_weight = firsts.totalwgt_lb.mean() - others.totalwgt_lb.mean()
diff_weight
Out[7]:
The difference in age between mothers of first babies and others.
In [8]:
diff_age = firsts.agepreg.mean() - others.agepreg.mean()
diff_age
Out[8]:
The age difference plausibly explains about half of the difference in weight.
In [9]:
slope * diff_age
Out[9]:
Running a single regression with a categorical variable, isfirst:
In [10]:
live['isfirst'] = live.birthord == 1
formula = 'totalwgt_lb ~ isfirst'
results = smf.ols(formula, data=live).fit()
results.summary()
Out[10]:
Now finally running a multiple regression:
In [11]:
formula = 'totalwgt_lb ~ isfirst + agepreg'
results = smf.ols(formula, data=live).fit()
results.summary()
Out[11]:
As expected, when we control for mother's age, the apparent difference due to isfirst
is cut in half.
If we add age squared, we can control for a quadratic relationship between age and weight.
In [12]:
live['agepreg2'] = live.agepreg**2
formula = 'totalwgt_lb ~ isfirst + agepreg + agepreg2'
results = smf.ols(formula, data=live).fit()
results.summary()
Out[12]:
When we do that, the apparent effect of isfirst
gets even smaller, and is no longer statistically significant.
These results suggest that the apparent difference in weight between first babies and others might be explained, at least in part, by the difference in mothers' ages.
In [13]:
import nsfg
live = live[live.prglngth>30]
resp = nsfg.ReadFemResp()
resp.index = resp.caseid
join = live.join(resp, on='caseid', rsuffix='_r')
And we can search for variables with explanatory power.
Because we don't clean most of the variables, we are probably missing some good ones.
In [14]:
import patsy
def GoMining(df):
"""Searches for variables that predict birth weight.
df: DataFrame of pregnancy records
returns: list of (rsquared, variable name) pairs
"""
variables = []
for name in df.columns:
try:
if df[name].var() < 1e-7:
continue
formula = 'totalwgt_lb ~ agepreg + ' + name
# The following seems to be required in some environments
# formula = formula.encode('ascii')
model = smf.ols(formula, data=df)
if model.nobs < len(df)/2:
continue
results = model.fit()
except (ValueError, TypeError):
continue
variables.append((results.rsquared, name))
return variables
In [15]:
variables = GoMining(join)
The following functions report the variables with the highest values of $R^2$.
In [16]:
import re
def ReadVariables():
"""Reads Stata dictionary files for NSFG data.
    returns: DataFrame that maps variable names to descriptions
"""
vars1 = thinkstats2.ReadStataDct('2002FemPreg.dct').variables
vars2 = thinkstats2.ReadStataDct('2002FemResp.dct').variables
    all_vars = pd.concat([vars1, vars2])
all_vars.index = all_vars.name
return all_vars
def MiningReport(variables, n=30):
"""Prints variables with the highest R^2.
    variables: list of (R^2, variable name) pairs
n: number of pairs to print
"""
all_vars = ReadVariables()
variables.sort(reverse=True)
for r2, name in variables[:n]:
key = re.sub('_r$', '', name)
try:
desc = all_vars.loc[key].desc
if isinstance(desc, pd.Series):
                desc = desc.iloc[0]
print(name, r2, desc)
except (KeyError, IndexError):
print(name, r2)
Some of the variables that do well are not useful for prediction because they are not known ahead of time.
In [17]:
MiningReport(variables)
Combining the variables that seem to have the most explanatory power.
In [18]:
formula = ('totalwgt_lb ~ agepreg + C(race) + babysex==1 + '
'nbrnaliv>1 + paydu==1 + totincr')
results = smf.ols(formula, data=join).fit()
results.summary()
Out[18]:
In [19]:
y = np.array([0, 1, 0, 1])
x1 = np.array([0, 0, 0, 1])
x2 = np.array([0, 1, 1, 1])
According to the logit model, the log odds for the $i$th element of $y$ is
$\log o = \beta_0 + \beta_1 x_1 + \beta_2 x_2 $
So let's start with an arbitrary guess about the elements of $\beta$:
In [20]:
beta = [-1.5, 2.8, 1.1]
Plugging in the model, we get log odds.
In [21]:
log_o = beta[0] + beta[1] * x1 + beta[2] * x2
log_o
Out[21]:
Which we can convert to odds.
In [22]:
o = np.exp(log_o)
o
Out[22]:
And then convert to probabilities.
In [23]:
p = o / (o+1)
p
Out[23]:
The likelihoods of the actual outcomes are $p$ where $y$ is 1 and $1-p$ where $y$ is 0.
In [24]:
likes = np.where(y, p, 1-p)
likes
Out[24]:
The likelihood of $y$ given $\beta$ is the product of likes:
In [25]:
like = np.prod(likes)
like
Out[25]:
Logistic regression works by searching for the values in $\beta$ that maximize like.
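To make the search concrete, here is a minimal sketch (not how StatsModels does it internally) that uses scipy.optimize to find the $\beta$ that maximizes the likelihood of the toy data defined above; it minimizes the negative log-likelihood, which is equivalent and numerically more stable.
from scipy.optimize import minimize
def neg_log_like(beta, y, x1, x2):
    """Negative log-likelihood of the toy logit model."""
    log_o = beta[0] + beta[1] * x1 + beta[2] * x2
    p = 1 / (1 + np.exp(-log_o))
    return -np.sum(np.where(y, np.log(p), np.log(1 - p)))
res = minimize(neg_log_like, x0=[0.0, 0.0, 0.0], args=(y, x1, x2))
res.x, np.exp(-res.fun)  # estimated beta and the maximized likelihood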
Here's an example using variables in the NSFG respondent file to predict whether a baby will be a boy or a girl.
In [26]:
import first
live, firsts, others = first.MakeFrames()
live = live[live.prglngth>30]
live['boy'] = (live.babysex==1).astype(int)
The mother's age seems to have a small effect.
In [27]:
model = smf.logit('boy ~ agepreg', data=live)
results = model.fit()
results.summary()
Out[27]:
Here are the variables that seemed most promising.
In [28]:
formula = 'boy ~ agepreg + hpagelb + birthord + C(race)'
model = smf.logit(formula, data=live)
results = model.fit()
results.summary()
Out[28]:
To make a prediction, we have to extract the exogenous and endogenous variables.
In [29]:
endog = pd.DataFrame(model.endog, columns=[model.endog_names])
exog = pd.DataFrame(model.exog, columns=model.exog_names)
The baseline prediction strategy is to guess "boy". In that case, we're right almost 51% of the time.
In [30]:
actual = endog['boy']
baseline = actual.mean()
baseline
Out[30]:
If we use the previous model, we can compute the number of predictions we get right.
In [31]:
predict = (results.predict() >= 0.5)
true_pos = predict * actual
true_neg = (1 - predict) * (1 - actual)
sum(true_pos), sum(true_neg)
Out[31]:
And the accuracy, which is slightly higher than the baseline.
In [32]:
acc = (sum(true_pos) + sum(true_neg)) / len(actual)
acc
Out[32]:
To make a prediction for an individual, we have to get their information into a DataFrame.
In [33]:
columns = ['agepreg', 'hpagelb', 'birthord', 'race']
new = pd.DataFrame([[35, 39, 3, 2]], columns=columns)
y = results.predict(new)
y
Out[33]:
This person has a 51% chance of having a boy (according to the model).
Exercise: Suppose one of your co-workers is expecting a baby and you are participating in an office pool to predict the date of birth. Assuming that bets are placed during the 30th week of pregnancy, what variables could you use to make the best prediction? You should limit yourself to variables that are known before the birth, and likely to be available to the people in the pool.
In [34]:
import first
live, firsts, others = first.MakeFrames()
live = live[live.prglngth>30]
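One way to search for candidates is to reuse the mining approach from earlier, regressing prglngth on each variable in turn. The sketch below assumes the join DataFrame and the MiningReport helper defined above are still available; it is a starting point, not a vetted variable list.
def MinePrglngth(df):
    """Searches for variables that predict pregnancy length (a sketch)."""
    variables = []
    for name in df.columns:
        try:
            if df[name].var() < 1e-7:
                continue
            results = smf.ols('prglngth ~ ' + name, data=df).fit()
            variables.append((results.rsquared, name))
        except (ValueError, TypeError):
            continue
    return variables

MiningReport(MinePrglngth(join))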
The following are the only variables I found that have a statistically significant effect on pregnancy length.
In [35]:
import statsmodels.formula.api as smf
model = smf.ols('prglngth ~ birthord==1 + race==2 + nbrnaliv>1', data=live)
results = model.fit()
results.summary()
Out[35]:
Exercise: The Trivers-Willard hypothesis suggests that for many mammals the sex ratio depends on “maternal condition”; that is, factors like the mother’s age, size, health, and social status. See https://en.wikipedia.org/wiki/Trivers-Willard_hypothesis
Some studies have shown this effect among humans, but results are mixed. In this chapter we tested some variables related to these factors, but didn’t find any with a statistically significant effect on sex ratio.
As an exercise, use a data mining approach to test the other variables in the pregnancy and respondent files. Can you find any factors with a substantial effect?
In [36]:
import regression
join = regression.JoinFemResp(live)
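As a starting point, here is a sketch of one way to proceed: fit a logistic model of sex on each candidate “maternal condition” variable and report the p-values. The candidate names below are example NSFG variables, not a vetted list, and boy has to be rebuilt because live was re-created above.
# Rebuild the dependent variable on the joined frame.
join['boy'] = (join.babysex == 1).astype(int)

def MineSexRatio(df, candidates):
    """Fits boy ~ var for each candidate and returns (p-value, name) pairs."""
    results = []
    for name in candidates:
        try:
            res = smf.logit('boy ~ ' + name, data=df).fit(disp=False)
            pvalue = res.pvalues.drop('Intercept').min()
            results.append((pvalue, name))
        except (ValueError, TypeError, KeyError):
            continue
    return sorted(results)

# Example candidates related to the mother's age, income, and household.
MineSexRatio(join, ['agepreg', 'hpagelb', 'totincr', 'paydu'])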
In [37]:
# Solution goes here
In [38]:
# Solution goes here
In [39]:
# Solution goes here
Exercise: If the quantity you want to predict is a count, you can use Poisson regression, which is implemented in StatsModels with a function called poisson. It works the same way as ols and logit. As an exercise, let’s use it to predict how many children a woman has borne; in the NSFG dataset, this variable is called numbabes.
Suppose you meet a woman who is 35 years old, black, and a college graduate whose annual household income exceeds $75,000. How many children would you predict she has borne?
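Here is a sketch of how such a model might look, not the official solution. Only numbabes is named in the exercise; the other variable names (age_r, race, totincr, educat) and the numeric codes in the prediction row are my assumptions about the NSFG codebook and should be verified before trusting the result.
# A sketch: Poisson regression of number of children on respondent attributes.
# numbabes may contain sentinel codes (e.g. for "refused") that should be
# replaced with NaN before fitting; that cleaning step is omitted here.
formula = 'numbabes ~ age_r + C(race) + totincr + educat'
results = smf.poisson(formula, data=join).fit()

# Hypothetical respondent: 35 years old, black, college graduate, income in
# the top bracket; the codes (race=1, totincr=14, educat=16) are guesses.
columns = ['age_r', 'race', 'totincr', 'educat']
new = pd.DataFrame([[35, 1, 14, 16]], columns=columns)
results.predict(new)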
In [40]:
# Solution goes here
In [41]:
# Solution goes here
Now we can predict the number of children for a woman who is 35 years old, black, and a college graduate whose annual household income exceeds $75,000.
In [42]:
# Solution goes here
Exercise: If the quantity you want to predict is categorical, you can use multinomial logistic regression, which is implemented in StatsModels with a function called mnlogit. As an exercise, let’s use it to guess whether a woman is married, cohabitating, widowed, divorced, separated, or never married; in the NSFG dataset, marital status is encoded in a variable called rmarital.
Suppose you meet a woman who is 25 years old, white, and a high school graduate whose annual household income is about $45,000. What is the probability that she is married, cohabitating, etc?
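Again, a sketch rather than the official solution; only rmarital is named in the exercise, and the other variable names and numeric codes are assumptions to be checked against the NSFG codebook.
# A sketch: multinomial logistic regression of marital status.
formula = 'rmarital ~ age_r + C(race) + totincr + educat'
results = smf.mnlogit(formula, data=join).fit()

# Hypothetical respondent: 25 years old, white, high school graduate, household
# income around $45,000; the codes (race=2, totincr=11, educat=12) are guesses.
columns = ['age_r', 'race', 'totincr', 'educat']
new = pd.DataFrame([[25, 2, 11, 12]], columns=columns)
results.predict(new)  # one column of probabilities per rmarital category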
In [43]:
# Solution goes here
Make a prediction for a woman who is 25 years old, white, and a high school graduate whose annual household income is about $45,000.
In [44]:
# Solution goes here
In [ ]: