Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
In [1]:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import random
import thinkstats2
import thinkplot
The following is a version of thinkstats2.HypothesisTest with just the essential methods:
In [2]:
class HypothesisTest(object):

    def __init__(self, data):
        self.data = data
        self.MakeModel()
        self.actual = self.TestStatistic(data)

    def PValue(self, iters=1000):
        self.test_stats = [self.TestStatistic(self.RunModel())
                           for _ in range(iters)]

        count = sum(1 for x in self.test_stats if x >= self.actual)
        return count / iters

    def TestStatistic(self, data):
        raise UnimplementedMethodException()

    def MakeModel(self):
        pass

    def RunModel(self):
        raise UnimplementedMethodException()
And here's an example that uses it to compute the p-value of an experiment where we toss a coin 250 times and get 140 heads.
In [3]:
class CoinTest(HypothesisTest):

    def TestStatistic(self, data):
        heads, tails = data
        test_stat = abs(heads - tails)
        return test_stat

    def RunModel(self):
        heads, tails = self.data
        n = heads + tails
        sample = [random.choice('HT') for _ in range(n)]
        hist = thinkstats2.Hist(sample)
        data = hist['H'], hist['T']
        return data
The p-value turns out to be about 7%, which is considered on the border of statistical significance.
In [4]:
ct = CoinTest((140, 110))
pvalue = ct.PValue()
pvalue
Out[4]:
To compute the p-value of an observed difference in means, we can model the null hypothesis by permutation: pool the values from both groups, shuffle them, and split them into two random groups with the same sizes as the originals.
In [5]:
class DiffMeansPermute(thinkstats2.HypothesisTest):

    def TestStatistic(self, data):
        group1, group2 = data
        test_stat = abs(group1.mean() - group2.mean())
        return test_stat

    def MakeModel(self):
        group1, group2 = self.data
        self.n, self.m = len(group1), len(group2)
        self.pool = np.hstack((group1, group2))

    def RunModel(self):
        np.random.shuffle(self.pool)
        data = self.pool[:self.n], self.pool[self.n:]
        return data
Here's an example where we test the observed difference in pregnancy length for first babies and others.
In [6]:
import first
live, firsts, others = first.MakeFrames()
data = firsts.prglngth.values, others.prglngth.values
The p-value is about 17%, which means it is plausible that the observed difference is just the result of random sampling, and might not be generally true in the population.
In [7]:
ht = DiffMeansPermute(data)
pvalue = ht.PValue()
pvalue
Out[7]:
Here's the distribution of the test statistic (the difference in means) over many simulated samples:
In [8]:
ht.PlotCdf()
thinkplot.Config(xlabel='test statistic',
                 ylabel='CDF')
Under the null hypothesis, we often see differences bigger than the observed difference.
In [9]:
class DiffMeansOneSided(DiffMeansPermute):

    def TestStatistic(self, data):
        group1, group2 = data
        test_stat = group1.mean() - group2.mean()
        return test_stat
If the hypothesis under test is that first babies come late, the appropriate test statistic is the raw difference between first babies and others, rather than the absolute value of the difference. In that case, the p-value is smaller, because we are testing a more specific hypothesis.
In [10]:
ht = DiffMeansOneSided(data)
pvalue = ht.PValue()
pvalue
Out[10]:
But in this example, the result is still not statistically significant.
We can use the same framework to test the difference in standard deviation between the two groups:
In [11]:
class DiffStdPermute(DiffMeansPermute):

    def TestStatistic(self, data):
        group1, group2 = data
        test_stat = group1.std() - group2.std()
        return test_stat
In [12]:
ht = DiffStdPermute(data)
pvalue = ht.PValue()
pvalue
Out[12]:
But that's not statistically significant either.
To test a correlation, we can model the null hypothesis that the two variables are unrelated by permuting one of the sequences:
In [13]:
class CorrelationPermute(thinkstats2.HypothesisTest):

    def TestStatistic(self, data):
        xs, ys = data
        test_stat = abs(thinkstats2.Corr(xs, ys))
        return test_stat

    def RunModel(self):
        xs, ys = self.data
        xs = np.random.permutation(xs)
        return xs, ys
Here's an example testing the correlation between birth weight and mother's age.
In [14]:
cleaned = live.dropna(subset=['agepreg', 'totalwgt_lb'])
data = cleaned.agepreg.values, cleaned.totalwgt_lb.values
ht = CorrelationPermute(data)
pvalue = ht.PValue()
pvalue
Out[14]:
The reported p-value is 0, which means that in 1000 trials under the null hypothesis we never saw a correlation that exceeded the observed correlation. That means the p-value is probably smaller than $1/1000$, but it is not actually 0.
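One way to tighten that bound (an illustrative re-run, not part of the book's text) is to increase the number of iterations:
# with 10000 trials and no exceedances, the p-value is probably below 1/10000
pvalue = ht.PValue(iters=10000)
pvalue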
To get a sense of how unexpected the observed value is under the null hypothesis, we can compare the actual correlation to the largest value we saw in the simulations.
In [15]:
ht.actual, ht.MaxTestStat()
Out[15]:
The following test checks whether a six-sided die is crooked, using the total absolute deviation between observed and expected frequencies as the test statistic:
In [16]:
class DiceTest(thinkstats2.HypothesisTest):

    def TestStatistic(self, data):
        observed = data
        n = sum(observed)
        expected = np.ones(6) * n / 6
        test_stat = sum(abs(observed - expected))
        return test_stat

    def RunModel(self):
        n = sum(self.data)
        values = [1, 2, 3, 4, 5, 6]
        rolls = np.random.choice(values, n, replace=True)
        hist = thinkstats2.Hist(rolls)
        freqs = hist.Freqs(values)
        return freqs
Here's an example using the data from the book:
In [17]:
data = [8, 9, 19, 5, 8, 11]
dt = DiceTest(data)
pvalue = dt.PValue(iters=10000)
pvalue
Out[17]:
The observed deviation from the expected values is not statistically significant.
By convention, it is more common to test data like this using the chi-squared statistic:
In [18]:
class DiceChiTest(DiceTest):

    def TestStatistic(self, data):
        observed = data
        n = sum(observed)
        expected = np.ones(6) * n / 6
        test_stat = sum((observed - expected)**2 / expected)
        return test_stat
Using this test, we get a smaller p-value:
In [19]:
dt = DiceChiTest(data)
pvalue = dt.PValue(iters=10000)
pvalue
Out[19]:
Taking this result at face value, we might consider the data statistically significant, but considering the results of both tests, I would not draw any strong conclusions.
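As a cross-check (assuming scipy is available; this is not part of the book's code), scipy.stats.chisquare computes the same statistic along with an analytic p-value based on the chi-squared distribution:
from scipy import stats

# by default, chisquare compares the observed frequencies to a uniform
# expectation, which matches the null hypothesis of a fair die
stats.chisquare(data)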
In [20]:
class PregLengthTest(thinkstats2.HypothesisTest):

    def MakeModel(self):
        firsts, others = self.data
        self.n = len(firsts)
        self.pool = np.hstack((firsts, others))

        pmf = thinkstats2.Pmf(self.pool)
        self.values = range(35, 44)
        self.expected_probs = np.array(pmf.Probs(self.values))

    def RunModel(self):
        np.random.shuffle(self.pool)
        data = self.pool[:self.n], self.pool[self.n:]
        return data

    def TestStatistic(self, data):
        firsts, others = data
        stat = self.ChiSquared(firsts) + self.ChiSquared(others)
        return stat

    def ChiSquared(self, lengths):
        hist = thinkstats2.Hist(lengths)
        observed = np.array(hist.Freqs(self.values))
        expected = self.expected_probs * len(lengths)
        stat = sum((observed - expected)**2 / expected)
        return stat
If we specifically test the deviations of first babies and others from the expected number of births in each week of pregnancy, the results are statistically significant with a very small p-value. But at this point we have run so many tests that we should not be surprised to find at least one that seems significant.
In [21]:
data = firsts.prglngth.values, others.prglngth.values
ht = PregLengthTest(data)
p_value = ht.PValue()
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
In [22]:
def FalseNegRate(data, num_runs=1000):
    """Computes the chance of a false negative based on resampling.

    data: pair of sequences
    num_runs: how many experiments to simulate

    returns: float false negative rate
    """
    group1, group2 = data
    count = 0

    for i in range(num_runs):
        sample1 = thinkstats2.Resample(group1)
        sample2 = thinkstats2.Resample(group2)
        ht = DiffMeansPermute((sample1, sample2))
        p_value = ht.PValue(iters=101)
        if p_value > 0.05:
            count += 1

    return count / num_runs
In [23]:
neg_rate = FalseNegRate(data)
neg_rate
Out[23]:
In this example, the false negative rate is 70%, which means that the power of the test (probability of statistical significance if the actual difference is 0.078 weeks) is only 30%.
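Equivalently (a one-line check, not in the book's text), the power is the complement of the false negative rate:
# power = probability of reaching statistical significance when the effect is real
power = 1 - neg_rate
power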
Exercise: As sample size increases, the power of a hypothesis test increases, which means it is more likely to be positive if the effect is real. Conversely, as sample size decreases, the test is less likely to be positive even if the effect is real.
To investigate this behavior, run the tests in this chapter with different subsets of the NSFG data. You can use thinkstats2.SampleRows to select a random subset of the rows in a DataFrame (one possible approach is sketched below).
What happens to the p-values of these tests as sample size decreases? What is the smallest sample size that yields a positive test?
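A minimal sketch of one approach, not the book's solution; the helper RunSubsetTest is hypothetical, and it assumes thinkstats2.SampleRows(df, nrows) returns a random subset of rows:
# hypothetical helper: subsample the live births, split into first babies
# and others, and run the permutation test on pregnancy lengths
def RunSubsetTest(live, n):
    subset = thinkstats2.SampleRows(live, n)
    firsts = subset[subset.birthord == 1]
    others = subset[subset.birthord != 1]
    data = firsts.prglngth.values, others.prglngth.values
    ht = DiffMeansPermute(data)
    return ht.PValue()

# watch how the p-value behaves as the sample size shrinks
for n in [9000, 4500, 2250, 1125]:
    print(n, RunSubsetTest(live, n))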
In [24]:
# Solution goes here
In [25]:
# Solution goes here
In [26]:
# Solution goes here
Exercise: In Section 9.3, we simulated the null hypothesis by permutation; that is, we treated the observed values as if they represented the entire population, and randomly assigned the members of the population to the two groups.
An alternative is to use the sample to estimate the distribution for the population, then draw a random sample from that distribution. This process is called resampling. There are several ways to implement resampling, but one of the simplest is to draw a sample with replacement from the observed values, as in Section 9.10.
Write a class named DiffMeansResample that inherits from DiffMeansPermute and overrides RunModel to implement resampling, rather than permutation.
Use this model to test the differences in pregnancy length and birth weight. How much does the model affect the results?
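A minimal sketch of one possible implementation (not necessarily the book's solution), drawing both groups with replacement from the pooled values:
class DiffMeansResample(DiffMeansPermute):

    def RunModel(self):
        # draw both groups with replacement from the pooled values
        group1 = np.random.choice(self.pool, self.n, replace=True)
        group2 = np.random.choice(self.pool, self.m, replace=True)
        return group1, group2

# example run on pregnancy lengths
data = firsts.prglngth.values, others.prglngth.values
ht = DiffMeansResample(data)
ht.PValue()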
In [27]:
# Solution goes here
In [28]:
# Solution goes here
In [29]:
# Solution goes here
In [30]:
# Solution goes here