Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
In [1]:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
In [2]:
# Get utils.py and create directories
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/code/soln/utils.py
if not os.path.exists('figs'):
!mkdir figs
In [3]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from empiricaldist import Pmf, Cdf
from utils import decorate, savefig
In the previous chapter we computed distributions of sums, differences, products, and quotients.
In this chapter, we'll compute distributions of minima and maxima and use them to solve inference problems. Then we'll look at distributions that are mixtures of other distributions, which will turn out to be particularly useful for making predictions.
But we'll start with a powerful tool for working with distributions, the cumulative distribution function.
So far we have been using probability mass functions to represent distributions. A useful alternative is the cumulative distribution function, or CDF.
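To make that concrete before we get to an example, here's a minimal sketch (my addition, not part of the original problem): the CDF of a fair six-sided die, computed by hand.

die = Pmf(1/6, np.arange(1, 7))    # uniform PMF for one die
die.cumsum()                       # cumulative sums: 1/6, 2/6, ..., 6/6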
As an example, I'll use the posterior distribution from the Euro problem, which we computed in Chapter 3.
Here's the uniform prior we started with.
In [4]:
hypos = np.linspace(0, 1, 101)
pmf = Pmf(1, hypos)
data = 140, 250
And here's the update.
In [5]:
from scipy.stats import binom
def update_binomial(pmf, data):
"""Update the PMF using the binomial distribution.
pmf: Pmf representing the prior
data: tuple of integers k and n
"""
k, n = data
xs = pmf.qs
likelihood = binom.pmf(k, n, xs)
pmf *= likelihood
pmf.normalize()
In [6]:
update_binomial(pmf, data)
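As a quick sanity check (a sketch), the posterior mode should be close to the observed proportion of heads, 140/250 = 0.56:

pmf.max_prob()    # quantity with the highest posterior probability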
The CDF is the cumulative sum of the PMF, so we can compute it like this:
In [7]:
cumulative = pmf.cumsum()
Here's what it looks like.
In [8]:
def decorate_euro(title):
decorate(xlabel='Proportion of heads (x)',
ylabel='Probability',
title=title)
In [9]:
cumulative.plot(label='CDF')
pmf.plot(label='PMF')
decorate_euro(title='Posterior distribution for the Euro problem')
savefig('fig06-01')
The range of the CDF is always from 0 to 1, in contrast with the PMF, where the maximum can be any probability.
The result is a Pandas Series, so we can use the bracket operator to select an element:
In [10]:
cumulative[0.61]
The result is about 0.96, which means that the total probability of all quantities less than or equal to 0.61 is 96%.
To go the other way --- to look up a probability and get the corresponding quantile --- we can use interpolation:
In [11]:
from scipy.interpolate import interp1d
ps = cumulative.values
qs = cumulative.index
interp = interp1d(ps, qs)
interp(0.96)
The result is about 0.61, so that confirms that the 96th percentile of this distribution is 0.61.
`empiricaldist` provides a class called `Cdf` that represents a cumulative distribution function. Given a `Pmf`, you can compute a `Cdf` like this:
In [12]:
cdf = pmf.make_cdf()
`make_cdf` uses `np.cumsum` to compute the cumulative sum of the probabilities.
You can use brackets to select an element from a `Cdf`:
In [13]:
cdf[0.61]
But if you look up a quantity that's not in the distribution, you get a `KeyError`.
In [14]:
try:
cdf[0.615]
except KeyError as e:
print('KeyError')
You can also call a `Cdf` as a function, using parentheses. If the argument does not appear in the `Cdf`, it interpolates between quantities.
In [15]:
cdf(0.615)
Going the other way, you can use `quantile` to look up a cumulative probability and get the corresponding quantity:
In [16]:
cdf.quantile(0.9638303)
`Cdf` also provides `credible_interval`, which computes a credible interval that contains the given probability:
In [17]:
cdf.credible_interval(0.9)
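If I understand the implementation correctly, `credible_interval(p)` returns the central interval between the quantiles `(1-p)/2` and `(1+p)/2`, so we should be able to reproduce it with `quantile` (a sketch, assuming `quantile` accepts an array of probabilities):

cdf.quantile([0.05, 0.95])    # should match credible_interval(0.9)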
CDFs and PMFs are equivalent in the sense that they contain the
same information about the distribution, and you can always convert
from one to the other.
Given a `Cdf`, you can get the equivalent `Pmf` like this:
In [18]:
pmf = cdf.make_pmf()
`make_pmf` uses `np.diff` to compute differences between consecutive cumulative probabilities.
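As a consistency check (a sketch), converting a `Pmf` to a `Cdf` and back should reproduce the original probabilities:

pmf2 = pmf.make_cdf().make_pmf()
np.allclose(pmf.ps, pmf2.ps)    # expect True, up to floating-point error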
One reason `Cdf` objects are useful is that they compute quantiles efficiently. Another is that they make it easy to compute the distribution of a maximum or minimum, as we'll see in the next section.
In Dungeons & Dragons, each character has six attributes: strength, intelligence, wisdom, dexterity, constitution, and charisma.
To generate a new character, players roll four 6-sided dice for each attribute and add up the best three. For example, if I roll for strength and get 1, 2, 3, 4 on the dice, my character's strength would be 9.
As an exercise, let's figure out the distribution of these attributes. Then, for each character, we'll figure out the distribution of their best attribute.
In the previous notebook, we computed the distribution of the sum of three dice like this:
In [19]:
def make_die(sides):
"""Pmf that represents a die with the given number of sides.
sides: int
returns: Pmf
"""
outcomes = np.arange(1, sides+1)
die = Pmf(1/sides, outcomes)
return die
In [20]:
def add_dist_seq(seq):
"""Distribution of sum of quantities from PMFs.
seq: sequence of Pmf objects
returns: Pmf
"""
total = seq[0]
for other in seq[1:]:
total = total.add_dist(other)
return total
In [21]:
die = make_die(6)
dice = [die] * 3
In [22]:
pmf_3d6 = add_dist_seq(dice)
Here's what that distribution looks like:
In [23]:
def decorate_dice(title=''):
decorate(xlabel='Outcome',
ylabel='PMF',
title=title)
In [24]:
pmf_3d6.plot()
decorate_dice('Distribution of attributes')
But if we roll four dice and add up the best three, computing the distribution of the sum is a bit more complicated. I'll estimate the distribution by simulating 10,000 rolls.
First I'll create an array of random values from 1 to 6, with 10,000 rows and 4 columns:
In [25]:
n = 10000
a = np.random.randint(1, 7, size=(n, 4))
To find the best three outcomes in each row, I'll sort along `axis=1`, which means across the columns.
In [26]:
a.sort(axis=1)
Finally, I'll select the last three columns and add them up.
In [27]:
t = a[:, 1:].sum(axis=1)
Now `t` is an array of 10,000 values, one for each simulated roll. We can compute the PMF of the values in `t` like this:
In [28]:
pmf_4d6 = Pmf.from_seq(t)
The following figure shows the distribution of the sum of three dice, `pmf_3d6`, and the distribution of the best three out of four, `pmf_4d6`.
In [29]:
pmf_3d6.plot(label='sum of 3 dice')
pmf_4d6.plot(label='best 3 of 4')
decorate_dice('Distribution of attributes')
savefig('fig06-02')
As you might expect, choosing the best three out of four tends to yield higher values.
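We can quantify that by comparing the means (the second value comes from simulation, so it varies slightly from run to run):

pmf_3d6.mean(), pmf_4d6.mean()    # about 10.5 versus roughly 12.2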
Next we'll find the distribution for the maximum of six attributes, each the sum of the best three of four dice.
In [30]:
cdf_4d6 = pmf_4d6.make_cdf()
Recall that `Cdf(x)` is the sum of probabilities for quantities less than or equal to `x`. Equivalently, it is the probability that a random value chosen from the distribution is less than or equal to `x`.
Now suppose I draw 6 values from this distribution. The probability that all 6 of them are less than or equal to `x` is `Cdf(x)` raised to the 6th power, which we can compute like this:
In [31]:
cdf_4d6**6
If all 6 values are less than or equal to `x`, that means their maximum is less than or equal to `x`. So the result is the CDF of their maximum.
We can convert it to a `Cdf` object like this:
In [32]:
from empiricaldist import Cdf
cdf_max6 = Cdf(cdf_4d6**6)
And compute the equivalent `Pmf` like this:
In [33]:
pmf_max6 = cdf_max6.make_pmf()
The following figure shows the result.
In [34]:
pmf_max6.plot(label='max of 6 attributes')
decorate_dice('Distribution of attributes')
Most characters have at least one attribute greater than 12; almost 10% of them have an 18.
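As a sanity check (a sketch; the result varies from run to run), we can estimate the same quantity by simulating six attributes per character and taking the maximum:

rolls = np.random.randint(1, 7, size=(10000, 6, 4))
rolls.sort(axis=2)                     # sort the four dice for each attribute
attrs = rolls[:, :, 1:].sum(axis=2)    # keep the best three of each roll
best = attrs.max(axis=1)               # each character's best attribute
(best == 18).mean()                    # compare with pmf_max6[18]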
The following figure shows the CDFs for the three distributions we have computed.
In [35]:
cdf_3d6 = pmf_3d6.make_cdf()
cdf_3d6.plot(label='sum of 3 dice')
cdf_4d6 = pmf_4d6.make_cdf()
cdf_4d6.plot(label='best 3 of 4')
cdf_max6.plot(label='max of 6 attributes')
decorate_dice('Distribution of attributes')
plt.ylabel('CDF');
`Pmf` and `Cdf` provide `max_dist`, which does the same computation. We can compute the `Pmf` of the maximum like this:
In [36]:
pmf_max_dist6 = pmf_4d6.max_dist(6)
And the `Cdf` of the maximum like this:
In [37]:
cdf_max_dist6 = cdf_4d6.max_dist(6)
And we can confirm that the differences are small.
In [38]:
np.max(np.abs(pmf_max_dist6 - pmf_max6))
In [39]:
np.max(np.abs(cdf_max_dist6 - cdf_max6))
In the next section we'll find the distribution of the minimum. The process is similar, but a little more complicated. See if you can figure it out before you go on. We'll start by computing the complementary CDF:
In [40]:
prob_gt = 1 - cdf_4d6
prob_gt
As the variable name suggests, the complementary CDF is the probability that a value from the distribution is greater than `x`.
If we draw 6 values from the distribution, the probability that all 6 exceed `x` is:
In [41]:
prob_gt6 = prob_gt**6
prob_gt6
If all 6 exceed `x`, that means their minimum exceeds `x`, so `prob_gt6` is the complementary CDF of the minimum.
And that means we can compute the CDF of the minimum like this:
In [42]:
prob_le6 = 1-prob_gt6
prob_le6
The result is a Pandas Series that represents the CDF of the minimum of six attributes. We can put those values in a `Cdf` object like this:
In [43]:
cdf_min6 = Cdf(prob_le6)
Here's what it looks like.
In [44]:
cdf_min6.plot(color='C2', label='minimum of 6')
cdf_max6.plot(color='C3', label='maximum of 6')
decorate_dice('Minimum and maximum of six attributes')
plt.ylabel('CDF')
savefig('fig06-03')
From the `Cdf` we can make the corresponding `Pmf`:
In [45]:
pmf_min6 = cdf_min6.make_pmf()
`Pmf` and `Cdf` provide `min_dist`, which does the same computation. We can compute the `Pmf` of the minimum like this:
In [46]:
pmf_min_dist6 = pmf_4d6.min_dist(6)
And the `Cdf` of the minimum like this:
In [47]:
cdf_min_dist6 = cdf_4d6.min_dist(6)
And we can confirm that the differences are small.
In [48]:
np.max(np.abs(pmf_min_dist6 - pmf_min6))
In [49]:
np.max(np.abs(cdf_min_dist6 - cdf_min6))
In the exercises at the end of this notebook, you'll use distributions of the minimum and maximum to do Bayesian inference. But first we'll see what happens when we mix distributions.
Let's do one more example inspired by Dungeons & Dragons. Suppose I have a 4-sided die and a 6-sided die. I choose one of them at random and roll it. What is the distribution of the outcome?
If you know which die it is, the answer is easy.
A die with `n` sides yields a uniform distribution from 1 to `n`, including both. We can compute `Pmf` objects to represent the dice, like this:
In [50]:
d4 = make_die(4)
d6 = make_die(6)
To compute the distribution of the mixture, we can compute the average of the two distributions by adding them and dividing the result by 2:
In [51]:
total = Pmf.add(d4, d6, fill_value=0) / 2
total
We have to use `Pmf.add` with `fill_value=0` because the two distributions don't have the same set of quantities. If they did, we could use the `+` operator.
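To see why, here's what happens without `fill_value` (a sketch): Pandas aligns the quantities, and the ones that appear in only one distribution come out as `NaN`.

Pmf.add(d4, d6)    # quantities 5 and 6 are NaN without fill_value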
Here's what the mixture of these distributions looks like.
In [52]:
mix = Pmf(total)
mix.bar(alpha=0.7)
decorate_dice()
Now suppose I have a 4-sided die and two 6-sided dice. Again, I choose one of them at random and roll it. What is the distribution of the outcome?
We can solve this problem by computing a weighted average of the distributions, like this:
In [53]:
total = Pmf.add(d4, 2*d6, fill_value=0) / 3
Here's what it looks like.
In [54]:
mix = Pmf(total)
mix.normalize()
mix.bar(alpha=0.7)
decorate_dice()
Finally, suppose we have a box with the following mix:
1 4-sided die
2 6-sided dice
3 8-sided dice
If I draw a die from this mix at random, we can use a `Pmf` to represent the hypothetical number of sides on the die:
In [55]:
hypos = [4,6,8]
counts = [1,2,3]
pmf_dice = Pmf(counts, hypos)
pmf_dice.normalize()
pmf_dice
And I'll make a sequence of `Pmf` objects to represent the dice:
In [56]:
dice = [make_die(sides) for sides in hypos]
Now we have to multiply each distribution in `dice` by the corresponding probabilities in `pmf_dice`. To express this computation concisely, it is convenient to put the distributions into a Pandas `DataFrame`:
In [57]:
pd.DataFrame(dice)
The result is a DataFrame with one row for each distribution and one column for each possible outcome.
Not all rows are the same length, so Pandas fills the extra spaces with the special value `NaN`, which stands for "not a number". We can use `fillna` to replace the `NaN` values with 0.
In [58]:
pd.DataFrame(dice).fillna(0)
Before we multiply by the probabilities in `pmf_dice`, we have to transpose the matrix so the distributions run down the columns rather than across the rows:
In [59]:
df = pd.DataFrame(dice).fillna(0).transpose()
df
Now we can multiply by the probabilities:
In [60]:
df *= pmf_dice.ps
df
And add up the weighted distributions:
In [61]:
df.sum(axis=1)
The argument `axis=1` means we want to sum along each row, that is, across the columns. The result is a Pandas Series with one probability for each possible outcome.
Putting it all together, here's a function that makes a weighted mixture of distributions.
In [62]:
def make_mixture(pmf, pmf_seq):
"""Make a mixture of distributions.
pmf: mapping from each hypothesis to its probability
pmf_seq: sequence of Pmfs, each representing
a conditional distribution for one hypothesis
returns: Pmf representing the mixture
"""
df = pd.DataFrame(pmf_seq).fillna(0).transpose()
df *= pmf.ps
total = df.sum(axis=1)
return Pmf(total)
The first parameter is a `Pmf` that maps from each hypothesis to its probability. The second parameter is a sequence of `Pmf` objects, one for each hypothesis.
We can call it like this:
In [63]:
mix = make_mixture(pmf_dice, dice)
And here's what it looks like.
In [64]:
mix.bar(label='mixture', alpha=0.6)
decorate_dice('Mixture of Uniform Distributions')
savefig('fig06-04')
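As a check on the result (a sketch), the mean of a mixture should equal the weighted average of the means of its components:

means = [die.mean() for die in dice]
np.sum(pmf_dice.ps * means), mix.mean()    # the two values should agree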
We have seen two representations of distributions: `Pmf` and `Cdf` objects.
These representations are equivalent in the sense that they contain
the same information, so you can convert from one to the other. The
primary difference between them is performance: some operations are
faster and easier with a `Pmf`; others are faster with a `Cdf`.
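For example, here is roughly how a quantile lookup can work once the cumulative probabilities are precomputed (a sketch, not necessarily empiricaldist's actual implementation): a single binary search.

def quantile_sketch(cdf, p):
    # Find the first quantity whose cumulative probability reaches p.
    index = np.searchsorted(cdf.ps, p)
    return cdf.qs[index]

quantile_sketch(cdf_4d6, 0.5)    # the median attribute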
In this chapter we used `Cdf` objects to compute distributions of maxima and minima; these distributions are useful for inference if we are given a maximum or minimum as data.
We also computed mixtures of distributions, which we will use in the next chapter to make predictions.
Exercise: When you generate a D&D character, instead of rolling dice, you can use the "standard array" of attributes, which is 15, 14, 13, 12, 10, and 8.
Do you think you are better off using the standard array or (literally) rolling the dice?
Compare the distribution of the values in the standard array to the distribution we computed for the best three out of four:
Which distribution has higher mean? Use the `mean` method.
Which distribution has higher standard deviation? Use the `std` method.
The lowest value in the standard array is 8. For each attribute, what is the probability of getting a value less than 8? If you roll the dice six times, what's the probability that at least one of your attributes is less than 8?
The highest value in the standard array is 15. For each attribute, what is the probability of getting a value greater than 15? If you roll the dice six times, what's the probability that at least one of your attributes is greater than 15?
To get you started, here's a `Cdf` that represents the distribution of attributes in the standard array:
In [65]:
standard = [15,14,13,12,10,8]
cdf_standard = Cdf.from_seq(standard)
We can compare it to the distribution of attributes you get by rolling four dice and adding up the best three.
In [66]:
cdf_4d6.plot(label='best 3 of 4')
cdf_standard.step(label='standard set')
decorate_dice('Distribution of attributes')
plt.ylabel('CDF');
I plotted `cdf_standard` as a step function to show more clearly that it contains only a few quantities.
In [67]:
# Solution goes here
In [68]:
# Solution goes here
In [69]:
# Solution goes here
In [70]:
# Solution goes here
In [71]:
# Solution goes here
In [72]:
# Solution goes here
Exercise: Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die. I choose one of the dice at random, roll it, and report that the outcome is a 1. If I roll the same die again, what is the probability that I get another 1?
Hint: Compute the posterior distribution as we have done before and pass it as one of the arguments to `make_mixture`.
In [73]:
# Solution goes here
In [74]:
# Solution goes here
In [75]:
# Solution goes here
In [76]:
# Solution goes here
Exercise: Suppose I have two boxes of dice:
One contains a 4-sided die and a 6-sided die.
The other contains a 6-sided die and an 8-sided die.
I choose a box at random, choose a die, and roll it 3 times. If I get 2, 4, and 6, which box do you think I chose?
In [77]:
# Solution goes here
In [78]:
# Solution goes here
In [79]:
# Solution goes here
In [80]:
# Solution goes here
In [81]:
# Solution goes here
In [82]:
# Solution goes here
In [83]:
# Solution goes here
Exercise: Henri Poincaré was a French mathematician who taught at the Sorbonne around 1900. The following anecdote about him is probably fabricated, but it makes an interesting probability problem.
Supposedly Poincaré suspected that his local bakery was selling loaves of bread that were lighter than the advertised weight of 1 kg, so every day for a year he bought a loaf of bread, brought it home and weighed it. At the end of the year, he plotted the distribution of his measurements and showed that it fit a normal distribution with mean 950 g and standard deviation 50 g. He brought this evidence to the bread police, who gave the baker a warning.
For the next year, Poincaré continued the practice of weighing his bread every day. At the end of the year, he found that the average weight was 1000 g, just as it should be, but again he complained to the bread police, and this time they fined the baker.
Why? Because the shape of the distribution was asymmetric. Unlike the normal distribution, it was skewed to the right, which is consistent with the hypothesis that the baker was still making 950 g loaves, but deliberately giving Poincaré the heavier ones.
To see whether this anecdote is plausible, let's suppose that when the baker sees Poincaré coming, he hefts `n` loaves of bread and gives Poincaré the heaviest one. How many loaves would the baker have to heft to make the average of the maximum 1000 g?
To get you started, I'll generate a year's worth of data from a normal distribution with the given parameters.
In [84]:
mean = 950
std = 50
sample = np.random.normal(mean, std, size=365)
In [85]:
# Solution goes here
In [86]:
# Solution goes here
Exercise: Two doctors fresh out of medical school are arguing about whose hospital delivers more babies. The first doctor says, "I've been at Hospital A for two weeks, and already we've had a day when we delivered 20 babies."
The second doctor says, "I've only been at Hospital B for one week, but already there's been a 19-baby day."
Which hospital do you think delivers more babies on average? You can assume that the number of babies born in a day is well modeled by a Poisson distribution with parameter $\lambda$.
For a hypothetical value of $\lambda$, you can compute the PMF of a Poisson distribution like this:
In [87]:
from scipy.stats import poisson
def make_poisson(lam):
high = np.round(lam * 4)
qs = np.arange(0, int(high))
ps = poisson(lam).pmf(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
For example, if the actual value of $\lambda$ is 8, the distribution of babies born in a single day looks like this:
In [88]:
pmf = make_poisson(8)
pmf.plot()
decorate(xlabel='Number of babies',
ylabel='PMF',
title='Distribution of babies in a single day')
The mean of this distribution is the parameter, $\lambda$.
In [89]:
pmf.mean()
And here's what the distributions look like for the maximum number of babies after one week or two weeks.
In [90]:
pmf_max = pmf.max_dist(2 * 7)
pmf_max.plot(label='two weeks')
pmf_max = pmf.max_dist(7)
pmf_max.plot(label='one week')
decorate(xlabel='Number of babies',
ylabel='PMF',
title='Distribution of maximum babies in one day')
Now you can take it from there.
In [91]:
# Solution goes here
In [92]:
# Solution goes here
In [93]:
# Solution goes here
In [94]:
# Solution goes here
In [95]:
# Solution goes here
In [96]:
# Solution goes here
In [97]:
# Solution goes here
Exercise: This question is related to a method I developed for estimating the minimum time for a packet of data to travel through a path in the internet.
Suppose I drive the same route three times and the fastest of the three attempts takes 8 minutes.
There are two traffic lights on the route. As I approach each light, there is a 40% chance that it is green; in that case, it causes no delay. And there is a 60% chance it is red; in that case it causes a delay that is uniformly distributed from 0 to 60 seconds.
What is the posterior distribution of the time it would take to drive the route with no delays?
To get you started, here is the distribution of delays if the light is red.
In [98]:
qs = np.arange(1, 61)
pmf_delay = Pmf(1, qs)
pmf_delay.normalize()
And the distribution of delays if the light is green: always 0.
In [99]:
pmf_nodelay = Pmf.from_seq([0])
pmf_nodelay
Here is the mixture of delays due to red and green lights.
In [100]:
pmf = Pmf([0.4, 0.6])
pmf_total = make_mixture(pmf, [pmf_nodelay, pmf_delay])
pmf_total
Now I suggest the following steps:
Compute the distribution for the sum of two delays.
Compute the distribution for the lowest total delay after three attempts.
Make a prior distribution with a range of possible values for the no-delay travel time.
For each hypothesis, compute the likelihood of the observed minimum travel time, 8 minutes.
Compute the posterior distribution for the no-delay travel time.
In [101]:
# Solution goes here
In [102]:
# Solution goes here
In [103]:
# Solution goes here
In [104]:
# Solution goes here
In [105]:
# Solution goes here
In [106]:
# Solution goes here
In [107]:
# Solution goes here