Bayesian interpretation of medical tests

This notebook explores several problems related to interpreting the results of medical tests.

Copyright 2016 Allen Downey

MIT License: http://opensource.org/licenses/MIT


In [1]:
from __future__ import print_function, division

from thinkbayes2 import Pmf, Suite

from fractions import Fraction

Medical tests

Suppose we test a patient to see if they have a disease, and the test comes back positive. What is the probability that the patient is actually sick (that is, has the disease)?

To answer this question, we need to know:

  • The prevalence of the disease in the population the patient is from. Let's assume the patient is identified as a member of a population where the known prevalence is p.

  • The sensitivity of the test, s, which is the probability of a positive test if the patient is sick.

  • The false positive rate of the test, t, which is the probability of a positive test if the patient is not sick.

Given these parameters, we can compute the probability that the patient is sick, given a positive test.
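
For reference, here is the underlying arithmetic as a standalone sketch; the function name prob_sick_given_pos is just for illustration and is not part of the notebook's library code. By Bayes's theorem, the probability that the patient is sick given a positive test is p*s / (p*s + (1-p)*t).

from fractions import Fraction

def prob_sick_given_pos(p, s, t):
    """Probability of sick given a positive test, by Bayes's theorem."""
    return p * s / (p * s + (1 - p) * t)

prob_sick_given_pos(Fraction(1, 10), Fraction(9, 10), Fraction(3, 10))
# Fraction(1, 4)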

Test class

To do that, I'll define a Test class that extends Suite; it inherits Update from Suite and provides its own Likelihood.

The instance variables of Test are:

  • p, s, and t: Copies of the parameters.
  • d: a dictionary that maps from hypotheses to their probabilities. The hypotheses are the strings sick and notsick.
  • likelihood: a dictionary that encodes the likelihood of the possible data values pos and neg under the hypotheses.

In [2]:
class Test(Suite):
    """Represents beliefs about a patient based on a medical test."""
    
    def __init__(self, p, s, t, label='Test'):
        # initialize the prior probabilities
        d = dict(sick=p, notsick=1-p)
        super(Test, self).__init__(d, label)
        
        # store the parameters
        self.p = p
        self.s = s
        self.t = t
        
        # make a nested dictionary to compute likelihoods
        self.likelihood = dict(pos=dict(sick=s, notsick=t),
                               neg=dict(sick=1-s, notsick=1-t))
        
    def Likelihood(self, data, hypo):
        """
        data: 'pos' or 'neg'
        hypo: 'sick' or 'notsick'
        """
        return self.likelihood[data][hypo]

Now we can create a Test object with parameters chosen for demonstration purposes (most medical tests are better than this!):


In [3]:
p = Fraction(1, 10)     # prevalence
s = Fraction(9, 10)     # sensitivity
t = Fraction(3, 10)     # false positive rate

test = Test(p, s, t)
test.Print()


notsick 9/10
sick 1/10

If you are curious, here's the nested dictionary that encodes the likelihoods:


In [4]:
test.likelihood


Out[4]:
{'neg': {'notsick': Fraction(7, 10), 'sick': Fraction(1, 10)},
 'pos': {'notsick': Fraction(3, 10), 'sick': Fraction(9, 10)}}

And here's how we update the Test object with a positive outcome:


In [5]:
test.Update('pos')
test.Print()


notsick 3/4
sick 1/4

The positive test provides evidence that the patient is sick, increasing the probability from 0.1 to 0.25.
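
As a check, the arithmetic sketched above gives the same answer: p*s / (p*s + (1-p)*t) = (1/10)(9/10) / ((1/10)(9/10) + (9/10)(3/10)) = (9/100) / (36/100) = 1/4.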

Uncertainty about t

So far, this is basic Bayesian inference. Now let's add a wrinkle. Suppose that we don't know the value of t with certainty, but we have reason to believe that t is either 0.2 or 0.4 with equal probability.

Again, we would like to know the probability that a patient who tests positive actually has the disease. As we did with the Red Die problem, we will consider several scenarios:

Scenario A: The patients are drawn at random from the relevant population, and the reason we are uncertain about t is that either (1) there are two versions of the test, with different false positive rates, and we don't know which test was used, or (2) there are two groups of people, the false positive rate is different for different groups, and we don't know which group the patient is in.

Scenario B: As in Scenario A, the patients are drawn at random from the relevant population, but the reason we are uncertain about t is that previous studies of the test have been contradictory. That is, there is only one version of the test, and we have reason to believe that t is the same for all groups, but we are not sure what the correct value of t is.

Scenario C: As in Scenario A, there are two versions of the test or two groups of people. But now the patients are being filtered so we only see the patients who tested positive and we don't know how many patients tested negative. For example, suppose you are a specialist and patients are only referred to you after they test positive.

Scenario D: As in Scenario B, we have reason to think that t is the same for all patients, and as in Scenario C, we only see patients who test positive and don't know how many tested negative.

Scenario A

We can represent this scenario with a hierarchical model, where the levels of the hierarchy are:

  • At the top level, the possible values of t and their probabilities.
  • At the bottom level, the probability that the patient is sick or not, conditioned on t.

To represent the hierarchy, I'll define a MetaTest, which is a Suite that contains Test objects with different values of t as hypotheses.


In [6]:
class MetaTest(Suite):
    """Represents a set of tests with different values of `t`."""
    
    def Likelihood(self, data, hypo):
        """
        data: 'pos' or 'neg'
        hypo: Test object
        """
        # the return value from `Update` is the total probability of the
        # data for a hypothetical value of `t`
        return hypo.Update(data)

To update a MetaTest, we update each of the hypothetical Test objects. The return value from Update is the normalizing constant, which is the total probability of the data under the hypothesis.

We use the normalizing constants from the bottom level of the hierarchy as the likelihoods at the top level.
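
In this example, the total probability of a positive test under a hypothetical value of t is p*s + (1-p)*t: that's (1/10)(9/10) + (9/10)(2/10) = 27/100 if t=0.2, and (1/10)(9/10) + (9/10)(4/10) = 45/100 if t=0.4. These are the likelihoods at the top level.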

Here's how we create the MetaTest for the scenario we described:


In [7]:
q = Fraction(1, 2)
t1 = Fraction(2, 10)
t2 = Fraction(4, 10)

test1 = Test(p, s, t1, 'Test(t=0.2)')
test2 = Test(p, s, t2, 'Test(t=0.4)')

metatest = MetaTest({test1:q, test2:1-q})
metatest.Print()


Test(t=0.2) 1/2
Test(t=0.4) 1/2

At the top level, there are two tests, with different values of t. Initially, they are equally likely.

When we update the MetaTest, it updates the embedded Test objects and then the MetaTest itself.


In [8]:
metatest.Update('pos')


Out[8]:
Fraction(9, 25)

Here are the results.


In [9]:
metatest.Print()


Test(t=0.4) 5/8
Test(t=0.2) 3/8

Because a positive test is more likely if t=0.4, the positive test is evidence in favor of the hypothesis that t=0.4.
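
As a check: multiplying the priors by the top-level likelihoods gives (1/2)(27/100) = 27/200 for t=0.2 and (1/2)(45/100) = 45/200 for t=0.4. Their sum, 72/200 = 9/25, is the normalizing constant returned by Update, and dividing through yields 27/72 = 3/8 and 45/72 = 5/8.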

This MetaTest object represents what we should believe about t after seeing the test, as well as what we should believe about the probability that the patient is sick.

Marginal distributions

To compute the probability that the patient is sick, we have to compute the marginal probabilities of sick and notsick, averaging over the possible values of t. The following function computes this distribution:


In [10]:
def MakeMixture(metapmf, label='mix'):
    """Make a mixture distribution.

    Args:
      metapmf: Pmf that maps from Pmfs to probs.
      label: string label for the new Pmf.

    Returns: Pmf object.
    """
    mix = Pmf(label=label)
    for pmf, p1 in metapmf.Items():
        for x, p2 in pmf.Items():
            mix.Incr(x, p1 * p2)
    return mix

Here's the posterior predictive distribution:


In [11]:
predictive = MakeMixture(metatest)
predictive.Print()


notsick 3/4
sick 1/4

After seeing the test, the probability that the patient is sick is 0.25, which is the same result we got with t=0.3.
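
That's not a coincidence: in this scenario the marginal probability of sick given a positive test reduces to p*s / (p*s + (1-p)*mean(t)), so it depends on the distribution of t only through its mean, which is 0.3 here.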

Two patients

Now suppose you test two patients and they both test positive. What is the probability that they are both sick?

To answer that, I define a few more functions to work with MetaTests:


In [12]:
def MakeMetaTest(p, s, pmf_t):
    """Makes a MetaTest object with the given parameters.
        
    p: prevalence
    s: sensitivity
    pmf_t: Pmf of possible values for `t`
    """
    tests = {}
    for t, q in pmf_t.Items():
        label = 'Test(t=%s)' % str(t)
        tests[Test(p, s, t, label)] = q
    return MetaTest(tests)

def Marginal(metatest):
    """Extracts the marginal distribution of t.
    """
    marginal = Pmf()
    for test, prob in metatest.Items():
        marginal[test.t] = prob
    return marginal

def Conditional(metatest, t):
    """Extracts the distribution of sick/notsick conditioned on t."""
    for test, prob in metatest.Items():
        if test.t == t:
            return test

MakeMetaTest makes a MetaTest object starting with a given PMF of t.

Marginal extracts the PMF of t from a MetaTest.

Conditional takes a specified value for t and returns the PMF of sick and notsick conditioned on t.

I'll test these functions using the same parameters as above:


In [13]:
pmf_t = Pmf({t1:q, t2:1-q})
metatest = MakeMetaTest(p, s, pmf_t)
metatest.Print()


Test(t=1/5) 1/2
Test(t=2/5) 1/2

Here are the results after updating with a positive test:


In [14]:
metatest = MakeMetaTest(p, s, pmf_t)
metatest.Update('pos')
metatest.Print()


Test(t=2/5) 5/8
Test(t=1/5) 3/8

Same as before. Now we can extract the posterior distribution of t.


In [15]:
Marginal(metatest).Print()


1/5 3/8
2/5 5/8

Having seen one positive test, we are a little more inclined to believe that t=0.4; that is, that the false positive rate for this patient/test is high.

And we can extract the conditional distributions for the patient:


In [16]:
cond1 = Conditional(metatest, t1)
cond1.Print()


notsick 2/3
sick 1/3

In [17]:
cond2 = Conditional(metatest, t2)
cond2.Print()


notsick 4/5
sick 1/5

Finally, we can make the posterior marginal distribution of sick/notsick, which is a weighted mixture of the conditional distributions:


In [18]:
MakeMixture(metatest).Print()


notsick 3/4
sick 1/4
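
As a check, the weight on sick is (3/8)(1/3) + (5/8)(1/5) = 1/8 + 1/8 = 1/4, which agrees with the result above.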

At this point we have a MetaTest that contains our updated information about the test (the distribution of t) and about the patient that tested positive.

Now, to compute the probability that both patients are sick, we have to know the distribution of t for both patients. And that depends on details of the scenario.

In Scenario A, the reason we are uncertain about t is either (1) there are two versions of the test, with different false positive rates, and we don't know which test was used, or (2) there are two groups of people, the false positive rate is different for different groups, and we don't know which group the patient is in.

So the value of t for each patient is an independent choice from pmf_t; that is, if we learn something about t for one patient, that tells us nothing about t for other patients.

So if we consider two patients who have tested positive, the MetaTest we just computed represents our belief about each of the two patients independently.

To compute the probability that both patients are sick, we can convolve the two distributions.
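
Assuming Pmf addition in thinkbayes2 enumerates pairs of values, multiplying their probabilities and combining the values with +, here is a minimal sketch of the operation (the helper add_pmfs is illustrative, not part of thinkbayes2):

def add_pmfs(pmf1, pmf2):
    """Enumerates pairs of values, combining each pair with `+`
    and multiplying their probabilities."""
    result = Pmf()
    for v1, p1 in pmf1.Items():
        for v2, p2 in pmf2.Items():
            result.Incr(v1 + v2, p1 * p2)
    return result

At the top level the values are Test objects, which get added the same way; at the bottom level the values are the strings sick and notsick, so combining them concatenates the labels (for example, 'sick' + 'sick' becomes 'sicksick'), which is why the outcomes in the next cell are labeled that way.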


In [19]:
convolution = metatest + metatest
convolution.Print()


Pmf({'notsicksick': Fraction(4, 25), 'sicknotsick': Fraction(4, 25), 'notsicknotsick': Fraction(16, 25), 'sicksick': Fraction(1, 25)}) 25/64
Pmf({'notsicksick': Fraction(4, 15), 'sicknotsick': Fraction(2, 15), 'notsicknotsick': Fraction(8, 15), 'sicksick': Fraction(1, 15)}) 15/64
Pmf({'notsicksick': Fraction(2, 15), 'sicknotsick': Fraction(4, 15), 'notsicknotsick': Fraction(8, 15), 'sicksick': Fraction(1, 15)}) 15/64
Pmf({'notsicksick': Fraction(2, 9), 'sicknotsick': Fraction(2, 9), 'notsicknotsick': Fraction(4, 9), 'sicksick': Fraction(1, 9)}) 9/64

Then we can compute the posterior marginal distribution of sick/notsick for the two patients:


In [20]:
marginal = MakeMixture(metatest+metatest)
marginal.Print()


notsicknotsick 9/16
notsicksick 3/16
sicknotsick 3/16
sicksick 1/16

So in Scenario A the probability that both patients are sick is 1/16.

As an aside, we could have computed the marginal distributions first and then convolved them, which is computationally more efficient:


In [21]:
marginal = MakeMixture(metatest) + MakeMixture(metatest)
marginal.Print()


notsicknotsick 9/16
notsicksick 3/16
sicknotsick 3/16
sicksick 1/16

We can confirm that this result is correct by simulation. Here's a generator that generates random pairs of patients:


In [22]:
from random import random

def flip(p):
    """Returns True with probability `p`."""
    return random() < p

def generate_pair_A(p, s, pmf_t):
    """Generates pairs of patients under Scenario A."""
    while True:
        # each patient is sick with probability p
        sick1, sick2 = flip(p), flip(p)
        
        # in Scenario A, t is drawn independently for each patient
        t = pmf_t.Random()
        test1 = flip(s) if sick1 else flip(t)

        t = pmf_t.Random()
        test2 = flip(s) if sick2 else flip(t)

        yield test1, test2, sick1, sick2

And here's a function that runs the simulation for a given number of iterations:


In [23]:
def run_simulation(generator, iters=100000):
    """Runs the simulation and returns the distribution of (sick1, sick2)
    among pairs where both patients test positive."""
    pmf_t = Pmf([0.2, 0.4])
    pair_iterator = generator(0.1, 0.9, pmf_t)

    outcomes = Pmf()
    for i in range(iters):
        test1, test2, sick1, sick2 = next(pair_iterator)
        # keep only the pairs where both tests are positive
        if test1 and test2:
            outcomes[sick1, sick2] += 1

    outcomes.Normalize()
    return outcomes

In [24]:
outcomes = run_simulation(generate_pair_A)
outcomes.Print()


(False, False) 0.5703058787271746
(False, True) 0.1794437167732491
(True, False) 0.1884582787579937
(True, True) 0.06179212574158256

As we increase iters, the probability of (True, True) converges to 1/16, which is what we got from the analysis.

Good so far!