We're going to talk about perhaps the holy grail of probabilistic inference. It's called Bayes Rule. Bayes Rule is named after Reverend Thomas Bayes, who used this principle to argue for the existence of God, and in doing so created a new family of methods that has vastly influenced artificial intelligence and statistics.
Screenshot taken from Udacity
Answer
Screenshot taken from Udacity
Screenshot taken from Udacity
Answer
Screenshot taken from Udacity
The normalization proceeds in two steps. We normalize these two values so that their ratio stays the same but they add up to 1. So let's first compute the sum of the two.
Answer
Screenshot taken from Udacity
And now, finally, we come up with the actual posterior; the quantity over here is often called the joint probability of the two events. The posterior is obtained by dividing this joint probability by the normalizer. So let's do that over here: divide the joint by the normalizer to get the posterior probability of having cancer given that I received the positive test result.
Answer
Screenshot taken from Udacity
Let's do the same for the non-cancer version: take the corresponding joint probability over here and divide it by the same normalizer.
Answer
Screenshot taken from Udacity
Why don't you take a second, add these two numbers, and give me the result?
Screenshot taken from Udacity
Screenshot taken from Udacity
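Before moving on, here is a minimal sketch in Python of the joint-then-normalize recipe above, built around a hypothetical helper `bayes_posterior`. The concrete numbers live in the screenshots, so the values used below (a 1% prior, 90% sensitivity, 90% specificity) are assumptions for illustration only.

```python
# A minimal sketch of the joint-then-normalize recipe, assuming
# illustrative values (1% prior, 90% sensitivity, 90% specificity);
# the lecture's actual numbers are in the screenshots.

def bayes_posterior(prior, sensitivity, specificity, test_positive=True):
    """Return (P(C | result), P(not C | result)) for a binary test."""
    if test_positive:
        joint_c = prior * sensitivity                   # P(C, Pos)
        joint_not_c = (1 - prior) * (1 - specificity)   # P(not C, Pos)
    else:
        joint_c = prior * (1 - sensitivity)             # P(C, Neg)
        joint_not_c = (1 - prior) * specificity         # P(not C, Neg)
    normalizer = joint_c + joint_not_c                  # P(result)
    return joint_c / normalizer, joint_not_c / normalizer

p_c, p_not_c = bayes_posterior(0.01, 0.9, 0.9, test_positive=True)
print(p_c, p_not_c)    # ~0.0833 and ~0.9167
print(p_c + p_not_c)   # 1.0: the posteriors add up to 1
```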
Now, the same algorithm works if your test comes back negative. Suppose your test result says negative. You can still ask the same question:
Screenshot taken from Udacity
We begin with our prior probability, our sensitivity, and our specificity, and I want you to begin by filling in all the missing values. So, there's the probability of no cancer, the probability of a negative result (the negation of a positive) given C, and the probability of a negative result given not C.
Answer
Screenshot taken from Udacity
Now assume the test comes back negative; the same logic applies as before. So please give me the combined probability of cancer given the negative test result, and the combined probability of being cancer-free given the negative test result.
Answer
Screenshot taken from Udacity
Let's compute the normalizer. By now you remember what this was.
Answer
Screenshot taken from Udacity
Now, finally, tell me the posterior probability of cancer given that we had a negative test result, and the probability of not having cancer given a negative test result.
Answer
Screenshot taken from Udacity
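Reusing the `bayes_posterior` sketch from above, the negative-test case is just a different flag (values again assumed, not the screenshots' exact figures):

```python
# Negative test result, same assumed numbers as in the sketch above.
p_c, p_not_c = bayes_posterior(0.01, 0.9, 0.9, test_positive=False)
print(p_c)       # ~0.0011: a negative test makes cancer even less likely
print(p_not_c)   # ~0.9989
```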
Let me now make your life harder. Suppose the probability of a certain other kind of disease is 0.1, so 10% of the population has it. Our test is really informative in the positive case, but there's only a 0.5 chance that if I'm cancer-free the test indeed says so. So the sensitivity is high, but the specificity is lower. Let's start by filling in the first three values.
Answer
Screenshot taken from Udacity
What is P(C, Neg)?
Answer
Screenshot taken from Udacity
And what is the same for P(¬C, Neg)?
Answer
Screenshot taken from Udacity
What is P(Neg)?
Answer
Screenshot taken from Udacity
So tell me what the final two numbers are.
Answer
Screenshot taken from Udacity
Let's now consider the case where the test result is positive, and I want you to give me just the two numbers over here and not the other ones.
Answer
Screenshot taken from Udacity
Screenshot taken from Udacity
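The same sketch covers this harder variant. The 0.1 prior and 0.5 specificity are given in the text; the 0.9 sensitivity is an assumed stand-in for the "really informative" value in the screenshots.

```python
# Harder variant: prior 0.1 and specificity 0.5 are given in the text;
# the sensitivity of 0.9 is an assumed stand-in for the screenshot value.
print(bayes_posterior(0.1, 0.9, 0.5, test_positive=False))
# (~0.0217, ~0.9783): P(C | Neg) and P(not C | Neg)
print(bayes_posterior(0.1, 0.9, 0.5, test_positive=True))
# (~0.1667, ~0.8333): P(C | Pos) and P(not C | Pos)
```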
Now, I should say, if you got this, you won't have any immediate significant difficulty with statistics and probability. This is totally nontrivial, but it comes in very handy.
Answer
In this example, it gives us funny numbers.
Break down by steps
- P(at R, see R) = P(at R) x P(see R|at R) = 0.5 * 0.8 = 0.4
- P(at G, see R) = P(at G) x P(see R|at G) = 0.5 * 0.2 = 0.1
- P(see R) = P(at R, see R) + P(at G, see R) = 0.5
- P(at R|see R) = P(at R, see R)/P(see R) = 0.4/0.5 = 0.8
- P(at G|see R) = P(at G, see R)/P(see R) = 0.1/0.5 = 0.2
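The same recipe in code, reproducing the numbers from the breakdown above (nothing assumed here; the inputs are exactly the ones listed):

```python
# Red/green localization: joints, normalizer, then posteriors.
prior = {"R": 0.5, "G": 0.5}        # P(at cell)
p_see_red = {"R": 0.8, "G": 0.2}    # P(see R | at cell)

joint = {c: prior[c] * p_see_red[c] for c in prior}    # {'R': 0.4, 'G': 0.1}
normalizer = sum(joint.values())                       # P(see R) = 0.5
posterior = {c: joint[c] / normalizer for c in joint}
print(posterior)                                       # {'R': 0.8, 'G': 0.2}
```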
Screenshot taken from Udacity
If I now change some parameters--say the robot knows it is not in the red cell, and therefore the prior probability of 1 is under the green cell. Please calculate once again, using Bayes Rule, these posteriors. I have to warn you--this is a bit of a tricky case.
Answer
Screenshot taken from Udacity
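To see why it's tricky, assume (as the question suggests) a prior of 0 on the red cell and 1 on the green cell; the same recipe then gives:

```python
# Tricky case: the prior already rules out the red cell (assumed 0/1 prior).
prior = {"R": 0.0, "G": 1.0}
p_see_red = {"R": 0.8, "G": 0.2}
joint = {c: prior[c] * p_see_red[c] for c in prior}   # {'R': 0.0, 'G': 0.2}
normalizer = sum(joint.values())                      # 0.2
print({c: joint[c] / normalizer for c in joint})      # {'R': 0.0, 'G': 1.0}
# A zero prior can never be overturned, no matter what the sensor says.
```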
Let's change this example even further: make this value over here a 0.5 and revert to a uniform prior. Please go ahead and calculate the posterior probability.
Answer
Break down by steps
- P(at R, see R) = P(at R) x P(see R|at R) = 0.5 * 0.8 = 0.4
- P(at G, see R) = P(at G) x P(see R|at G) = 0.5 * 0.5 = 0.25
- P(see R) = P(at R, see R) + P(at G, see R) = 0.65
- P(at R|see R) = P(at R, see R)/P(see R) = 0.4/0.65 ≈ 0.615
- P(at G|see R) = P(at G, see R)/P(see R) = 0.25/0.65 ≈ 0.385
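A quick check of these numbers in the same style:

```python
# G's likelihood raised to 0.5, uniform prior, as in the breakdown above.
prior = {"R": 0.5, "G": 0.5}
p_see_red = {"R": 0.8, "G": 0.5}
joint = {c: prior[c] * p_see_red[c] for c in prior}   # {'R': 0.4, 'G': 0.25}
normalizer = sum(joint.values())                      # 0.65
print({c: round(joint[c] / normalizer, 3) for c in joint})
# {'R': 0.615, 'G': 0.385}
```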
Screenshot taken from Udacity
Answer
Screenshot taken from Udacity
What's the joint for Cell B?
Answer
Screenshot taken from Udacity
Finally, the probability of C and Red. What is that?
Answer
Screenshot taken from Udacity
What is our normalizer?
Answer
Screenshot taken from Udacity
And now we calculate the desired posterior probability for all 3 possible outcomes.
Answer
Screenshot taken from Udacity
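The pattern generalizes to any number of cells. Here is a sketch with a hypothetical `posterior` helper and placeholder numbers; the lecture's actual priors and likelihoods for cells A, B, and C are in the screenshots.

```python
# Generic version for any number of hypotheses. The three-cell numbers
# below are placeholders, not the lecture's actual values.
def posterior(prior, likelihood):
    joint = {h: prior[h] * likelihood[h] for h in prior}
    normalizer = sum(joint.values())
    return {h: joint[h] / normalizer for h in joint}

print(posterior(prior={"A": 1/3, "B": 1/3, "C": 1/3},
                likelihood={"A": 0.9, "B": 0.5, "C": 0.1}))  # P(Red | cell), assumed
```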
So what have you learned?
Answer
Break down by steps
- P(home, rain) = P(home) x P(rain|home) = 0.4 * 0.01 = 0.004
- P(gone, rain) = P(gone) x P(rain|gone) = 0.6 * 0.3 = 0.18
- P(rain) = P(home, rain) + P(gone, rain) = 0.184
- P(home|rain) = P(home, rain)/P(rain) = 0.004/0.184 ≈ 0.0217
- P(gone|rain) = P(gone, rain)/P(rain) = 0.18/0.184 ≈ 0.978
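These numbers come straight from the breakdown; as a final check in code:

```python
# Rain example: P(home) = 0.4, P(rain | home) = 0.01,
# P(gone) = 0.6, P(rain | gone) = 0.3, all from the breakdown above.
prior = {"home": 0.4, "gone": 0.6}
p_rain = {"home": 0.01, "gone": 0.3}
joint = {s: prior[s] * p_rain[s] for s in prior}   # {'home': 0.004, 'gone': 0.18}
normalizer = sum(joint.values())                   # P(rain) = 0.184
print({s: round(joint[s] / normalizer, 4) for s in joint})
# {'home': 0.0217, 'gone': 0.9783}
```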
Screenshot taken from Udacity