Imagine that you are at a casino, and you find an unbalanced roulette wheel that looks like the following:

[Image in Blog Post]

You notice a sign next to the wheel that says the price per spin is 50 cents. Instead of selecting a colored area and receiving a payout only if you made the correct choice, this game always gives a payout, based on which area the ball lands on. Each of the colored areas gives the following payout:

  1. 1 dollar
  2. 30 cents
  3. 50 cents
  4. 20 cents

As you can see, if the ball lands on area 2 or 4, you'll receive less money than you started with. If it lands on area 3, you'll get your money back, and if it lands on 1, you'll receive twice the money you started with.

Luckily, you happen to have $500 to spend on this game, and you're willing to try your luck for 1000 spins. You're curious what the expected payoff of the game will be. It's clear that each area has a different probability of being selected each time, but the sign didn't mention exactly what the probabilities are. Sure, you could make some guesses about the probability based on the size of each area, but where's the fun in that?

Let's run this problem as a Monte Carlo simulation:


In [1]:
import random
from numba import jit

# Monte Carlo simulation function. This is defined as a
# function so the numba library can be used to speed up
# execution. Otherwise, this would run much slower.
# p1 is the probability of the first area, s1 is the
# score (payout) of the first area, and so on. The
# probabilities are compared cumulatively, so p1 through
# p4 should sum to 1.
@jit
def MCHist(n_hist, p1, s1, p2, s2, p3, s3, p4, s4):
    money = 0.0
    for _ in range(int(n_hist)):
        x = random.random()
        if x <= p1:
            money += s1
        elif x <= (p1 + p2):
            money += s2
        elif x <= (p1 + p2 + p3):
            money += s3
        else:  # the remaining probability mass belongs to area 4
            money += s4
    return money

# Run the simulation with 1000 histories (spins). Don't
# cheat and look at these probabilities!! "You" don't
# know them yet.
num_hist = 1e3 # 1000 spins at 50 cents each = $500
results = MCHist(num_hist, 0.05, 1, 0.3, 0.3, 0.15, 0.5, 0.5, 0.2) 
payout = round(results / num_hist, 3)
print('Expected payout per spin is ${}'.format(payout))


Expected payout per spin is $0.327

After spending a significant amount of time spinning the wheel, you feel a little unsatisfied. Sure, you found the expected payout, but there's a nagging feeling that your answer might have been much more accurate if you just had more money to spend. In fact, you're not even completely sure that the last two digits are correct (see the N^(-1/2) rule from the previous blog post).
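
As an aside, you can watch the N^(-1/2) rule in action by replaying the game many times at two different sizes. The sketch below reuses MCHist and the wheel's true probabilities (which "you" at the table don't know), and the choice of 50 replays is arbitrary:

from statistics import pstdev

# Replay the game 50 times each at 1,000 and 100,000 spins. Per the
# N^(-1/2) rule, the spread of the estimates should shrink by roughly
# a factor of 10 when the number of spins grows by a factor of 100.
for n in (1000, 100000):
    estimates = [MCHist(n, 0.05, 1, 0.3, 0.3, 0.15, 0.5, 0.5, 0.2) / n
                 for _ in range(50)]
    print(n, 'spins -> std of estimate:', round(pstdev(estimates), 4))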

Let's see what the answer would have been if you instead had $50 million to spend:


In [2]:
num_hist2 = 1e8 # $50 million
results2 = MCHist(num_hist2, 0.05, 1, 0.3, 0.3, 0.15, 0.5, 0.5, 0.2) 
payout2 = round(results2 / num_hist2, 3)
print('Expected payout per spin is ${}'.format(payout2))


Expected payout per spin is $0.315

Much more satisfying. You're now fairly confident of the answer to within about a hundredth of a cent. Unfortunately, you don't actually have $50 million to spend (and you certainly don't have the time for 100 million spins). Is there a way to increase the accuracy (i.e. reduce the variance) of the simulation without spending more money (i.e. requiring more histories)?

Fortunately, there is a family of Monte Carlo techniques called "variance reduction" methods that can be used for exactly this purpose. In this notebook, we will examine one of them, called "weight correction". Essentially, instead of sampling values from the original probability distribution, we will sample them from a new probability distribution that we specify.

This is similar to the idea of "class balancing" in classification. We can choose the new probability distribution so that the (originally) less likely outcomes are selected more often, which effectively decreases the variance of the final result. Once we choose a new probability distribution, we must adjust the weight of each outcome according to the following formula so that the results remain unbiased:

weight_correction = p(x) / p'(x)

where p(x) is the original probability distribution and p'(x) is the new probability distribution. This formula is often remembered with the mnemonic "shoulda over did": the probability we should have sampled from, divided by the probability we actually did sample from. Be careful that every value of x that is possible under the original distribution remains possible under the new one; otherwise, some outcomes could never be sampled and the estimate would be biased.
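
In code, the correction is a one-liner. Here's a minimal sketch (the function name and example values are just for illustration):

def weight_correction(p_orig, p_new):
    # "Shoulda over did": the probability we should have sampled from,
    # divided by the probability we actually did sample from.
    return p_orig / p_new

# An outcome with true probability 0.05 that we force to occur a quarter
# of the time gets its payout scaled down by a factor of 0.2:
print(weight_correction(0.05, 0.25))  # 0.2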

Let's examine this in practice. What if we choose a probability distribution such that each of the areas on the roulette wheel is equally likely to be chosen? Assume that the roulette operator is nice and allows you to select the results of the roulette so this situation could actually occur in real life. After all, no matter what, the house still wins, right?

The operator must also tell you the probabilities for this method to work. Note that you can calculate the analytic expected value now:

(0.05)(1) + (0.3)(0.3) + (0.15)(0.5) + (0.5)(0.2) = 0.315
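
You can double-check that arithmetic in one line:

# Analytic expected payout of the original wheel:
print(round(0.05 * 1 + 0.3 * 0.3 + 0.15 * 0.5 + 0.5 * 0.2, 3))  # 0.315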

But you're still curious. Will your "weight correction" method work?

For each of the areas, we can find the weight by dividing P_Orig by P_New:

| Area | P_Orig | Score_Orig | P_New | Weight | Score_New |
|------|--------|------------|-------|--------|-----------|
| 1    | 0.05   | 1.0        | 0.25  | 0.2    | 0.2       |
| 2    | 0.30   | 0.3        | 0.25  | 1.2    | 0.36      |
| 3    | 0.15   | 0.5        | 0.25  | 0.6    | 0.3       |
| 4    | 0.50   | 0.2        | 0.25  | 2.0    | 0.4       |
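
Before spending any money on the modified game, it's worth checking that it's unbiased: the expected payout under the new distribution, using the corrected scores, should still equal the original game's. A quick sketch:

# Expected payout of the weight-corrected game: sum of P_New * Score_New.
p_new      = [0.25, 0.25, 0.25, 0.25]
scores_new = [0.2, 0.36, 0.3, 0.4]
print(round(sum(p * s for p, s in zip(p_new, scores_new)), 3))  # 0.315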

Let's rerun the Monte Carlo analysis, using the new probability distribution and the new scoring system, but still using only $500 (1000 spins):


In [3]:
num_hist3 = 1e3 # $500
results3 = MCHist(num_hist3, 0.25, 0.2, 0.25, 0.36, 0.25, 0.3, 0.25, 0.4) 
payout3 = round(results3 / num_hist3, 5)
print('Expected payout per spin is ${}'.format(payout3))


Expected payout per spin is $0.31794

Based on the simulation above, we got an answer that was closer to the "true" value of 0.315, even though we used the same number of spins (1000). (Since Monte Carlo simulation is stochastic, it's possible that re-running this notebook produces an answer that's farther from the true value, but the weight correction method will consistently give answers that cluster more tightly around it.)
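
To see the variance reduction directly, rather than from a single lucky draw, you can replay both 1000-spin games many times and compare the spread of their estimates (a sketch; the 200 replays are an arbitrary choice):

from statistics import pstdev

# Replay each 1000-spin game 200 times and compare the spread of the
# per-spin payout estimates around the true value of 0.315. The
# weight-corrected game should show a noticeably smaller spread.
orig = [MCHist(1000, 0.05, 1, 0.3, 0.3, 0.15, 0.5, 0.5, 0.2) / 1000
        for _ in range(200)]
corr = [MCHist(1000, 0.25, 0.2, 0.25, 0.36, 0.25, 0.3, 0.25, 0.4) / 1000
        for _ in range(200)]
print('std of original game:        ', round(pstdev(orig), 4))
print('std of weight-corrected game:', round(pstdev(corr), 4))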

However, the results aren't perfect. Can we improve the selection of P_New? It turns out that we can, if we select each P_New according to the following formula:

P_New = (P_Orig)(Score_Orig) / (TotalP_New)

where TotalP_New is the sum of (P_Orig)(Score_Orig) over all choices. In our case, that sum is exactly the analytic expected value we calculated earlier: 0.315.
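
Computing the new distribution, its weights, and the corrected scores takes only a few lines (a sketch using the values above):

# Build the improved distribution: P_New proportional to P_Orig * Score_Orig.
p_orig = [0.05, 0.30, 0.15, 0.50]
scores = [1.0, 0.3, 0.5, 0.2]

total = sum(p * s for p, s in zip(p_orig, scores))  # 0.315
p_new = [p * s / total for p, s in zip(p_orig, scores)]
weights = [p / q for p, q in zip(p_orig, p_new)]
scores_new = [w * s for w, s in zip(weights, scores)]

print([round(p, 3) for p in p_new])       # [0.159, 0.286, 0.238, 0.317]
print([round(s, 3) for s in scores_new])  # [0.315, 0.315, 0.315, 0.315]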

Here's the table for our roulette example:

| Area | P_Orig | Score_Orig | P_New               | Weight | Score_New |
|------|--------|------------|---------------------|--------|-----------|
| 1    | 0.05   | 1.0        | 0.050/0.315 = 0.159 | 0.315  | 0.315     |
| 2    | 0.30   | 0.3        | 0.090/0.315 = 0.286 | 1.049  | 0.315     |
| 3    | 0.15   | 0.5        | 0.075/0.315 = 0.238 | 0.630  | 0.315     |
| 4    | 0.50   | 0.2        | 0.100/0.315 = 0.317 | 1.577  | 0.315     |

What a strange coincidence. Our scores are now all equal to the same value, which just happens to be the analytical solution that we calculated earlier.

As you might expect, if we run the Monte Carlo simulation again, no matter which area is chosen, we get the same score, so the only possible result is 0.315:


In [4]:
num_hist4 = 1e3 # $500
results4 = MCHist(num_hist4, 0.159, 0.315, 0.286, 0.315, 0.238, 0.315, 0.317, 0.315) 
payout4 = round(results4 / num_hist4, 3)
print('Expected payout per spin is ${}'.format(payout4))


Expected payout per spin is $0.315

For this particular example, weight correction may not seem useful. After all, you could solve the problem analytically, right? However, Monte Carlo simulation is often used for problems that cannot be solved analytically (we'll examine one of those situations in the following blog post). For those types of problems, weight correction is often a useful tool.