PROBABILITY

This IJulia notebook acts as supporting material for Chapter 13 (Quantifying Uncertainty), Chapter 14 (Probabilistic Reasoning), and Chapter 15 (Probabilistic Reasoning over Time) of the book Artificial Intelligence: A Modern Approach. It makes use of the implementations in the probability.jl module. Let's get started.


In [1]:
include("aimajulia.jl");

using aimajulia;

Probability Distribution

Let us begin by specifying discrete probability distributions. ProbabilityDistribution defines a discrete probability distribution. We name our random variable and then assign probabilities to its different values. Assigning probabilities works like indexing a dictionary: the key is a value of the random variable and the entry is its probability.


In [2]:
p = ProbabilityDistribution(variable_name="Flip");
p['H'], p['T'] = 0.25, 0.75;
p['T']


Out[2]:
0.75

The first parameter of the constructor, variable_name, has a default value of "?", so if no name is passed the variable is simply called ?. The keyword argument frequencies can be a dictionary mapping each value of the random variable to its frequency. These frequencies are then normalized with the normalize method so that the probabilities sum to 1.


In [3]:
p = ProbabilityDistribution(frequencies=Dict("low"=> 125, "medium"=> 375, "high"=> 500));
p.variable_name


Out[3]:
"?"

In [4]:
(p["low"], p["medium"], p["high"])


Out[4]:
(0.125, 0.375, 0.5)

It also separately keeps track of all the values of the distribution in an array called values. Every time a new value is assigned a probability, it is appended to this array.


In [5]:
p.values


Out[5]:
3-element Array{Float64,1}:
 375.0
 500.0
 125.0

The distribution is not normalized by default when values are added incrementally. We can force normalization by invoking the normalize method.


In [6]:
p = ProbabilityDistribution(variable_name="Y");
p["Cat"] = 50;
p["Dog"] = 114;
p["Mice"] = 64;
(p["Cat"], p["Dog"], p["Mice"])


Out[6]:
(50, 114, 64)

In [7]:
normalize(p);
(p["Cat"], p["Dog"], p["Mice"])


Out[7]:
(0.21929824561403508, 0.5, 0.2807017543859649)

It is also possible to display the values rounded to a fixed number of decimal places using the show_approximation method.


In [8]:
show_approximation(p)


Out[8]:
"Cat: 0.2193, Dog: 0.5, Mice: 0.2807"

Joint Probability Distribution

The helper function event_values returns a tuple of the values of the variables in an event. An event is specified by a dict whose keys are variable names and whose values are the corresponding variable values. The variables of interest are specified as a list, and the ordering of the returned tuple matches the ordering of that list.

Alternatively, if the event is already given as a list or tuple of the same length as the variables list, it is returned unchanged.


In [9]:
event = Dict("A"=> 10, "B"=> 9, "C"=> 8);
variables = ["C", "A"];
event_values(event, variables)


Out[9]:
(8, 10)
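
Since the alternative form simply passes the event through, a tuple of the same length as the variables list should come back unchanged. A minimal sketch of that behaviour (not run as a cell above; assuming event_values works as described):

event_values((8, 10), variables)   # with variables = ["C", "A"], this should return (8, 10) unchanged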

A probability model is completely determined by the joint distribution for all of the random variables (Section 13.3). The probability module implements this as JointProbabilityDistribution. This struct specifies a discrete probability distribution over a set of variables.

A value of a joint distribution is an ordered tuple in which each item corresponds to the value associated with a particular variable. For a joint distribution of X and Y, where X and Y take integer values, this can be something like (18, 19).

To specify a Joint distribution we first need an ordered list of variables.


In [10]:
variables = ["X", "Y"];
j = JointProbabilityDistribution(variables)


Out[10]:
aimajulia.JointProbabilityDistribution(String["X", "Y"], Dict{Any,Any}(), Dict{Any,AbstractArray{T,1} where T}())

In [11]:
setindex!(j, 0.2, (1,1));
setindex!(j, 0.5, Dict("X"=> 0, "Y"=> 1));

(getindex(j, (1,1)), getindex(j, Dict("X"=> 0, "Y"=> 1)))


Out[11]:
(0.2, 0.5)

It is also possible to list all the values recorded for a particular variable via the values field.


In [12]:
j.values["X"]


Out[12]:
2-element Array{Int64,1}:
 1
 0

Inference Using Full Joint Distributions

In this section we use full joint distributions to calculate the posterior distribution given some evidence. We represent evidence as a dictionary whose keys are variable names and whose values are the observed values.

This is illustrated in Section 13.3 of the book. The functions enumerate_joint and enumerate_joint_ask implement this functionality. Under the hood they implement Equation 13.9 from the book.

$$\textbf{P}(X | \textbf{e}) = α \textbf{P}(X, \textbf{e}) = α \sum_{y} \textbf{P}(X, \textbf{e}, \textbf{y})$$

Here α is the normalizing factor, X is our query variable, and e is the evidence. According to the equation we sum over all possible combinations of the remaining (hidden) variables y, i.e. those that appear neither in the evidence nor as the query variable.
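
For example, for the dental domain used below, with query variable Cavity, evidence toothache, and hidden variable Catch, the equation unrolls to (Equation 13.10 in the book):

$$\textbf{P}(Cavity \mid toothache) = α\, [\textbf{P}(Cavity, toothache, catch) + \textbf{P}(Cavity, toothache, \lnot catch)]$$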

We will be using the same example as the book. Let us create the full joint distribution from Figure 13.3.


In [13]:
full_joint = JointProbabilityDistribution(["Cavity", "Toothache", "Catch"]);
full_joint[Dict("Cavity"=> true, "Toothache"=> true, "Catch"=> true)] = 0.108;
full_joint[Dict("Cavity"=> true, "Toothache"=> true, "Catch"=> false)] = 0.012;
full_joint[Dict("Cavity"=> true, "Toothache"=> false, "Catch"=> true)] = 0.016;
full_joint[Dict("Cavity"=> true, "Toothache"=> false, "Catch"=> false)] = 0.064;
full_joint[Dict("Cavity"=> false, "Toothache"=> true, "Catch"=> true)] = 0.072;
full_joint[Dict("Cavity"=> false, "Toothache"=> false, "Catch"=> true)] = 0.144;
full_joint[Dict("Cavity"=> false, "Toothache"=> true, "Catch"=> false)] = 0.008;
full_joint[Dict("Cavity"=> false, "Toothache"=> false, "Catch"=> false)] = 0.576;

Let us now look at the enumerate_joint function. It returns the sum of those entries in P consistent with e, provided variables is P's remaining variables (the ones not in e). Here P refers to the full joint distribution. The function is implemented recursively: the first parameter, variables, holds the remaining variables, and each recursive call keeps one variable constant while varying the others.

Let us assume we want to find P(Toothache=True). This can be obtained by marginalization (Equation 13.6). We can use enumerate_joint to solve for this by taking Toothache=True as our evidence. enumerate_joint will return the sum of probabilities consistent with the evidence, i.e. the marginal probability.


In [14]:
evidence = Dict("Toothache"=> true);
variables = ["Cavity", "Catch"]; # variables not part of evidence
ans1 = enumerate_joint(variables, evidence, full_joint);
ans1


Out[14]:
0.19999999999999998
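
As a quick check, this is the sum of the four joint entries that have Toothache=true:

$$0.108 + 0.012 + 0.072 + 0.008 = 0.2$$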

This matches what we get directly from our definition of the full joint distribution. We can use the same function to find more complex probabilities like P(Cavity=True and Toothache=True).


In [15]:
evidence = Dict("Cavity"=> true, "Toothache"=> true);
variables = ["Catch"]; # variables not part of evidence
ans2 = enumerate_joint(variables, evidence, full_joint);
ans2


Out[15]:
0.12

Being able to find sum of probabilities satisfying given evidence allows us to compute conditional probabilities like P(Cavity=True | Toothache=True) as we can rewrite this as $$P(Cavity=True | Toothache = True) = \frac{P(Cavity=True\&Toothache=True)}{P(Toothache=True)}$$ We have already calculated both the numerator and denominator.


In [16]:
ans2 / ans1


Out[16]:
0.6

We might be interested in the probability distribution of a particular variable conditioned on some evidence. This can involve doing calculations like the one above for each possible value of the variable. This has been implemented slightly differently, using normalization, in the function enumerate_joint_ask, which returns a probability distribution over the values of the variable X, given the Dict(var=>val) observations e, in the JointProbabilityDistribution P. The implementation calls enumerate_joint once for each value xi of the query variable, passing evidence extended with X = xi, and then normalizes the resulting distribution. Let us find P(Cavity | Toothache=True) using enumerate_joint_ask.


In [17]:
query_variable = "Cavity";
evidence = Dict("Toothache"=> true);
answer = enumerate_joint_ask(query_variable, evidence, full_joint);
(answer[true], answer[false])


Out[17]:
(0.6, 0.39999999999999997)

You can verify that the first value is the same as we obtained earlier by manual calculation.
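
The second value can be checked the same way: the entries with Cavity=false and Toothache=true sum to 0.072 + 0.008 = 0.08, and normalizing gives

$$\textbf{P}(Cavity \mid Toothache=true) = α\, \langle 0.12, 0.08 \rangle = \langle 0.6, 0.4 \rangle$$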

Bayesian Networks

A Bayesian network is a representation of the joint probability distribution encoding a collection of conditional independence statements.

A Bayes network is implemented as the struct BayesianNetwork. It consists of a collection of nodes implemented by the struct BayesianNetworkNode. The implementation in the above-mentioned structs focuses only on boolean variables. Each node is associated with a variable and contains a conditional probability table (cpt). The cpt represents the probability distribution of the variable conditioned on its parents, P(X | parents).

Let us dive into the BayesianNetworkNode implementation. The struct takes in the name of the variable, its parents, and a cpt. Here variable is the name of the variable, like 'Earthquake'. parents should be a list, or a space-separated string, of the parents' variable names. The conditional probability table is a dict of the form Dict((v1, v2, ...)=> p, ...), representing the distribution P(X=true | parent1=v1, parent2=v2, ...) = p. The keys are the combinations of boolean values that the parents take; their length and order must match the supplied parent list/string. In all cases the probability of X being false is left implicit, since it follows from P(X=true).

The example below where we implement the network shown in Figure 14.3 of the book will make this more clear.

The alarm node can be made as follows:


In [18]:
alarm_node = BayesianNetworkNode("Alarm", ["Burglary", "Earthquake"], 
                                 Dict((true, true)=> 0.95,
                                      (true, false)=> 0.94,
                                      (false, true)=> 0.29,
                                      (false, false)=> 0.001));

It is possible to avoid using a tuple when there is only a single parent. So an alternative format for the cpt is


In [19]:
john_node = BayesianNetworkNode("JohnCalls", ["Alarm"], Dict(true=> 0.90, false=> 0.05));
mary_node = BayesianNetworkNode("MaryCalls", "Alarm", Dict((true, )=> 0.70, (false, )=> 0.01));

The general format used for the alarm node always holds. For nodes with no parents we can also simply pass a single probability:


In [20]:
burglary_node = BayesianNetworkNode("Burglary", "", 0.001);
earthquake_node = BayesianNetworkNode("Earthquake", "", 0.002);

It is possible to use a node for lookups via the probability function. It takes three arguments: a BayesianNetworkNode, a value, and an event. The event must be a dict of the form Dict(variable=> value, ...). The value corresponds to the value of the variable we are interested in (false or true). The function returns the conditional probability P(X=value | parents=parent_values), where parent_values are the values of the parents in the event (the event must assign each parent a value).


In [21]:
probability(john_node, false, Dict("Alarm"=> true, "Burglary"=> true)) # P(JohnCalls=False | Alarm=True)


Out[21]:
0.09999999999999998
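
Using the cpt we defined for the alarm node earlier, we would similarly expect the following lookup to return 0.94 (an extra illustration, not run as a cell above):

probability(alarm_node, true, Dict("Burglary"=> true, "Earthquake"=> false))   # expected: 0.94, from the (true, false) cpt entry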

With all the information about nodes in place, it is possible to construct a Bayes network using BayesianNetwork. The BayesianNetwork struct does not take nodes as input; instead it takes a list of node_specs. An entry in node_specs is a tuple of the parameters we would use to construct a BayesianNetworkNode, namely (X, parents, cpt). node_specs must be ordered with parents before children (see the sketch below).
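
The burglary_network used below is available once aimajulia is loaded. A minimal sketch of how such a network could be assembled from node_specs, assuming the constructor accepts the (X, parents, cpt) tuples described above (hypothetical, not run as a cell here):

network_sketch = BayesianNetwork([("Burglary", "", 0.001),
                                  ("Earthquake", "", 0.002),
                                  ("Alarm", "Burglary Earthquake",
                                   Dict((true, true)=> 0.95, (true, false)=> 0.94,
                                        (false, true)=> 0.29, (false, false)=> 0.001)),
                                  ("JohnCalls", "Alarm", Dict(true=> 0.90, false=> 0.05)),
                                  ("MaryCalls", "Alarm", Dict(true=> 0.70, false=> 0.01))]);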


In [22]:
burglary_network


Out[22]:
aimajulia.BayesianNetwork(Any["Burglary", "Earthquake", "Alarm", "JohnCalls", "MaryCalls"], aimajulia.BayesianNetworkNode[aimajulia.BayesianNetworkNode("Burglary", String[], Dict(()=>0.001), Any[aimajulia.BayesianNetworkNode("Alarm", String["Burglary", "Earthquake"], Dict((false, false)=>0.001,(true, false)=>0.94,(false, true)=>0.29,(true, true)=>0.95), Any[aimajulia.BayesianNetworkNode("JohnCalls", String["Alarm"], Dict((false,)=>0.05,(true,)=>0.9), Any[]), aimajulia.BayesianNetworkNode("MaryCalls", String["Alarm"], Dict((false,)=>0.01,(true,)=>0.7), Any[])])]), aimajulia.BayesianNetworkNode("Earthquake", String[], Dict(()=>0.002), Any[aimajulia.BayesianNetworkNode("Alarm", String["Burglary", "Earthquake"], Dict((false, false)=>0.001,(true, false)=>0.94,(false, true)=>0.29,(true, true)=>0.95), Any[aimajulia.BayesianNetworkNode("JohnCalls", String["Alarm"], Dict((false,)=>0.05,(true,)=>0.9), Any[]), aimajulia.BayesianNetworkNode("MaryCalls", String["Alarm"], Dict((false,)=>0.01,(true,)=>0.7), Any[])])]), aimajulia.BayesianNetworkNode("Alarm", String["Burglary", "Earthquake"], Dict((false, false)=>0.001,(true, false)=>0.94,(false, true)=>0.29,(true, true)=>0.95), Any[aimajulia.BayesianNetworkNode("JohnCalls", String["Alarm"], Dict((false,)=>0.05,(true,)=>0.9), Any[]), aimajulia.BayesianNetworkNode("MaryCalls", String["Alarm"], Dict((false,)=>0.01,(true,)=>0.7), Any[])]), aimajulia.BayesianNetworkNode("JohnCalls", String["Alarm"], Dict((false,)=>0.05,(true,)=>0.9), Any[]), aimajulia.BayesianNetworkNode("MaryCalls", String["Alarm"], Dict((false,)=>0.01,(true,)=>0.7), Any[])])

The BayesianNetwork method variable_node gives access to the BayesianNetworkNode instances inside a Bayes net. It is also possible to modify the cpt of a node directly through this method (a small sketch follows the next two cells).


In [23]:
typeof(variable_node(burglary_network, "Alarm"))


Out[23]:
aimajulia.BayesianNetworkNode

In [24]:
variable_node(burglary_network, "Alarm").cpt


Out[24]:
Dict{Tuple{Bool,Bool},Float64} with 4 entries:
  (false, false) => 0.001
  (true, false)  => 0.94
  (false, true)  => 0.29
  (true, true)   => 0.95
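
Since variable_node returns the node itself, its cpt can be edited in place. A hypothetical one-liner (shown here re-assigning the value the entry already has, so the network is left unchanged):

variable_node(burglary_network, "Alarm").cpt[(true, true)] = 0.95;   # direct cpt modification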

Exact Inference in Bayesian Networks

A Bayes network is a more compact representation of the full joint distribution and, like the full joint distribution, allows us to do inference, i.e. answer questions about the probability distributions of random variables given some evidence.

Exact algorithms don't scale well for larger networks. Approximate algorithms are explained in the next section.

Inference by Enumeration

We apply techniques similar to those used for enumerate_joint_ask and enumerate_joint to draw inference from Bayesian Networks. enumeration_ask and enumerate_all implement the algorithm described in Figure 14.9 of the book.

$\textbf{P}(X | \textbf{e}) = α \textbf{P}(X, \textbf{e}) = α \sum_{y} \textbf{P}(X, \textbf{e}, \textbf{y})$

enumeration_ask calls enumerate_all on each value of query variable X and finally normalizes them.

Let us solve the problem of finding P(Burglary=True | JohnCalls=True, MaryCalls=True) using the burglary network. enumeration_ask takes three arguments: X, the query variable name; e, the evidence (as a dict, as previously explained); and bn, the Bayes net to do inference on.


In [25]:
ans_dist = enumeration_ask("Burglary", Dict("JohnCalls"=> true, "MaryCalls"=> true), burglary_network);
ans_dist[true]


Out[25]:
0.2841718353643929

Variable Elimination

The enumeration algorithm can be improved substantially by eliminating repeated calculations. In enumeration we effectively build the joint over all hidden variables, which is exponential in the number of hidden variables. Variable elimination instead interleaves joining and marginalization.

Before we look into the implementation of Variable Elimination we must first familiarize ourselves with Factors.

In general we call a multidimensional array of the form P(Y1 ... Yn | X1 ... Xm) a factor, where some of the Xs and Ys may be assigned values. Factors are implemented in the probability module; they take as input a list of variables and a cpt.
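
For the burglary query used earlier, the book (Section 14.4) rewrites the enumeration expression in terms of factors, one per variable:

$$\textbf{P}(B \mid j, m) = α\, \textbf{f}_1(B) \times \sum_{e} \textbf{f}_2(E) \times \sum_{a} \textbf{f}_3(A, B, E) \times \textbf{f}_4(A) \times \textbf{f}_5(A)$$

Here f4(A) = P(j | A) and f5(A) = P(m | A); make_factor below constructs exactly these kinds of factors.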

Helper Functions

There are certain helper functions that help creating the cpt for the Factor given the evidence. Let us explore them one by one.

make_factor is used to create the cpt and variables that will be passed to Factor. We use make_factor for each variable. It takes the arguments var, the particular variable; e, the evidence we want to do inference on; and bn, the Bayes network.

Here, the variables of each factor form a list consisting of the node's variable and its parents, minus any variables that are part of the evidence.

The cpt created is similar to the original cpt of the node, but keeps only the rows that agree with the evidence.

We can try this out using the example on Page 524 of the book. We will make f5(A) = P(m | A)


In [26]:
f5 = make_factor("MaryCalls", Dict("JohnCalls"=> true, "MaryCalls"=> true), burglary_network)


Out[26]:
aimajulia.Factor(String["Alarm"], Dict((false,)=>0.01,(true,)=>0.7))

In [27]:
f5.cpt


Out[27]:
Dict{Tuple{Bool},Float64} with 2 entries:
  (false,) => 0.01
  (true,)  => 0.7

In [28]:
f5.variables


Out[28]:
1-element Array{String,1}:
 "Alarm"

Here the (false,) key of f5.cpt gives the probability P(MaryCalls=True | Alarm=False). Because our representation only stores probabilities for the case where the node variable is true, this is the same as the cpt of the BayesianNetworkNode. Let us try a somewhat different example from the book, where the evidence is Alarm=True.


In [29]:
new_factor = make_factor("MaryCalls", Dict("Alarm"=> true), burglary_network);
new_factor.cpt


Out[29]:
Dict{Tuple{Bool},Float64} with 2 entries:
  (false,) => 0.3
  (true,)  => 0.7

Here the cpt is for P(MaryCalls | Alarm=True), so the probabilities for true and false sum to one. Note the difference between the two cases; again, only the rows consistent with the evidence are included.

Operations on Factors

We are interested in two kinds of operations on factors: pointwise product, which is used to create joint distributions, and summing out, which is used for marginalization.

Factor.pointwise_product implements a method of creating a joint by combining two factors. We take the union of the variables of both factors and then generate the cpt for the new factor using the all_events function. Note that we have already eliminated the rows that are not consistent with the evidence. The pointwise product assigns the new probabilities by multiplying rows, similar to a database join.

pointwise_product extends this operation to more than two operands, applying it sequentially in pairs.

sum_out makes a factor that eliminates a variable by summing over its values. Again, all_events is used to generate the combinations for the rest of the variables. A small illustrative sketch of both operations follows.
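
To make the two operations concrete, here is a small self-contained sketch using plain Dicts rather than the module's Factor type (purely illustrative; the actual aimajulia signatures may differ):

# Two single-variable factors over the boolean variable Alarm,
# represented as Dict(alarm_value => probability).
f4_sketch = Dict(true=> 0.90, false=> 0.05);   # P(JohnCalls=true | Alarm)
f5_sketch = Dict(true=> 0.70, false=> 0.01);   # P(MaryCalls=true | Alarm)

# Pointwise product: multiply the entries that agree on Alarm (a database-style join).
product_sketch = Dict(a=> f4_sketch[a] * f5_sketch[a] for a in keys(f4_sketch));

# Summing out Alarm: add the entries over all of its values (marginalization).
summed_out_sketch = sum(values(product_sketch));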

Elimination Ask

The algorithm described in Figure 14.11 of the book is implemented by the function elimination_ask. We use this for inference. The key idea is that we eliminate the hidden variables by interleaving joining and marginalization. It takes three arguments: X, the query variable; e, the evidence; and bn, the Bayes network.

The algorithm creates factors out of the Bayesian network nodes in reverse order and eliminates hidden variables using sum_out. Finally, it takes the pointwise product of all factors and normalizes. Let us now solve the problem of inferring

P(Burglary=True | JohnCalls=True, MaryCalls=True) using variable elimination.


In [30]:
show_approximation(elimination_ask("Burglary", Dict("JohnCalls"=> true, "MaryCalls"=> true), burglary_network))


Out[30]:
"false: 0.7158, true: 0.2842"

Approximate Inference in Bayesian Networks

Exact inference fails to scale for very large and complex Bayesian networks. This section covers the implementation of randomized sampling algorithms, also called Monte Carlo algorithms.

Prior Sampling

The idea of prior sampling is to sample from the Bayesian network in topological order. We start at the top of the network and sample according to P(Xi | parents(Xi)), i.e. the probability distribution from which each value is sampled is conditioned on the values already assigned to the variable's parents. This can be thought of as a simulation.

We store the samples as a list of observations. Let us find P(Rain=True) in the sprinkler network of Figure 14.12 (sketched below).
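
The sprinkler_network used below appears to be provided by aimajulia (it is used without being constructed in this notebook). A hypothetical sketch of its node_specs, assuming the same constructor as sketched earlier and the conditional probabilities of Figure 14.12:

sprinkler_specs = [("Cloudy", "", 0.5),
                   ("Sprinkler", "Cloudy", Dict(true=> 0.10, false=> 0.50)),
                   ("Rain", "Cloudy", Dict(true=> 0.80, false=> 0.20)),
                   ("WetGrass", "Sprinkler Rain",
                    Dict((true, true)=> 0.99, (true, false)=> 0.90,
                         (false, true)=> 0.90, (false, false)=> 0.00))];
# sprinkler_network = BayesianNetwork(sprinkler_specs)   # assumed constructor, as sketched earlier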


In [31]:
N = 1000;
all_observations = [prior_sample(sprinkler_network) for x in 1:N];

Now we filter to get the observations where Rain = True


In [32]:
rain_true = [observation for observation in all_observations if observation["Rain"] == true];

Finally, we can find P(Rain=True)


In [33]:
answer = size(rain_true)[1] / N;
println(answer);


0.544

To evaluate a conditional distribution we can use a two-step filtering process. We first keep only the samples that are consistent with the evidence; then, for each value of the query variable, we can compute probabilities. For example, to find P(Cloudy=True | Rain=True): we have already filtered out the samples consistent with our evidence in rain_true, and we now apply a second filtering step on rain_true to get the samples where Cloudy=True as well. The fraction of rain_true samples that survive estimates P(Cloudy=True | Rain=True).


In [34]:
rain_and_cloudy = [observation for observation in rain_true if observation["Cloudy"] == true];
answer = size(rain_and_cloudy)[1] / size(rain_true)[1];
println(answer);


0.8308823529411765

Rejection Sampling

Rejection sampling is based on an idea similar to what we did just now. First, it generates samples from the prior distribution specified by the network. Then, it rejects all those that do not match the evidence. The function rejection_sampling implements the algorithm described in Figure 14.14.

The function keeps a count for each possible value of the query variable and increases the count whenever an observation is consistent with the evidence. Its input parameters are X, the query variable; e, the evidence; bn, the Bayes net; and N, the number of prior samples to generate.

To answer P(Cloudy=True | Rain=True)


In [35]:
p = rejection_sampling("Cloudy", Dict("Rain"=> true), sprinkler_network, 1000);
p[true]


Out[35]:
0.8365019011406845

Likelihood Weighting

Rejection sampling tends to reject a lot of samples if our evidence consists of a large number of variables. Likelihood Weighting solves this by fixing the evidence (i.e. not sampling it) and then using weights to make sure that our overall sampling is still consistent.

weighted_sample samples an event from the Bayesian network that is consistent with the evidence e and returns the event together with its weight, the likelihood that the event accords with the evidence. It takes two parameters: bn, the Bayesian network, and e, the evidence.

The weight is obtained by multiplying P(xi | parents(xi)) for each node in the evidence. We initialize the event with the evidence values at the start of the function.


In [36]:
weighted_sample(sprinkler_network, Dict("Rain"=> true))


Out[36]:
(Dict("Cloudy"=>true,"Rain"=>true,"WetGrass"=>true,"Sprinkler"=>false), 0.8)

likelihood_weighting implements the algorithm to solve our inference problem. The code is similar to rejection_sampling, but instead of adding one for each consistent sample we add the weight obtained from weighted_sample.


In [37]:
show_approximation(likelihood_weighting("Cloudy", Dict("Rain"=> true), sprinkler_network, 200))


Out[37]:
"false: 0.1875, true: 0.8125"