KNOWLEDGE

The knowledge module covers Chapter 19: Knowledge in Learning from Stuart Russell and Peter Norvig's book Artificial Intelligence: A Modern Approach.

Execute the cell below to get started.


In [1]:
include("aimajulia.jl");

using aimajulia;

CONTENTS

  • Overview
  • Current-Best Learning
  • Version-Space Learning

OVERVIEW

This chapter focuses on methods for generating a model/hypothesis for a domain. We use prior knowledge to help us learn from new experiences and find a proper hypothesis.

First-Order Logic

Usually, knowledge in this field is represented as first-order logic, a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with concrete values in place of the variables. The goal is to assign a value to a special first-order logic predicate, called the goal predicate, for new examples, given a hypothesis. We learn this hypothesis by inferring knowledge from the given examples.

Representation

In this module, we use dictionaries to represent examples: the keys are the attribute names and the values are the corresponding example values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries. Each dictionary in that list is one disjunction (one "or" branch of the hypothesis), and the key-value pairs inside it form a conjunction of attribute tests.

For example, say we want to predict whether an animal (cat or dog) will take an umbrella, given whether it rains and whether the animal wears a coat. The goal value, 'take an umbrella', is denoted by the key 'GOAL'. An example:

Dict("Species"=> "Cat", "Coat"=> "Yes", "Rain"=> "Yes", "GOAL"=> true)

A hypothesis can be the following:

[Dict("Species"=> "Cat")]

which means an animal will take an umbrella if and only if it is a cat.

Consistency

We say that an example e is consistent with a hypothesis h if the value the hypothesis assigns to e is the same as e["GOAL"]. If the above example and hypothesis are e and h respectively, then e is consistent with h, since e["Species"] == "Cat". For e = Dict("Species"=> "Dog", "Coat"=> "Yes", "Rain"=> "Yes", "GOAL"=> true), the example is no longer consistent with h, since the value h assigns to e is false while e["GOAL"] is true.
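
To make this concrete, here is a minimal sketch of how the prediction and consistency check can be written for this representation. The names predict and is_consistent are hypothetical; the module itself exposes guess_example_value, which we use below. (The '!' prefix marks a negated value, as described in the Implementation section.)

# Minimal sketch with hypothetical names; the module's own function is
# guess_example_value. A hypothesis predicts true if any of its disjunctions
# is satisfied by the example.
function predict(example::Dict, hypothesis)
    for disjunction in hypothesis                 # hypothesis = list of disjunctions
        satisfied = true
        for (attribute, value) in disjunction     # each disjunction = conjunction of tests
            if startswith(value, "!")             # '!' denotes a negated value
                satisfied &= (example[attribute] != value[2:end])
            else
                satisfied &= (example[attribute] == value)
            end
        end
        satisfied && return true                  # one satisfied disjunction suffices
    end
    return false
end

is_consistent(example, hypothesis) = (predict(example, hypothesis) == example["GOAL"])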

CURRENT-BEST LEARNING

Overview

In Current-Best Learning, we start with a hypothesis and refine it as we iterate through the examples. For each example, there are three possible outcomes: the example is consistent with the hypothesis, it is a false positive (the real value is false but it got predicted as true), or it is a false negative (the real value is true but it got predicted as false). Depending on the outcome we refine the hypothesis accordingly:

  • Consistent: We do not change the hypothesis and we move on to the next example.
  • False Positive: We specialize the hypothesis, which means we add a conjunction.
  • False Negative: We generalize the hypothesis, either by removing a conjunction or a disjunction, or by adding a disjunction.

When specializing and generalizing, we should take care not to create inconsistencies with previous examples; to recover when no consistent refinement exists, backtracking is needed. Thankfully, there is not just one possible specialization or generalization, so we have a lot to choose from: we go through all the specializations/generalizations and refine our hypothesis to the first one consistent with all the examples seen up to that point.
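
In outline, the main loop can be sketched as below. The helper names are hypothetical (predict from the earlier sketch, plus specializations and generalizations, which are described in the next section), and backtracking is omitted for brevity:

# Hypothetical skeleton of Current-Best Learning (backtracking omitted).
function current_best_sketch(examples, h)
    seen = Dict[]                                  # examples processed so far
    for e in examples
        push!(seen, e)
        predict(e, h) == e["GOAL"] && continue     # consistent: keep h as-is
        if predict(e, h)                           # false positive: specialize
            candidates = specializations(seen, h)
        else                                       # false negative: generalize
            candidates = generalizations(seen, h)
        end
        i = findfirst(h2 -> all(x -> predict(x, h2) == x["GOAL"], seen), candidates)
        i === nothing && error("no consistent refinement; backtracking needed")
        h = candidates[i]                          # first refinement consistent with all seen
    end
    return h
end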

Implementation

As mentioned previously, examples are dictionaries (keyed by attribute names) and hypotheses are lists of dictionaries (each dictionary a disjunction). In a hypothesis, we denote the NOT operation with an exclamation mark (!); for example, "Rain"=> "!No" means Rain is not No.

We have functions to calculate the list of all specializations/generalizations and to check whether an example is consistent with, a false positive for, or a false negative for a hypothesis. We also have an auxiliary function to add a disjunction (an or operation) to a hypothesis, and two other functions to check the consistency of all (or just the negative) examples.

You can view the auxiliary functions in the knowledge module. A few notes on the functionality of some of the important methods:

  • specializations: For each disjunction in the hypothesis, it adds a conjunction for values in the examples encountered so far (keeping only the results consistent with all those examples). It returns a list of hypotheses (sketched after this list).
  • generalizations: It adds to the list of hypotheses in three phases: first it deletes disjunctions, then it deletes conjunctions, and finally it adds a disjunction.
  • add_or: Used by generalizations to add an or operation (a disjunction) to the hypothesis. Since the last example is the problematic one that wasn't consistent with the hypothesis, the new disjunction is modeled on that example. It creates a disjunction for each combination of attributes in the example and returns the new hypotheses consistent with the negative examples encountered so far. We do not need to check the consistency of the positive examples, since they are already consistent with at least one other disjunction in the hypothesis, so the new disjunction doesn't affect them. In other words, even if a positive example does not satisfy the new disjunction, we know there already exists a disjunction it does satisfy.
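
To illustrate the first of these, here is a hypothetical sketch of the specialization step, reusing predict from the earlier sketch (the module's actual specializations may differ in details):

# Hypothetical sketch of specializations(): extend each disjunction with one
# extra attribute test, keeping only hypotheses consistent with all examples
# seen so far.
function specializations_sketch(examples_so_far, h)
    hypotheses = []
    for (i, disjunction) in enumerate(h)
        for e in examples_so_far, (attribute, value) in e
            (attribute == "GOAL" || haskey(disjunction, attribute)) && continue
            h2 = deepcopy(h)
            h2[i][attribute] = value               # add a conjunction to disjunction i
            if all(x -> predict(x, h2) == x["GOAL"], examples_so_far)
                push!(hypotheses, h2)
            end
        end
    end
    return hypotheses
end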

Since the algorithm stops searching the specializations/generalizations as soon as the first consistent hypothesis is found, you may get different results from run to run.

Example

We will take a look at two examples. The first is a trivial one, while the second is a bit more complicated (you can also find it in the book).

First we have the "animals taking umbrellas" example. Here we want to find a hypothesis to predict whether or not an animal will take an umbrella. The attributes are Species, Rain and Coat, with possible values [Cat, Dog], [Yes, No] and [Yes, No] respectively. Below we give seven examples (GOAL denotes whether the animal takes an umbrella):


In [2]:
animals_umbrellas = [
    Dict("Species"=> "Cat", "Rain"=> "Yes", "Coat"=> "No", "GOAL"=> true),
    Dict("Species"=> "Cat", "Rain"=> "Yes", "Coat"=> "Yes", "GOAL"=> true),
    Dict("Species"=> "Dog", "Rain"=> "Yes", "Coat"=> "Yes", "GOAL"=> true),
    Dict("Species"=> "Dog", "Rain"=> "Yes", "Coat"=> "No", "GOAL"=> false),
    Dict("Species"=> "Dog", "Rain"=> "No", "Coat"=> "No", "GOAL"=> false),
    Dict("Species"=> "Cat", "Rain"=> "No", "Coat"=> "No", "GOAL"=> false),
    Dict("Species"=> "Cat", "Rain"=> "No", "Coat"=> "Yes", "GOAL"=> true)
];

Let our initial hypothesis be [Dict("Species"=> "Cat")]. That means every cat will take an umbrella. We can see that this is not true, but it doesn't matter, since we will refine the hypothesis using the Current-Best algorithm. First, let's see how the initial hypothesis performs, to have a point of reference.


In [3]:
initial_h = [Dict("Species"=> "Cat")]

for e in animals_umbrellas
    println(guess_example_value(e, initial_h))
end;


true
true
false
false
false
true
true

We got 5/7 correct. Not too bad, but we can do better. Let's run the algorithm and see how it performs.


In [4]:
h = current_best_learning(animals_umbrellas, initial_h)

for e in animals_umbrellas
    println(guess_example_value(e, h))
end;


true
true
true
false
false
false
true

We got everything right! Let's check our hypothesis:


In [5]:
h


Out[5]:
3-element Array{Dict{String,String},1}:
 Dict("Species"=>"Cat","Rain"=>"!No")
 Dict("Rain"=>"Yes","Coat"=>"Yes")   
 Dict("Coat"=>"Yes")                 

If an example satisfies any of the disjunctions in the list, the hypothesis predicts true; otherwise it predicts false.
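
For instance, we can check a single example by hand with the module's guess_example_value: the third example (a dog wearing a coat in the rain) satisfies the second disjunction, so the prediction should be true:

e = Dict("Species"=> "Dog", "Rain"=> "Yes", "Coat"=> "Yes", "GOAL"=> true)
guess_example_value(e, h)    # expected: true, via Dict("Rain"=>"Yes","Coat"=>"Yes")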

Let's move on to a bigger example, the "Restaurant" example from the book. The attributes for each example are the following:

  • Alternative option (Alt)
  • Bar to hang out/wait (Bar)
  • Day is Friday (Fri)
  • Is hungry (Hun)
  • How much does it cost (Price, takes values in [$, $$, $$$])
  • How many patrons are there (Pat, takes values in [None, Some, Full])
  • Is raining (Rain)
  • Has made reservation (Res)
  • Type of restaurant (Type, takes values in [French, Thai, Burger, Italian])
  • Estimated waiting time (Est, takes values in [0-10, 10-30, 30-60, >60])

We want to predict if someone will wait or not (Goal = WillWait). Below we show twelve examples found in the book.

With the function r_example we will build the example dictionaries:


In [6]:
function r_example(Alt, Bar, Fri, Hun, Pat, Price, Rain, Res, Type, Est, GOAL)
    return Dict("Alt"=> Alt, "Bar"=> Bar, "Fri"=> Fri, "Hun"=> Hun, "Pat"=> Pat,
                "Price"=> Price, "Rain"=> Rain, "Res"=> Res, "Type"=> Type, "Est"=> Est,
                "GOAL"=> GOAL);
end;

In code:


In [7]:
restaurant = [
    r_example("Yes", "No", "No", "Yes", "Some", "\$\$\$", "No", "Yes", "French", "0-10", true),
    r_example("Yes", "No", "No", "Yes", "Full", "\$", "No", "No", "Thai", "30-60", false),
    r_example("No", "Yes", "No", "No", "Some", "\$", "No", "No", "Burger", "0-10", true),
    r_example("Yes", "No", "Yes", "Yes", "Full", "\$", "Yes", "No", "Thai", "10-30", true),
    r_example("Yes", "No", "Yes", "No", "Full", "\$\$\$", "No", "Yes", "French", ">60", false),
    r_example("No", "Yes", "No", "Yes", "Some", "\$\$", "Yes", "Yes", "Italian", "0-10", true),
    r_example("No", "Yes", "No", "No", "None", "\$", "Yes", "No", "Burger", "0-10", false),
    r_example("No", "No", "No", "Yes", "Some", "\$\$", "Yes", "Yes", "Thai", "0-10", true),
    r_example("No", "Yes", "Yes", "No", "Full", "\$", "Yes", "No", "Burger", ">60", false),
    r_example("Yes", "Yes", "Yes", "Yes", "Full", "\$\$\$", "No", "Yes", "Italian", "10-30", false),
    r_example("No", "No", "No", "No", "None", "\$", "No", "No", "Thai", "0-10", false),
    r_example("Yes", "Yes", "Yes", "Yes", "Full", "\$", "No", "No", "Burger", "30-60", true)
];

Say our initial hypothesis is that there should be an alternative option (Alt = Yes), and let's run the algorithm.


In [8]:
initial_h = [Dict("Alt"=> "Yes")];
h = current_best_learning(restaurant, initial_h);
for e in restaurant
    println(guess_example_value(e, h));
end;


true
false
true
true
false
true
false
true
false
false
false
true

The predictions are correct. Let's see the hypothesis that accomplished that:


In [9]:
h


Out[9]:
6-element Array{Dict{String,String},1}:
 Dict("Pat"=>"!Full","Alt"=>"Yes")                                                                                
 Dict("Pat"=>"Some","Rain"=>"No","Hun"=>"No","Alt"=>"No","Est"=>"0-10","Type"=>"Burger","Fri"=>"No","Price"=>"\$")
 Dict("Rain"=>"Yes","Hun"=>"Yes","Bar"=>"No","Alt"=>"Yes","Price"=>"\$")                                          
 Dict("Rain"=>"Yes","Bar"=>"Yes","Est"=>"0-10","Type"=>"Italian","Price"=>"\$\$","Res"=>"Yes")                    
 Dict("Pat"=>"Some","Hun"=>"Yes","Alt"=>"No","Est"=>"0-10","Fri"=>"No","Price"=>"\$\$","Res"=>"Yes")              
 Dict("Pat"=>"Full","Alt"=>"Yes","Est"=>"30-60","Type"=>"Burger","Fri"=>"Yes","Res"=>"No")                        

The hypothesis might be quite complicated, with many disjunctions if we are unlucky, but it will always be correct, as long as a correct hypothesis exists.

VERSION-SPACE LEARNING

Overview

Version-Space Learning is a general method for learning in logic-based domains. We generate the set of all possible hypotheses in the domain and then iteratively remove hypotheses inconsistent with the examples. The set of remaining hypotheses is called the version space. Because hypotheses keep being removed until we end up with a set of hypotheses consistent with all the examples, the algorithm is also known as the candidate elimination algorithm.

After we update the set on an example, all the hypotheses in the set are consistent with that example. So, when all the examples have been processed, the remaining hypotheses are consistent with all the examples. That means we can pick a hypothesis at random and it will always be valid.

Implementation

The set of hypotheses is represented by a list, and each hypothesis is represented by a list of dictionaries, each dictionary a disjunction. For each example in the given examples we update the version space with the function version_space_update. In the end, we return the version space.
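
The update step itself is conceptually just a filter. A one-line sketch, reusing the hypothetical predict from earlier (the module's version_space_update may differ in details):

# Keep only the hypotheses that classify the new example correctly.
version_space_update_sketch(V, e) = filter(h -> predict(e, h) == e["GOAL"], V)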

Before we can start updating the version space, we need to generate it. We do that with the all_hypotheses function, which builds a list of all the possible hypotheses (including hypotheses with disjunctions). The function works like this: first it finds the possible values for each attribute (using values_table), then it builds all the attribute combinations (and adds them to the hypothesis set), and finally it builds all combinations of disjunctions (where the disjunctions are the hypotheses built from the attribute combinations).
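
For the first step, here is a hypothetical sketch of collecting the possible values per attribute (the module's values_table may differ); conceptually, the later steps then form every conjunction over these values and every disjunction over those conjunctions:

# Collect the distinct values seen for each attribute (GOAL excluded).
function values_table_sketch(examples)
    table = Dict{String,Vector{String}}()
    for e in examples, (attribute, value) in e
        attribute == "GOAL" && continue
        vals = get!(table, attribute, String[])
        value in vals || push!(vals, value)
    end
    return table
end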

Example

Since the set of all possible hypotheses for the restaurant domain is enormous and would take a long time to generate, we will come up with another, smaller domain. We will try to predict whether we will have a party or not, given the availability of pizza and soda. Let's do it:


In [10]:
party = [
    Dict("Pizza"=> "Yes", "Soda"=> "No", "GOAL"=> true),
    Dict("Pizza"=> "Yes", "Soda"=> "Yes", "GOAL"=> true),
    Dict("Pizza"=> "No", "Soda"=> "No", "GOAL"=> false)
];

Even though it is obvious that no pizza means no party, we will run the algorithm and see what other hypotheses are valid.


In [11]:
V = version_space_learning(party);
for e in party
    guess = false;
    for h in V
        if guess_example_value(e, h)
            guess = true;
            break;
        end
    end
    println(guess);
end


true
true
false

The results are correct for the given examples. Let's take a look at the version space:


In [12]:
println(size(V));
println();
println(V[6]);
println();
println(V[11]);
println();
println([Dict("Pizza"=> "Yes")] in V);


(61439,)

Any[Dict("Pizza"=>"Yes"), Dict("Pizza"=>"!No"), Dict("Soda"=>"!No"), Dict("Pizza"=>"Yes","Soda"=>"No"), Dict("Pizza"=>"Yes","Soda"=>"Yes"), Dict("Pizza"=>"Yes","Soda"=>"!No"), Dict("Pizza"=>"!No","Soda"=>"Yes"), Dict("Pizza"=>"!No","Soda"=>"!No"), Dict("Soda"=>"!No")]

Any[Dict("Pizza"=>"!No"), Dict("Soda"=>"Yes"), Dict("Soda"=>"!No"), Dict("Pizza"=>"Yes","Soda"=>"No"), Dict("Pizza"=>"Yes","Soda"=>"Yes"), Dict("Soda"=>"!No"), Dict("Soda"=>"Yes")]

true

You can see that even with just two attributes the version space is very large.

Our initial prediction is indeed in the set of hypotheses. Also, the two other hypotheses we printed (V[6] and V[11]) are consistent with the examples, since they both include a "Pizza is available" disjunction.