KNOWLEDGE

The knowledge module covers Chapter 19: Knowledge in Learning from Stuart Russell and Peter Norvig's book Artificial Intelligence: A Modern Approach.

Execute the cell below to get started.


In [1]:
from knowledge import *

from notebook import pseudocode, psource

CONTENTS

  • Overview
  • Current-Best Learning
  • Version-Space Learning

OVERVIEW

Like the learning module, this chapter focuses on methods for generating a model/hypothesis for a domain. Unlike the learning chapter, though, here we use prior knowledge to help us learn from new experiences and find a suitable hypothesis.

First-Order Logic

Usually knowledge in this field is represented as first-order logic, a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with fixed values in place of variables. The goal is to assign a value to a special first-order logic predicate, called the goal predicate, for new examples given a hypothesis. We learn this hypothesis by inferring knowledge from some given examples.

Representation

In this module, we use dictionaries to represent examples, with the attribute names as keys and the corresponding example values as values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries. Each dictionary in that list represents a disjunction; inside each dictionary/disjunction, the attribute-value pairs form a conjunction.

For example, say we want to predict if an animal (cat or dog) will take an umbrella given whether it is raining and whether the animal wears a coat. The goal value is 'take an umbrella' and is denoted by the key 'GOAL'. An example:

{'Species': 'Cat', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}

A hypothesis can be the following:

[{'Species': 'Cat'}]

which means an animal will take an umbrella if and only if it is a cat.

Consistency

We say that an example e is consistent with a hypothesis h if the value the hypothesis assigns to e is the same as e['GOAL']. If the above example and hypothesis are e and h respectively, then e is consistent with h since e['Species'] == 'Cat'. For e = {'Species': 'Dog', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}, the example is no longer consistent with h, since the value assigned to e is False while e['GOAL'] is True.
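
To make this concrete, below is a minimal sketch of the consistency check, assuming plain equality between attribute values (the negation notation used by the module is introduced later). The names predict and consistent are illustrative and not part of the knowledge module.


In [ ]:
# Illustrative sketch only: a simplified consistency check for the
# representation described above (no negation handling).
def predict(e, h):
    """True if e satisfies at least one disjunction (dictionary) of h."""
    return any(all(e.get(attr) == val for attr, val in disj.items())
               for disj in h)

def consistent(e, h):
    """e is consistent with h when the prediction matches e['GOAL']."""
    return predict(e, h) == e['GOAL']

h = [{'Species': 'Cat'}]
e1 = {'Species': 'Cat', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}
e2 = {'Species': 'Dog', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}
print(consistent(e1, h), consistent(e2, h))  # True False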

CURRENT-BEST LEARNING

Overview

In Current-Best Learning, we start with a hypothesis and refine it as we iterate through the examples. For each example there are three possible outcomes: the example is consistent with the hypothesis, it is a false positive (the real value is false but it was predicted as true), or it is a false negative (the real value is true but it was predicted as false). Depending on the outcome we refine the hypothesis accordingly:

  • Consistent: We do not change the hypothesis and we move on to the next example.

  • False Positive: We specialize the hypothesis, which means we add a conjunction.

  • False Negative: We generalize the hypothesis, either by removing a conjunction or a disjunction, or by adding a disjunction.

When specializing and generalizing, we should take care not to create inconsistencies with previous examples; to avoid that, backtracking is needed. Thankfully, there is usually more than one possible specialization or generalization, so we have plenty to choose from. We go through all the specializations/generalizations and refine our hypothesis to the first one that is consistent with all the examples seen up to that point.
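
To make the three outcomes concrete, here is an illustrative trace on the umbrella domain from the Representation section. The refined hypotheses shown are plausible choices, not necessarily the ones the algorithm will pick on a given run.


In [ ]:
# Start: an animal takes an umbrella iff it is a cat.
h = [{'Species': 'Cat'}]

# False positive: {'Species': 'Cat', 'Rain': 'No', 'Coat': 'No', 'GOAL': False}
# is predicted True but is actually False, so we specialize by adding a conjunction:
h = [{'Species': 'Cat', 'Rain': 'Yes'}]

# False negative: {'Species': 'Dog', 'Rain': 'Yes', 'Coat': 'Yes', 'GOAL': True}
# is predicted False but is actually True, so we generalize by adding a disjunction:
h = [{'Species': 'Cat', 'Rain': 'Yes'}, {'Coat': 'Yes'}]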

Pseudocode


In [2]:
pseudocode('Current-Best-Learning')


Out[2]:

AIMA3e

function Current-Best-Learning(examples, h) returns a hypothesis or fail
if examples is empty then
   return h
e ← First(examples)
if e is consistent with h then
   return Current-Best-Learning(Rest(examples), h)
else if e is a false positive for h then
   for each h' in specializations of h consistent with examples seen so far do
     h'' ← Current-Best-Learning(Rest(examples), h')
     if h'' ≠ fail then return h''
else if e is a false negative for h then
   for each h' in generalizations of h consistent with examples seen so far do
     h'' ← Current-Best-Learning(Rest(examples), h')
     if h'' ≠ fail then return h''
return fail


Figure ?? The current-best-hypothesis learning algorithm. It searches for a consistent hypothesis that fits all the examples and backtracks when no consistent specialization/generalization can be found. To start the algorithm, any hypothesis can be passed in; it will be specialized or generalized as needed.

Implementation

As mentioned previously, examples are dictionaries (with the attribute names as keys) and hypotheses are lists of dictionaries (each dictionary is a disjunction). Also, in the hypothesis, we denote the NOT operation with an exclamation mark (!).

We have functions to calculate the list of all specializations/generalizations and to check whether an example is consistent with, a false positive for, or a false negative for a hypothesis. We also have an auxiliary function to add a disjunction (or operation) to a hypothesis, and two other functions to check the consistency of all (or just the negative) examples.
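
For instance, the disjunction {'Species': 'Cat', 'Rain': '!No'} reads "the species is Cat and Rain is not No". Below is a simplified matcher that conveys the idea; it is a sketch for exposition, and the module's own helpers may differ in detail.


In [ ]:
# Sketch of matching a single disjunction, including '!' negation
# (illustrative; not the knowledge module's code).
def matches(e, disjunction):
    for attr, val in disjunction.items():
        if val.startswith('!'):
            if e.get(attr) == val[1:]:   # attribute must NOT equal the value
                return False
        elif e.get(attr) != val:         # attribute must equal the value
            return False
    return True

e = {'Species': 'Cat', 'Rain': 'Yes', 'Coat': 'No', 'GOAL': True}
print(matches(e, {'Species': 'Cat', 'Rain': '!No'}))  # True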

You can read the source by running the cell below:


In [ ]:
psource(current_best_learning, specializations, generalizations)

You can view the auxiliary functions in the knowledge module. A few notes on the functionality of some of the important methods:

  • specializations: For each disjunction in the hypothesis, it adds a conjunction for values appearing in the examples encountered so far (keeping the result only if the new conjunction is consistent with all those examples). It returns a list of hypotheses.

  • generalizations: It adds to the list of hypotheses in three phases. First it deletes disjunctions, then it deletes conjunctions and finally it adds a disjunction.

  • add_or: Used by generalizations to add an or operation (a disjunction) to the hypothesis. Since the last example is the problematic one that wasn't consistent with the hypothesis, the new disjunction is modeled on that example. It creates a disjunction for each combination of the example's attributes and returns the new hypotheses consistent with the negative examples encountered so far. We do not need to check the consistency of positive examples, since they are already consistent with at least one other disjunction in the hypothesis, so the new disjunction doesn't affect them. In other words, if a positive example evaluates to negative under the new disjunction, it doesn't matter, since we know there exists another disjunction consistent with that example.

Since the algorithm stops searching the specializations/generalizations after the first consistent hypothesis is found, you may get different results each time you run the code.
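
For intuition about add_or, the candidate disjunctions for a false negative such as {'Species': 'Dog', 'Rain': 'Yes', 'Coat': 'Yes', 'GOAL': True} are built from combinations of that example's attributes. A rough equivalent using itertools (a sketch, not the module's code):


In [ ]:
from itertools import combinations

e = {'Species': 'Dog', 'Rain': 'Yes', 'Coat': 'Yes', 'GOAL': True}
attrs = [a for a in e if a != 'GOAL']

# Every non-empty subset of the example's attributes yields a candidate
# disjunction; add_or would keep only those consistent with the negative
# examples seen so far.
candidates = [{a: e[a] for a in subset}
              for r in range(1, len(attrs) + 1)
              for subset in combinations(attrs, r)]
print(len(candidates))  # 7 candidates for 3 attributes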

Examples

We will take a look at two examples. The first is a trivial one, while the second is a bit more complicated (you can also find it in the book).

First we have the "animals taking umbrellas" example. Here we want to find a hypothesis to predict whether or not an animal will take an umbrella. The attributes are Species, Rain and Coat. The possible values are [Cat, Dog], [Yes, No] and [Yes, No] respectively. Below we give seven examples (with GOAL we denote whether an animal will take an umbrella or not):


In [2]:
animals_umbrellas = [
    {'Species': 'Cat', 'Rain': 'Yes', 'Coat': 'No', 'GOAL': True},
    {'Species': 'Cat', 'Rain': 'Yes', 'Coat': 'Yes', 'GOAL': True},
    {'Species': 'Dog', 'Rain': 'Yes', 'Coat': 'Yes', 'GOAL': True},
    {'Species': 'Dog', 'Rain': 'Yes', 'Coat': 'No', 'GOAL': False},
    {'Species': 'Dog', 'Rain': 'No', 'Coat': 'No', 'GOAL': False},
    {'Species': 'Cat', 'Rain': 'No', 'Coat': 'No', 'GOAL': False},
    {'Species': 'Cat', 'Rain': 'No', 'Coat': 'Yes', 'GOAL': True}
]

Let our initial hypothesis be [{'Species': 'Cat'}]. That means every cat takes an umbrella. We can see that this is not true, but it doesn't matter, since we will refine the hypothesis using the Current-Best algorithm. First, to have a point of reference, let's see how that initial hypothesis fares.


In [3]:
initial_h = [{'Species': 'Cat'}]

for e in animals_umbrellas:
    print(guess_value(e, initial_h))


True
True
False
False
False
True
True

We got 5/7 correct. Not too bad, but we can do better. Let's run the algorithm and see how it performs.


In [4]:
h = current_best_learning(animals_umbrellas, initial_h)

for e in animals_umbrellas:
    print(guess_value(e, h))


True
True
True
False
False
False
True

We got everything right! Let's print our hypothesis:


In [5]:
print(h)


[{'Species': 'Cat', 'Rain': '!No'}, {'Coat': 'Yes', 'Rain': 'Yes'}, {'Coat': 'Yes'}]

If an example satisfies any of the disjunctions in the list, it is classified as True; otherwise it is classified as False.
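
We can also ask the learned hypothesis about a combination of attributes that does not appear in the training examples. The answer depends on the hypothesis your run produced; we assume here that guess_value only inspects the attribute fields, so 'GOAL' (the value we are predicting) is omitted.


In [ ]:
# An unseen case: a dog wearing a coat on a dry day.
new_e = {'Species': 'Dog', 'Rain': 'No', 'Coat': 'Yes'}
print(guess_value(new_e, h))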

Let's move on to a bigger example, the "Restaurant" example from the book. The attributes for each example are the following:

  • Alternative option (Alt)
  • Bar to hang out/wait (Bar)
  • Day is Friday (Fri)
  • Is hungry (Hun)
  • How much does it cost (Price, takes values in [$, $$, $$$])
  • How many patrons are there (Pat, takes values in [None, Some, Full])
  • Is raining (Rain)
  • Has made reservation (Res)
  • Type of restaurant (Type, takes values in [French, Thai, Burger, Italian])
  • Estimated waiting time (Est, takes values in [0-10, 10-30, 30-60, >60])

We want to predict if someone will wait or not (Goal = WillWait). Below we show twelve examples found in the book.

With the function r_example we will build the example dictionaries:


In [6]:
def r_example(Alt, Bar, Fri, Hun, Pat, Price, Rain, Res, Type, Est, GOAL):
    return {'Alt': Alt, 'Bar': Bar, 'Fri': Fri, 'Hun': Hun, 'Pat': Pat,
            'Price': Price, 'Rain': Rain, 'Res': Res, 'Type': Type, 'Est': Est,
            'GOAL': GOAL}

In code:


In [7]:
restaurant = [
    r_example('Yes', 'No', 'No', 'Yes', 'Some', '$$$', 'No', 'Yes', 'French', '0-10', True),
    r_example('Yes', 'No', 'No', 'Yes', 'Full', '$', 'No', 'No', 'Thai', '30-60', False),
    r_example('No', 'Yes', 'No', 'No', 'Some', '$', 'No', 'No', 'Burger', '0-10', True),
    r_example('Yes', 'No', 'Yes', 'Yes', 'Full', '$', 'Yes', 'No', 'Thai', '10-30', True),
    r_example('Yes', 'No', 'Yes', 'No', 'Full', '$$$', 'No', 'Yes', 'French', '>60', False),
    r_example('No', 'Yes', 'No', 'Yes', 'Some', '$$', 'Yes', 'Yes', 'Italian', '0-10', True),
    r_example('No', 'Yes', 'No', 'No', 'None', '$', 'Yes', 'No', 'Burger', '0-10', False),
    r_example('No', 'No', 'No', 'Yes', 'Some', '$$', 'Yes', 'Yes', 'Thai', '0-10', True),
    r_example('No', 'Yes', 'Yes', 'No', 'Full', '$', 'Yes', 'No', 'Burger', '>60', False),
    r_example('Yes', 'Yes', 'Yes', 'Yes', 'Full', '$$$', 'No', 'Yes', 'Italian', '10-30', False),
    r_example('No', 'No', 'No', 'No', 'None', '$', 'No', 'No', 'Thai', '0-10', False),
    r_example('Yes', 'Yes', 'Yes', 'Yes', 'Full', '$', 'No', 'No', 'Burger', '30-60', True)
]

Say our initial hypothesis is that there should be an alternative option; let's run the algorithm.


In [8]:
initial_h = [{'Alt': 'Yes'}]
h = current_best_learning(restaurant, initial_h)
for e in restaurant:
    print(guess_value(e, h))


True
False
True
True
False
True
False
True
False
False
False
True

The predictions are correct. Let's see the hypothesis that accomplished that:


In [9]:
print(h)


[{'Res': '!No', 'Fri': '!Yes', 'Alt': 'Yes'}, {'Bar': 'Yes', 'Fri': 'No', 'Rain': 'No', 'Hun': 'No'}, {'Bar': 'No', 'Price': '$', 'Fri': 'Yes'}, {'Res': 'Yes', 'Price': '$$', 'Rain': 'Yes', 'Alt': 'No', 'Est': '0-10', 'Fri': 'No', 'Hun': 'Yes', 'Bar': 'Yes'}, {'Fri': 'No', 'Pat': 'Some', 'Price': '$$', 'Rain': 'Yes', 'Hun': 'Yes'}, {'Est': '30-60', 'Res': 'No', 'Price': '$', 'Fri': 'Yes', 'Hun': 'Yes'}]

It might be quite complicated, with many disjunctions if we are unlucky, but it will always be correct, as long as a correct hypothesis exists.

VERSION-SPACE LEARNING

Overview

Version-Space Learning is a general method of learning in logic-based domains. We generate the set of all possible hypotheses in the domain and then iteratively remove hypotheses inconsistent with the examples. The set of remaining hypotheses is called the version space. Because hypotheses keep being removed until we end up with a set of hypotheses consistent with all the examples, the algorithm is sometimes called the candidate elimination algorithm.

After we update the set on an example, all the hypotheses in the set are consistent with that example. So, once all the examples have been processed, the remaining hypotheses are consistent with all the examples. That means we can pick a hypothesis at random and it will always be a valid one.

Pseudocode


In [3]:
pseudocode('Version-Space-Learning')


Out[3]:

AIMA3e

function Version-Space-Learning(examples) returns a version space
local variables: V, the version space: the set of all hypotheses

V ← the set of all hypotheses
for each example e in examples do
   if V is not empty then V ← Version-Space-Update(V, e)
return V


function Version-Space-Update(V, e) returns an updated version space
V ← {h ∈ V : h is consistent with e}


Figure ?? The version space learning algorithm. It finds a subset of V that is consistent with all the examples.

Implementation

The set of hypotheses is represented by a list and each hypothesis is represented by a list of dictionaries, each dictionary a disjunction. For each example in the given examples we update the version space with the function version_space_update. In the end, we return the version space.
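
In essence, the update is just a filter over the current hypothesis set. A minimal sketch of the two steps is shown below; the names are illustrative, with the module's version_space_learning and version_space_update playing these roles.


In [ ]:
# Sketch of the candidate elimination loop (illustrative, not the module's code).
def vs_update_sketch(V, e):
    """Keep only the hypotheses consistent with example e."""
    return [h for h in V if guess_value(e, h) == e['GOAL']]

def vs_learning_sketch(V, examples):
    """V starts as the set of all hypotheses for the domain."""
    for e in examples:
        if V:
            V = vs_update_sketch(V, e)
    return V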

Before we can start updating the version space, we need to generate it. We do that with the all_hypotheses function, which builds a list of all the possible hypotheses (including hypotheses with disjunctions). The function works like this: first it finds the possible values for each attribute (using values_table), then it builds all the attribute combinations (and adds them to the hypotheses set), and finally it builds the combinations of all the disjunctions (which in this case are the hypotheses built from the attribute combinations).

You can read the code for all the functions by running the cells below:


In [ ]:
psource(version_space_learning, version_space_update)

In [ ]:
psource(all_hypotheses, values_table)

In [ ]:
psource(build_attr_combinations, build_h_combinations)

Example

Since the set of all possible hypotheses for a domain like the restaurant one is enormous and would take a long time to generate, we will use an even smaller domain. We will try to predict whether we will have a party or not, given the availability of pizza and soda. Let's do it:


In [8]:
party = [
    {'Pizza': 'Yes', 'Soda': 'No', 'GOAL': True},
    {'Pizza': 'Yes', 'Soda': 'Yes', 'GOAL': True},
    {'Pizza': 'No', 'Soda': 'No', 'GOAL': False}
]

Even though it is obvious that no pizza means no party, we will run the algorithm and see what other hypotheses are valid.


In [12]:
V = version_space_learning(party)
for e in party:
    guess = False
    for h in V:
        if guess_value(e, h):
            guess = True
            break

    print(guess)


True
True
False

The results are correct for the given examples. Let's take a look at the version space:


In [17]:
print(len(V))

print(V[5])
print(V[10])

print([{'Pizza': 'Yes'}] in V)


959
[{'Pizza': 'Yes'}, {'Soda': 'Yes'}]
[{'Pizza': 'Yes'}, {'Pizza': '!No', 'Soda': 'No'}]
True

There are almost 1000 hypotheses in the set. You can see that even with just two attributes the version space is very large.

Our initial prediction is indeed in the set of hypotheses. Also, the two other hypotheses we printed are consistent with the examples (since they both include the "Pizza is available" disjunction).
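
As a sanity check, every hypothesis left in the version space should be consistent with all three examples:


In [ ]:
# Each remaining hypothesis must agree with every example's GOAL value.
print(all(guess_value(e, h) == e['GOAL'] for h in V for e in party))  # True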