KNOWLEDGE

The knowledge module covers Chapter 19: Knowledge in Learning from Stuart Russell and Peter Norvig's book Artificial Intelligence: A Modern Approach.

Execute the cell below to get started.


In [50]:
from knowledge import *

from notebook import pseudocode, psource

CONTENTS

  • Overview
  • Version-Space Learning
  • Minimal Consistent Determination

OVERVIEW

Like the learning module, this chapter focuses on methods for generating a model/hypothesis for a domain. Unlike the learning chapter, though, here we use prior knowledge to help us learn from new experiences and find a proper hypothesis.

First-Order Logic

Usually knowledge in this field is represented as first-order logic, a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with concrete values in place of variables. The goal is to assign a value to a special first-order logic predicate, called the goal predicate, for new examples given a hypothesis. We learn this hypothesis by inferring knowledge from some given examples.

Representation

In this module, we use dictionaries to represent examples, with the attribute names as keys and the corresponding example values as values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries: the list as a whole is a disjunction, and each dictionary in it is one disjunct, a conjunction of attribute constraints.

For example, say we want to predict if an animal (cat or dog) will take an umbrella, given whether it rains and whether the animal wears a coat. The goal value is 'take an umbrella' and is denoted by the key 'GOAL'. An example:

{'Species': 'Cat', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}

A hypothesis can be the following:

[{'Species': 'Cat'}]

which means an animal will take an umbrella if and only if it is a cat.
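
A hypothesis can also contain more than one disjunct. For instance (an illustrative hypothesis of our own, not produced by the module):

[{'Species': 'Cat'}, {'Coat': 'Yes', 'Rain': 'Yes'}]

reads as "the animal is a cat, OR it wears a coat AND it is raining": each dictionary is a conjunction of attribute constraints, and the dictionaries in the list are joined by disjunction.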

Consistency

We say that an example e is consistent with a hypothesis h if the assignment from the hypothesis for e is the same as e['GOAL']. If the above example and hypothesis are e and h respectively, then e is consistent with h since e['Species'] == 'Cat'. For e = {'Species': 'Dog', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}, the example is no longer consistent with h, since the value h assigns to e is False while e['GOAL'] is True.
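
We can check this directly with the module's guess_value and is_consistent helpers (both come from the knowledge module imported above and are used by the algorithms below); here is a small sketch using the example and hypothesis from this section:

e1 = {'Species': 'Cat', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}
e2 = {'Species': 'Dog', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}
h = [{'Species': 'Cat'}]

guess_value(e1, h)    # h assigns True to e1
is_consistent(e1, h)  # True: the assignment matches e1['GOAL']
is_consistent(e2, h)  # False: h assigns False, but e2['GOAL'] is True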

VERSION-SPACE LEARNING

Overview

Version-Space Learning is a general method of learning in logic-based domains. We generate the set of all possible hypotheses in the domain and then iteratively remove hypotheses that are inconsistent with the examples. The set of remaining hypotheses is called the version space. Because hypotheses keep being removed until we end up with a set of hypotheses consistent with all the examples, the algorithm is sometimes called the candidate elimination algorithm.

After we update the set on an example, all the hypotheses in the set are consistent with that example. So, when all the examples have been processed, the remaining hypotheses are consistent with all the examples. That means we can pick a hypothesis at random and it will always be a valid one.

Pseudocode


In [32]:
pseudocode('Version-Space-Learning')


Out[32]:

AIMA3e

function Version-Space-Learning(examples) returns a version space
local variables: V, the version space: the set of all hypotheses

V ← the set of all hypotheses
for each example e in examples do
   if V is not empty then V ← Version-Space-Update(V, e)
return V


function Version-Space-Update(V, e) returns an updated version space
V ← {h ∈ V : h is consistent with e}


Figure ?? The version space learning algorithm. It finds a subset of V that is consistent with all the examples.

Implementation

The set of hypotheses is represented by a list. Each hypothesis is in turn a list of dictionaries, where each dictionary is one disjunct (a conjunction of attribute constraints). For each of the given examples we update the version space with the function version_space_update. In the end, we return the version space.

Before we can start updating the version space, we need to generate it. We do that with the all_hypotheses function, which builds a list of all the possible hypotheses (including hypotheses with disjunctions). The function works like this: first it finds the possible values for each attribute (using values_table), then it builds all the attribute combinations (and adds them to the hypotheses set), and finally it builds the combinations of all the disjuncts (that is, of the hypotheses built from the attribute combinations).

You can read the code for all the functions by running the cells below:


In [33]:
psource(version_space_learning, version_space_update)


def version_space_learning(examples):
    """ [Figure 19.3]
    The version space is a list of hypotheses, which in turn are a list
    of dictionaries/disjunctions."""
    V = all_hypotheses(examples)
    for e in examples:
        if V:
            V = version_space_update(V, e)

    return V


def version_space_update(V, e):
    return [h for h in V if is_consistent(e, h)]

In [34]:
psource(all_hypotheses, values_table)


def all_hypotheses(examples):
    """Build a list of all the possible hypotheses"""
    values = values_table(examples)
    h_powerset = powerset(values.keys())
    hypotheses = []
    for s in h_powerset:
        hypotheses.extend(build_attr_combinations(s, values))

    hypotheses.extend(build_h_combinations(hypotheses))

    return hypotheses


def values_table(examples):
    """Build a table with all the possible values for each attribute.
    Returns a dictionary with keys the attribute names and values a list
    with the possible values for the corresponding attribute."""
    values = defaultdict(lambda: [])
    for e in examples:
        for k, v in e.items():
            if k == 'GOAL':
                continue

            mod = '!'
            if e['GOAL']:
                mod = ''

            if mod + v not in values[k]:
                values[k].append(mod + v)

    values = dict(values)
    return values
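
As a rough illustration of values_table (the examples below are made up for demonstration), positive examples contribute the value itself, while negative examples contribute the value prefixed with '!', which the module treats as a negated value ("this attribute must not take that value"):

examples = [{'Rain': 'Yes', 'Coat': 'No', 'GOAL': True},
            {'Rain': 'No', 'Coat': 'No', 'GOAL': False}]
values_table(examples)
# expected: {'Rain': ['Yes', '!No'], 'Coat': ['No', '!No']}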

In [35]:
psource(build_attr_combinations, build_h_combinations)


def build_attr_combinations(s, values):
    """Given a set of attributes, builds all the combinations of values.
    If the set holds more than one attribute, recursively builds the
    combinations."""
    if len(s) == 1:
        # s holds just one attribute, return its list of values
        k = values[s[0]]
        h = [[{s[0]: v}] for v in values[s[0]]]
        return h

    h = []
    for i, a in enumerate(s):
        rest = build_attr_combinations(s[i+1:], values)
        for v in values[a]:
            o = {a: v}
            for r in rest:
                t = o.copy()
                for d in r:
                    t.update(d)
                h.append([t])

    return h


def build_h_combinations(hypotheses):
    """Given a set of hypotheses, builds and returns all the combinations of the
    hypotheses."""
    h = []
    h_powerset = powerset(range(len(hypotheses)))

    for s in h_powerset:
        t = []
        for i in s:
            t.extend(hypotheses[i])
        h.append(t)

    return h
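
To make the base case of build_attr_combinations concrete, here is a small illustrative call (the attribute name and values are made up): with a single attribute, each recorded value becomes its own one-clause hypothesis.

build_attr_combinations(('Rain',), {'Rain': ['Yes', '!No']})
# expected: [[{'Rain': 'Yes'}], [{'Rain': '!No'}]]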

Example

Since the set of all possible hypotheses is enormous and would take a long time to generate, we will use another, even smaller domain. We will try to predict whether we will have a party or not, given the availability of pizza and soda. Let's do it:


In [36]:
party = [
    {'Pizza': 'Yes', 'Soda': 'No', 'GOAL': True},
    {'Pizza': 'Yes', 'Soda': 'Yes', 'GOAL': True},
    {'Pizza': 'No', 'Soda': 'No', 'GOAL': False}
]

Even though it is obvious that no pizza means no party, we will run the algorithm and see what other hypotheses are valid.


In [37]:
V = version_space_learning(party)
for e in party:
    guess = False
    for h in V:
        if guess_value(e, h):
            guess = True
            break

    print(guess)


True
True
False

The results are correct for the given examples. Let's take a look at the version space:


In [38]:
print(len(V))

print(V[5])
print(V[10])

print([{'Pizza': 'Yes'}] in V)


959
[{'Pizza': 'Yes'}, {'Soda': 'Yes'}]
[{'Pizza': 'Yes'}, {'Pizza': '!No', 'Soda': 'No'}]
True

There are almost 1000 hypotheses in the set. You can see that even with just two attributes the version space is very large.

Our initial prediction is indeed in the set of hypotheses. Also, the two other hypotheses we printed are consistent with the examples (since they both include the "Pizza is available" disjunct).
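
As a final sanity check (a one-line sketch, not part of the original run), every hypothesis left in V should be consistent with every training example:

all(is_consistent(e, h) for h in V for e in party)  # expected: True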

Minimal Consistent Determination

This algorithm is based on a straightforward attempt to find the simplest determination consistent with the observations. A determination P ≻ Q says that if two examples match on the attributes in P, then they must also match on Q. A determination is therefore consistent with a set of examples if every pair of examples that matches on the predicates of the left-hand side also matches on the goal predicate.

Pseudocode

Let's look at the pseudocode for this algorithm:


In [47]:
pseudocode('Minimal-Consistent-Det')


Out[47]:

AIMA3e

function Minimal-Consistent-Det(E, A) returns a set of attributes
inputs: E, a set of examples
     A, a set of attributes, of size n

for i = 0 to n do
   for each subset Ai of A of size i do
     if Consistent-Det?(Ai, E) then return Ai


function Consistent-Det?(A, E) returns a truth value
inputs: A, a set of attributes
     E, a set of examples
local variables: H, a hash table

for each example e in E do
   if some example in H has the same values as e for the attributes A
    but a different classification then return false
   store the class of e in H, indexed by the values for attributes A of the example e
return true


Figure ?? An algorithm for finding a minimal consistent determination.

You can read the code for the above algorithm by running the cells below:


In [48]:
psource(minimal_consistent_det)


def minimal_consistent_det(E, A):
    """Return a minimal set of attributes which give consistent determination"""
    n = len(A)

    for i in range(n + 1):
        for A_i in combinations(A, i):
            if consistent_det(A_i, E):
                return set(A_i)

In [49]:
psource(consistent_det)


def consistent_det(A, E):
    """Check if the attributes(A) is consistent with the examples(E)"""
    H = {}

    for e in E:
        attr_values = tuple(e[attr] for attr in A)
        if attr_values in H and H[attr_values] != e['GOAL']:
            return False
        H[attr_values] = e['GOAL']

    return True

Example

We already know that no pizza means no party, but we will still check it with the minimal_consistent_det algorithm.


In [39]:
print(minimal_consistent_det(party, {'Pizza', 'Soda'}))


{'Pizza'}
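
The result makes sense. As a quick check with consistent_det (illustrative calls of our own), Pizza alone already determines the goal, while Soda alone does not, because the first and third party examples agree on Soda = 'No' but differ on 'GOAL':

consistent_det(('Pizza',), party)  # expected: True
consistent_det(('Soda',), party)   # expected: False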

We can also check it on some other examples. Let's consider the following:


In [40]:
conductance = [
    {'Sample': 'S1', 'Mass': 12, 'Temp': 26, 'Material': 'Cu', 'Size': 3, 'GOAL': 0.59},
    {'Sample': 'S1', 'Mass': 12, 'Temp': 100, 'Material': 'Cu', 'Size': 3, 'GOAL': 0.57},
    {'Sample': 'S2', 'Mass': 24, 'Temp': 26, 'Material': 'Cu', 'Size': 6, 'GOAL': 0.59},
    {'Sample': 'S3', 'Mass': 12, 'Temp': 26, 'Material': 'Pb', 'Size': 2, 'GOAL': 0.05},
    {'Sample': 'S3', 'Mass': 12, 'Temp': 100, 'Material': 'Pb', 'Size': 2, 'GOAL': 0.04},
    {'Sample': 'S4', 'Mass': 18, 'Temp': 100, 'Material': 'Pb', 'Size': 3, 'GOAL': 0.04},
    {'Sample': 'S4', 'Mass': 18, 'Temp': 100, 'Material': 'Pb', 'Size': 3, 'GOAL': 0.04},
    {'Sample': 'S5', 'Mass': 24, 'Temp': 100, 'Material': 'Pb', 'Size': 4, 'GOAL': 0.04},
    {'Sample': 'S6', 'Mass': 36, 'Temp': 26, 'Material': 'Pb', 'Size': 6, 'GOAL': 0.05},
]

Now, we run the minimal_consistent_det algorithm on the above examples:


In [41]:
print(minimal_consistent_det(conductance, {'Mass', 'Temp', 'Material', 'Size'}))


{'Temp', 'Material'}

In [43]:
print(minimal_consistent_det(conductance, {'Mass', 'Temp', 'Size'}))


{'Temp', 'Size', 'Mass'}