Relation extraction using distant supervision: experiments


In [1]:
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"

Overview

OK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on held-out dev data. We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation and a linear model trained using logistic regression.

Just like we did in the unit on supervised sentiment analysis, we'll leverage the sklearn library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results.

Set-up

See the first notebook in this unit for set-up instructions.


In [2]:
from collections import Counter
import os
import rel_ext
import utils

In [3]:
# Set all the random seeds for reproducibility. Only the
# system seed is relevant for this notebook.

utils.fix_random_seeds()

In [4]:
rel_ext_data_home = os.path.join('data', 'rel_ext_data')

With the following steps, we build up the dataset we'll use for experiments; it unites a corpus and a knowledge base in the way we described in the previous notebook.


In [5]:
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))

In [6]:
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))

In [7]:
dataset = rel_ext.Dataset(corpus, kb)

The following code splits up our data in a way that supports experimentation:


In [8]:
splits = dataset.build_splits()

splits


Out[8]:
{'tiny': Corpus with 3,474 examples; KB with 445 triples,
 'train': Corpus with 249,003 examples; KB with 34,229 triples,
 'dev': Corpus with 79,219 examples; KB with 11,210 triples,
 'all': Corpus with 331,696 examples; KB with 45,884 triples}

Building a classifier

Featurizers

Featurizers are functions which define the feature representation for our model. The primary input to a featurizer will be the KBTriple for which we are generating features. But since our features will be derived from corpus examples containing the entities of the KBTriple, we must also pass in a reference to a Corpus. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.

Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the KBTriple, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples.


In [9]:
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter[word] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter[word] += 1
    return feature_counter

Here's how this featurizer works on a single example:


In [10]:
kbt = kb.kb_triples[0]

kbt


Out[10]:
KBTriple(rel='contains', sbj='Brickfields', obj='Kuala_Lumpur_Sentral_railway_station')

In [11]:
corpus.get_examples_for_entities(kbt.sbj, kbt.obj)[0].middle


Out[11]:
'it was just a quick 10-minute walk to'

In [12]:
simple_bag_of_words_featurizer(kb.kb_triples[0], corpus, Counter())


Out[12]:
Counter({'it': 1,
         'was': 1,
         'just': 1,
         'a': 1,
         'quick': 1,
         '10-minute': 1,
         'walk': 1,
         'to': 2,
         'the': 1})

You can experiment with adding new kinds of features just by implementing additional featurizers, following simple_bag_of_words_featurizer as an example.
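For instance, here is a rough sketch of a featurizer that counts adjacent word pairs (bigrams) in the middles instead of individual words. It uses only the Corpus API already seen above; the function name is just illustrative:

def middle_bigram_featurizer(kbt, corpus, feature_counter):
    # Count adjacent word pairs in the middles, pooling the "forward"
    # and "reverse" examples just as the simple featurizer does.
    for sbj, obj in ((kbt.sbj, kbt.obj), (kbt.obj, kbt.sbj)):
        for ex in corpus.get_examples_for_entities(sbj, obj):
            words = ex.middle.split(' ')
            for left, right in zip(words, words[1:]):
                feature_counter[left + ' ' + right] += 1
    return feature_counter

Passing [simple_bag_of_words_featurizer, middle_bigram_featurizer] to the functions below would combine both feature sets in a single model.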

Now, in order to apply machine learning algorithms such as those provided by sklearn, we need a way to convert datasets of KBTriples into feature matrices. The following steps achieve that:


In [13]:
kbts_by_rel, labels_by_rel = dataset.build_dataset()

featurized = dataset.featurize(kbts_by_rel, featurizers=[simple_bag_of_words_featurizer])

Experiments

Now we need some functions to train models, make predictions, and evaluate the results. We'll start with train_models(). This function takes as arguments a dictionary of data splits, a list of featurizers, the name of the split on which to train (by default, 'train'), and a model factory, a function that initializes an sklearn classifier (by default, a logistic regression classifier). It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation.


In [14]:
train_result = rel_ext.train_models(
    splits, 
    featurizers=[simple_bag_of_words_featurizer])
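If we want to experiment with a different learning algorithm or different hyperparameters, we can supply our own model factory. Here's a minimal sketch, assuming the model_factory keyword argument described above, which swaps in a more strongly regularized logistic regression:

from sklearn.linear_model import LogisticRegression

# model_factory is a zero-argument callable that returns a fresh classifier;
# one copy is trained per relation. (C=0.1 is just an illustrative choice.)
strong_reg_train_result = rel_ext.train_models(
    splits,
    featurizers=[simple_bag_of_words_featurizer],
    model_factory=lambda: LogisticRegression(
        fit_intercept=True, solver='liblinear', C=0.1))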

Next comes predict(). This function takes as arguments a dictionary of data splits, the outputs of train_models(), and the name of the split for which to make predictions. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by relation).


In [15]:
predictions, true_labels = rel_ext.predict(
    splits, train_result, split_name='dev')
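Both dictionaries are keyed by relation name, with parallel lists of labels as values. As a quick sanity check, we can tally the number of positive predictions per relation (a sketch, assuming the labels are booleans):

# Count how many candidate triples were predicted positive for each relation.
for rel in sorted(predictions):
    n_pos = sum(predictions[rel])
    print('{:20s} {:5d} / {:5d}'.format(rel, int(n_pos), len(predictions[rel])))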

Now evaluate_predictions(). This function takes as arguments the parallel dictionaries of predictions and true labels produced by predict(). It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score.


In [16]:
rel_ext.evaluate_predictions(predictions, true_labels)


relation              precision     recall    f-score    support       size
------------------    ---------  ---------  ---------  ---------  ---------
adjoins                   0.832      0.378      0.671        407       7057
author                    0.779      0.525      0.710        657       7307
capital                   0.638      0.294      0.517        126       6776
contains                  0.783      0.608      0.740       4487      11137
film_performance          0.796      0.591      0.745        984       7634
founders                  0.783      0.384      0.648        469       7119
genre                     0.654      0.166      0.412        205       6855
has_sibling               0.865      0.246      0.576        625       7275
has_spouse                0.878      0.342      0.668        754       7404
is_a                      0.731      0.238      0.517        618       7268
nationality               0.555      0.171      0.383        386       7036
parents                   0.862      0.544      0.771        390       7040
place_of_birth            0.637      0.206      0.449        282       6932
place_of_death            0.512      0.100      0.282        209       6859
profession                0.716      0.205      0.477        308       6958
worked_at                 0.688      0.254      0.513        303       6953
------------------    ---------  ---------  ---------  ---------  ---------
macro-average             0.732      0.328      0.567      11210     117610
Out[16]:
0.5674055479292028
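The F0.5-score weights precision twice as heavily as recall, reflecting the fact that, when extending a KB, the correctness of what we add matters more than exhaustive coverage. A quick sketch of the per-relation computation using sklearn (illustrated for the author relation):

from sklearn.metrics import fbeta_score

# F_beta = (1 + beta**2) * (precision * recall) / (beta**2 * precision + recall);
# beta=0.5 favors precision over recall.
print(fbeta_score(true_labels['author'], predictions['author'], beta=0.5))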

Finally, we introduce rel_ext.experiment(), which basically chains together rel_ext.train_models(), rel_ext.predict(), and rel_ext.evaluate_predictions(). For convenience, this function returns the output of rel_ext.train_models() as its result.

Running rel_ext.experiment() in its default configuration will give us a baseline result for machine-learned models.


In [17]:
_ = rel_ext.experiment(
    splits,
    featurizers=[simple_bag_of_words_featurizer])


relation              precision     recall    f-score    support       size
------------------    ---------  ---------  ---------  ---------  ---------
adjoins                   0.832      0.378      0.671        407       7057
author                    0.779      0.525      0.710        657       7307
capital                   0.638      0.294      0.517        126       6776
contains                  0.783      0.608      0.740       4487      11137
film_performance          0.796      0.591      0.745        984       7634
founders                  0.783      0.384      0.648        469       7119
genre                     0.654      0.166      0.412        205       6855
has_sibling               0.865      0.246      0.576        625       7275
has_spouse                0.878      0.342      0.668        754       7404
is_a                      0.731      0.238      0.517        618       7268
nationality               0.555      0.171      0.383        386       7036
parents                   0.862      0.544      0.771        390       7040
place_of_birth            0.637      0.206      0.449        282       6932
place_of_death            0.512      0.100      0.282        209       6859
profession                0.716      0.205      0.477        308       6958
worked_at                 0.688      0.254      0.513        303       6953
------------------    ---------  ---------  ---------  ---------  ---------
macro-average             0.732      0.328      0.567      11210     117610

Considering how vanilla our model is, these results are surprisingly good! We see huge gains for every relation over our top_k_middles_classifier from the previous notebook. This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.

But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down.

Analysis

Examining the trained models

One important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators?


In [18]:
rel_ext.examine_model_weights(train_result)


Highest and lowest feature weights for relation adjoins:

     2.511 Córdoba
     2.467 Taluks
     2.434 Valais
     ..... .....
    -1.143 for
    -1.186 Egypt
    -1.277 America

Highest and lowest feature weights for relation author:

     3.055 author
     3.032 books
     2.342 by
     ..... .....
    -2.002 directed
    -2.019 or
    -2.211 poetry

Highest and lowest feature weights for relation capital:

     3.922 capital
     2.163 especially
     2.155 city
     ..... .....
    -1.238 and
    -1.263 being
    -1.959 borough

Highest and lowest feature weights for relation contains:

     2.768 bordered
     2.716 third-largest
     2.219 tiny
     ..... .....
    -3.502 Midlands
    -3.954 Siege
    -3.969 destroyed

Highest and lowest feature weights for relation film_performance:

     4.004 starring
     3.731 alongside
     3.199 opposite
     ..... .....
    -1.702 then
    -1.840 She
    -1.889 Genghis

Highest and lowest feature weights for relation founders:

     3.677 founded
     3.276 founder
     2.779 label
     ..... .....
    -1.795 William
    -1.850 Griffith
    -1.854 Wilson

Highest and lowest feature weights for relation genre:

     3.092 series
     2.800 game
     2.622 album
     ..... .....
    -1.296 animated
    -1.434 and
    -1.949 at

Highest and lowest feature weights for relation has_sibling:

     5.196 brother
     3.933 sister
     2.747 nephew
     ..... .....
    -1.293 '
    -1.312 from
    -1.437 including

Highest and lowest feature weights for relation has_spouse:

     5.319 wife
     4.652 married
     4.617 husband
     ..... .....
    -1.528 between
    -1.559 MTV
    -1.599 Terri

Highest and lowest feature weights for relation is_a:

     3.182 family
     2.898 philosopher
     2.623 
     ..... .....
    -1.411 now
    -1.441 beans
    -1.618 at

Highest and lowest feature weights for relation nationality:

     2.887 born
     1.933 president
     1.843 caliph
     ..... .....
    -1.467 or
    -1.540 ;
    -1.729 American

Highest and lowest feature weights for relation parents:

     5.108 son
     4.437 father
     4.400 daughter
     ..... .....
    -1.053 a
    -1.070 England
    -1.210 in

Highest and lowest feature weights for relation place_of_birth:

     3.980 born
     2.843 birthplace
     2.702 mayor
     ..... .....
    -1.276 Mughal
    -1.392 or
    -1.426 and

Highest and lowest feature weights for relation place_of_death:

     2.161 assassinated
     2.027 died
     1.837 Germany
     ..... .....
    -1.246 ;
    -1.256 as
    -1.474 Siege

Highest and lowest feature weights for relation profession:

     3.148 
     2.727 American
     2.635 philosopher
     ..... .....
    -1.212 at
    -1.348 in
    -1.986 on

Highest and lowest feature weights for relation worked_at:

     3.107 president
     2.913 head
     2.743 professor
     ..... .....
    -1.134 province
    -1.150 author
    -1.714 or

By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)

The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some other relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the author relation.)

Optional exercise: Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?
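A rough sketch of how one might start, assuming the split Datasets expose .corpus and .kb attributes and that the KB offers a get_triples_for_relation() method; the relation and feature word are just examples drawn from the weights above:

# Which training examples could have made 'Córdoba' a high-weight
# feature for the 'adjoins' relation?
target_rel, target_word = 'adjoins', 'Córdoba'
train_corpus = splits['train'].corpus
train_kb = splits['train'].kb

for kbt in train_kb.get_triples_for_relation(target_rel):
    for sbj, obj in ((kbt.sbj, kbt.obj), (kbt.obj, kbt.sbj)):
        for ex in train_corpus.get_examples_for_entities(sbj, obj):
            if target_word in ex.middle.split(' '):
                print(kbt.sbj, kbt.obj, '::', ex.middle)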

Discovering new relation instances

Another way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?

Because the goal is to discover new relation instances which are true but absent from the KB, we can't evaluate this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.

To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities and sort the results by the probability assigned by the model, in order to find the most likely new instances of each relation.
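Under the hood, scoring a single candidate amounts to featurizing it, vectorizing it with the vectorizer fit at training time, and asking the relevant relation's model for a probability. A rough sketch, assuming train_result exposes 'vectorizer' and 'models' entries as described for train_models() above, and that rel_ext exposes the KBTriple constructor:

# Score one hypothetical candidate for the 'author' relation.
candidate = rel_ext.KBTriple(rel='author', sbj='Iliad', obj='Homer')

counts = simple_bag_of_words_featurizer(candidate, corpus, Counter())
X = train_result['vectorizer'].transform([counts])
# Column 1 is the probability of the positive (True) class.
print(train_result['models']['author'].predict_proba(X)[0, 1])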


In [19]:
rel_ext.find_new_relation_instances(
    dataset,
    featurizers=[simple_bag_of_words_featurizer])


Highest probability examples for relation adjoins:

     1.000 KBTriple(rel='adjoins', sbj='Canada', obj='Vancouver')
     1.000 KBTriple(rel='adjoins', sbj='Vancouver', obj='Canada')
     1.000 KBTriple(rel='adjoins', sbj='Australia', obj='Sydney')
     1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='Australia')
     1.000 KBTriple(rel='adjoins', sbj='Mexico', obj='Atlantic_Ocean')
     1.000 KBTriple(rel='adjoins', sbj='Atlantic_Ocean', obj='Mexico')
     1.000 KBTriple(rel='adjoins', sbj='Dubai', obj='United_Arab_Emirates')
     1.000 KBTriple(rel='adjoins', sbj='United_Arab_Emirates', obj='Dubai')
     1.000 KBTriple(rel='adjoins', sbj='Sydney', obj='New_South_Wales')
     1.000 KBTriple(rel='adjoins', sbj='New_South_Wales', obj='Sydney')

Highest probability examples for relation author:

     1.000 KBTriple(rel='author', sbj='Oliver_Twist', obj='Charles_Dickens')
     1.000 KBTriple(rel='author', sbj='Jane_Austen', obj='Pride_and_Prejudice')
     1.000 KBTriple(rel='author', sbj='Iliad', obj='Homer')
     1.000 KBTriple(rel='author', sbj='Divine_Comedy', obj='Dante_Alighieri')
     1.000 KBTriple(rel='author', sbj='Pride_and_Prejudice', obj='Jane_Austen')
     1.000 KBTriple(rel='author', sbj="Euclid's_Elements", obj='Euclid')
     1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception')
     1.000 KBTriple(rel='author', sbj="Uncle_Tom's_Cabin", obj='Harriet_Beecher_Stowe')
     1.000 KBTriple(rel='author', sbj='Ray_Bradbury', obj='Fahrenheit_451')
     1.000 KBTriple(rel='author', sbj='A_Christmas_Carol', obj='Charles_Dickens')

Highest probability examples for relation capital:

     1.000 KBTriple(rel='capital', sbj='Delhi', obj='India')
     1.000 KBTriple(rel='capital', sbj='Bangladesh', obj='Dhaka')
     1.000 KBTriple(rel='capital', sbj='India', obj='Delhi')
     1.000 KBTriple(rel='capital', sbj='Lucknow', obj='Uttar_Pradesh')
     1.000 KBTriple(rel='capital', sbj='Chengdu', obj='Sichuan')
     1.000 KBTriple(rel='capital', sbj='Dhaka', obj='Bangladesh')
     1.000 KBTriple(rel='capital', sbj='Uttar_Pradesh', obj='Lucknow')
     1.000 KBTriple(rel='capital', sbj='Sichuan', obj='Chengdu')
     1.000 KBTriple(rel='capital', sbj='Bandung', obj='West_Java')
     1.000 KBTriple(rel='capital', sbj='West_Java', obj='Bandung')

Highest probability examples for relation contains:

     1.000 KBTriple(rel='contains', sbj='Delhi', obj='India')
     1.000 KBTriple(rel='contains', sbj='Dubai', obj='United_Arab_Emirates')
     1.000 KBTriple(rel='contains', sbj='Campania', obj='Naples')
     1.000 KBTriple(rel='contains', sbj='India', obj='Uttarakhand')
     1.000 KBTriple(rel='contains', sbj='Bangladesh', obj='Dhaka')
     1.000 KBTriple(rel='contains', sbj='India', obj='Delhi')
     1.000 KBTriple(rel='contains', sbj='Uttarakhand', obj='India')
     1.000 KBTriple(rel='contains', sbj='Australia', obj='Melbourne')
     1.000 KBTriple(rel='contains', sbj='Palawan', obj='Philippines')
     1.000 KBTriple(rel='contains', sbj='Canary_Islands', obj='Tenerife')

Highest probability examples for relation film_performance:

     1.000 KBTriple(rel='film_performance', sbj='Amitabh_Bachchan', obj='Mohabbatein')
     1.000 KBTriple(rel='film_performance', sbj='Mohabbatein', obj='Amitabh_Bachchan')
     1.000 KBTriple(rel='film_performance', sbj='A_Christmas_Carol', obj='Charles_Dickens')
     1.000 KBTriple(rel='film_performance', sbj='Charles_Dickens', obj='A_Christmas_Carol')
     1.000 KBTriple(rel='film_performance', sbj='De-Lovely', obj='Kevin_Kline')
     1.000 KBTriple(rel='film_performance', sbj='Kevin_Kline', obj='De-Lovely')
     1.000 KBTriple(rel='film_performance', sbj='Akshay_Kumar', obj='Sonakshi_Sinha')
     1.000 KBTriple(rel='film_performance', sbj='Sonakshi_Sinha', obj='Akshay_Kumar')
     1.000 KBTriple(rel='film_performance', sbj='Iliad', obj='Homer')
     1.000 KBTriple(rel='film_performance', sbj='Homer', obj='Iliad')

Highest probability examples for relation founders:

     1.000 KBTriple(rel='founders', sbj='Iliad', obj='Homer')
     1.000 KBTriple(rel='founders', sbj='Homer', obj='Iliad')
     1.000 KBTriple(rel='founders', sbj='William_C._Durant', obj='Louis_Chevrolet')
     1.000 KBTriple(rel='founders', sbj='Louis_Chevrolet', obj='William_C._Durant')
     1.000 KBTriple(rel='founders', sbj='Mongol_Empire', obj='Genghis_Khan')
     1.000 KBTriple(rel='founders', sbj='Genghis_Khan', obj='Mongol_Empire')
     1.000 KBTriple(rel='founders', sbj='Elon_Musk', obj='SpaceX')
     1.000 KBTriple(rel='founders', sbj='SpaceX', obj='Elon_Musk')
     1.000 KBTriple(rel='founders', sbj='Marvel_Comics', obj='Stan_Lee')
     1.000 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics')

Highest probability examples for relation genre:

     1.000 KBTriple(rel='genre', sbj='Oliver_Twist', obj='Charles_Dickens')
     1.000 KBTriple(rel='genre', sbj='Charles_Dickens', obj='Oliver_Twist')
     0.999 KBTriple(rel='genre', sbj='Mark_Twain_Tonight', obj='Hal_Holbrook')
     0.999 KBTriple(rel='genre', sbj='Hal_Holbrook', obj='Mark_Twain_Tonight')
     0.997 KBTriple(rel='genre', sbj='The_Dark_Side_of_the_Moon', obj='Pink_Floyd')
     0.997 KBTriple(rel='genre', sbj='Pink_Floyd', obj='The_Dark_Side_of_the_Moon')
     0.991 KBTriple(rel='genre', sbj='Andrew_Garfield', obj='Sam_Raimi')
     0.991 KBTriple(rel='genre', sbj='Sam_Raimi', obj='Andrew_Garfield')
     0.981 KBTriple(rel='genre', sbj='Life_of_Pi', obj='Man_Booker_Prize')
     0.981 KBTriple(rel='genre', sbj='Man_Booker_Prize', obj='Life_of_Pi')

Highest probability examples for relation has_sibling:

     1.000 KBTriple(rel='has_sibling', sbj='Jess_Margera', obj='April_Margera')
     1.000 KBTriple(rel='has_sibling', sbj='April_Margera', obj='Jess_Margera')
     1.000 KBTriple(rel='has_sibling', sbj='Lincoln_Borglum', obj='Gutzon_Borglum')
     1.000 KBTriple(rel='has_sibling', sbj='Gutzon_Borglum', obj='Lincoln_Borglum')
     1.000 KBTriple(rel='has_sibling', sbj='Rufus_Wainwright', obj='Kate_McGarrigle')
     1.000 KBTriple(rel='has_sibling', sbj='Kate_McGarrigle', obj='Rufus_Wainwright')
     1.000 KBTriple(rel='has_sibling', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great')
     1.000 KBTriple(rel='has_sibling', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon')
     1.000 KBTriple(rel='has_sibling', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson')
     1.000 KBTriple(rel='has_sibling', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman')

Highest probability examples for relation has_spouse:

     1.000 KBTriple(rel='has_spouse', sbj='Akhenaten', obj='Tutankhamun')
     1.000 KBTriple(rel='has_spouse', sbj='Tutankhamun', obj='Akhenaten')
     1.000 KBTriple(rel='has_spouse', sbj='United_Artists', obj='Douglas_Fairbanks')
     1.000 KBTriple(rel='has_spouse', sbj='Douglas_Fairbanks', obj='United_Artists')
     1.000 KBTriple(rel='has_spouse', sbj='Ronald_Goldman', obj='Nicole_Brown_Simpson')
     1.000 KBTriple(rel='has_spouse', sbj='Nicole_Brown_Simpson', obj='Ronald_Goldman')
     1.000 KBTriple(rel='has_spouse', sbj='William_C._Durant', obj='Louis_Chevrolet')
     1.000 KBTriple(rel='has_spouse', sbj='Louis_Chevrolet', obj='William_C._Durant')
     1.000 KBTriple(rel='has_spouse', sbj='England', obj='Charles_II_of_England')
     1.000 KBTriple(rel='has_spouse', sbj='Charles_II_of_England', obj='England')

Highest probability examples for relation is_a:

     1.000 KBTriple(rel='is_a', sbj='Canada', obj='Vancouver')
     1.000 KBTriple(rel='is_a', sbj='Felidae', obj='Panthera')
     1.000 KBTriple(rel='is_a', sbj='Panthera', obj='Felidae')
     1.000 KBTriple(rel='is_a', sbj='Vancouver', obj='Canada')
     1.000 KBTriple(rel='is_a', sbj='Phasianidae', obj='Bird')
     1.000 KBTriple(rel='is_a', sbj='Bird', obj='Phasianidae')
     1.000 KBTriple(rel='is_a', sbj='Accipitridae', obj='Bird')
     1.000 KBTriple(rel='is_a', sbj='Bird', obj='Accipitridae')
     1.000 KBTriple(rel='is_a', sbj='Automobile', obj='South_Korea')
     1.000 KBTriple(rel='is_a', sbj='South_Korea', obj='Automobile')

Highest probability examples for relation nationality:

     1.000 KBTriple(rel='nationality', sbj='Titus', obj='Roman_Empire')
     1.000 KBTriple(rel='nationality', sbj='Roman_Empire', obj='Titus')
     1.000 KBTriple(rel='nationality', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great')
     1.000 KBTriple(rel='nationality', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon')
     1.000 KBTriple(rel='nationality', sbj='Mongol_Empire', obj='Genghis_Khan')
     1.000 KBTriple(rel='nationality', sbj='Genghis_Khan', obj='Mongol_Empire')
     1.000 KBTriple(rel='nationality', sbj='Norodom_Sihamoni', obj='Cambodia')
     1.000 KBTriple(rel='nationality', sbj='Cambodia', obj='Norodom_Sihamoni')
     1.000 KBTriple(rel='nationality', sbj='Tamil_Nadu', obj='Ramanathapuram_district')
     1.000 KBTriple(rel='nationality', sbj='Ramanathapuram_district', obj='Tamil_Nadu')

Highest probability examples for relation parents:

     1.000 KBTriple(rel='parents', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great')
     1.000 KBTriple(rel='parents', sbj='Lincoln_Borglum', obj='Gutzon_Borglum')
     1.000 KBTriple(rel='parents', sbj='Gutzon_Borglum', obj='Lincoln_Borglum')
     1.000 KBTriple(rel='parents', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon')
     1.000 KBTriple(rel='parents', sbj='Anne_Boleyn', obj='Thomas_Boleyn,_1st_Earl_of_Wiltshire')
     1.000 KBTriple(rel='parents', sbj='Thomas_Boleyn,_1st_Earl_of_Wiltshire', obj='Anne_Boleyn')
     1.000 KBTriple(rel='parents', sbj='Jess_Margera', obj='April_Margera')
     1.000 KBTriple(rel='parents', sbj='April_Margera', obj='Jess_Margera')
     1.000 KBTriple(rel='parents', sbj='Saddam_Hussein', obj='Uday_Hussein')
     1.000 KBTriple(rel='parents', sbj='Uday_Hussein', obj='Saddam_Hussein')

Highest probability examples for relation place_of_birth:

     1.000 KBTriple(rel='place_of_birth', sbj='Lucknow', obj='Uttar_Pradesh')
     1.000 KBTriple(rel='place_of_birth', sbj='Uttar_Pradesh', obj='Lucknow')
     0.999 KBTriple(rel='place_of_birth', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great')
     0.999 KBTriple(rel='place_of_birth', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon')
     0.999 KBTriple(rel='place_of_birth', sbj='Nepal', obj='Bagmati_Zone')
     0.999 KBTriple(rel='place_of_birth', sbj='Bagmati_Zone', obj='Nepal')
     0.998 KBTriple(rel='place_of_birth', sbj='Chengdu', obj='Sichuan')
     0.998 KBTriple(rel='place_of_birth', sbj='Sichuan', obj='Chengdu')
     0.998 KBTriple(rel='place_of_birth', sbj='San_Antonio', obj='Actor')
     0.998 KBTriple(rel='place_of_birth', sbj='Actor', obj='San_Antonio')

Highest probability examples for relation place_of_death:

     1.000 KBTriple(rel='place_of_death', sbj='Philip_II_of_Macedon', obj='Alexander_the_Great')
     1.000 KBTriple(rel='place_of_death', sbj='Alexander_the_Great', obj='Philip_II_of_Macedon')
     1.000 KBTriple(rel='place_of_death', sbj='Titus', obj='Roman_Empire')
     1.000 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Titus')
     1.000 KBTriple(rel='place_of_death', sbj='Lucknow', obj='Uttar_Pradesh')
     1.000 KBTriple(rel='place_of_death', sbj='Uttar_Pradesh', obj='Lucknow')
     1.000 KBTriple(rel='place_of_death', sbj='Chengdu', obj='Sichuan')
     1.000 KBTriple(rel='place_of_death', sbj='Sichuan', obj='Chengdu')
     1.000 KBTriple(rel='place_of_death', sbj='Roman_Empire', obj='Trajan')
     1.000 KBTriple(rel='place_of_death', sbj='Trajan', obj='Roman_Empire')

Highest probability examples for relation profession:

     1.000 KBTriple(rel='profession', sbj='Canada', obj='Vancouver')
     1.000 KBTriple(rel='profession', sbj='Vancouver', obj='Canada')
     1.000 KBTriple(rel='profession', sbj='Little_Women', obj='Louisa_May_Alcott')
     1.000 KBTriple(rel='profession', sbj='Louisa_May_Alcott', obj='Little_Women')
     0.999 KBTriple(rel='profession', sbj='Aldous_Huxley', obj='Eyeless_in_Gaza')
     0.999 KBTriple(rel='profession', sbj='Eyeless_in_Gaza', obj='Aldous_Huxley')
     0.999 KBTriple(rel='profession', sbj='Jess_Margera', obj='April_Margera')
     0.999 KBTriple(rel='profession', sbj='April_Margera', obj='Jess_Margera')
     0.999 KBTriple(rel='profession', sbj='Actor', obj='Screenwriter')
     0.999 KBTriple(rel='profession', sbj='Screenwriter', obj='Actor')

Highest probability examples for relation worked_at:

     1.000 KBTriple(rel='worked_at', sbj='William_C._Durant', obj='Louis_Chevrolet')
     1.000 KBTriple(rel='worked_at', sbj='Louis_Chevrolet', obj='William_C._Durant')
     1.000 KBTriple(rel='worked_at', sbj='Iliad', obj='Homer')
     1.000 KBTriple(rel='worked_at', sbj='Homer', obj='Iliad')
     1.000 KBTriple(rel='worked_at', sbj='Marvel_Comics', obj='Stan_Lee')
     1.000 KBTriple(rel='worked_at', sbj='Stan_Lee', obj='Marvel_Comics')
     1.000 KBTriple(rel='worked_at', sbj='Mongol_Empire', obj='Genghis_Khan')
     1.000 KBTriple(rel='worked_at', sbj='Genghis_Khan', obj='Mongol_Empire')
     1.000 KBTriple(rel='worked_at', sbj='Comic_book', obj='Marvel_Comics')
     1.000 KBTriple(rel='worked_at', sbj='Marvel_Comics', obj='Comic_book')

There are actually some good discoveries here! The predictions for the author relation seem especially good. Of course, there are also plenty of bad results, and a few that are downright comical. We may hope that as we improve our models and optimize performance in our automatic evaluations, the results we observe in this manual evaluation will improve as well.

Optional exercise: Note that every time we predict that a given relation holds between entities X and Y, we also predict, with equal confidence, that it holds between Y and X. Why? How could we fix this?
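One way to address this would be a direction-aware featurizer, so that the candidates (X, rel, Y) and (Y, rel, X) no longer receive identical feature vectors. A hedged sketch, modeled directly on simple_bag_of_words_featurizer (the prefixes and name are illustrative):

def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
    # Prefix each word with the direction of the example it came from,
    # so that swapping sbj and obj yields a different feature vector.
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter['fwd_' + word] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter['rev_' + word] += 1
    return feature_counter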
