When exploring a large set of documents -- such as Wikipedia, news articles, StackOverflow, etc. -- it can be useful to get a list of related material. To find relevant documents you typically need a notion of similarity between documents and a way to retrieve the documents most similar to a given query document.
In this assignment you will represent documents as word-count and TF-IDF vectors, use nearest neighbor search to retrieve related Wikipedia articles, and compare how the choice of features and distance metric (Euclidean versus cosine) affects the results.
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
As usual, we first import the Python packages that we will need.
In [1]:
import graphlab
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
We will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the Wikipedia article, the name of the person, and the text of the article (in lowercase).
In [2]:
wiki = graphlab.SFrame('people_wiki.gl')
In [3]:
wiki
Out[3]:
As we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in wiki.
In [4]:
wiki['word_count'] = graphlab.text_analytics.count_words(wiki['text'])
In [5]:
wiki
Out[5]:
Let's start by finding the nearest neighbors of the Barack Obama page, using word count vectors to represent the articles and Euclidean distance to measure distance. For this, we will again use a GraphLab Create implementation of nearest neighbor search.
In [6]:
model = graphlab.nearest_neighbors.create(wiki, label='name', features=['word_count'],
method='brute_force', distance='euclidean')
Let's look at the top 10 nearest neighbors by performing the following query:
In [7]:
model.query(wiki[wiki['name']=='Barack Obama'], label='name', k=10)
Out[7]:
All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.
Nearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer and important details.
For instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages:
In [8]:
def top_words(name):
    """
    Get a table of the most frequent words in the given person's wikipedia page.
    """
    row = wiki[wiki['name'] == name]
    word_count_table = row[['word_count']].stack('word_count', new_column_name=['word', 'count'])
    return word_count_table.sort('count', ascending=False)
In [9]:
obama_words = top_words('Barack Obama')
obama_words
Out[9]:
In [10]:
barrio_words = top_words('Francisco Barrio')
barrio_words
Out[10]:
Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. So far we have sorted the words in Obama's and Barrio's articles by their frequencies. We will now use an SFrame operation known as join. The join operation is very useful for manipulating data: it lets you combine the content of two tables using a shared column (in this case, the word column). See the documentation for more details.
For instance, running
obama_words.join(barrio_words, on='word')
will extract the rows from both tables that correspond to the common words.
In [11]:
combined_words = obama_words.join(barrio_words, on='word')
combined_words
Out[11]:
Since both tables contained a column named count, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to indicate which is which. By inspection, we see that the first column (count) is for Obama and the second (count.1) is for Barrio.
In [12]:
combined_words = combined_words.rename({'count':'Obama', 'count.1':'Barrio'})
combined_words
Out[12]:
Note. The join operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget ascending=False to display the largest counts first.
In [13]:
combined_words.sort('Obama', ascending=False)
Out[13]:
Quiz Question. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
Hint:
- Complete the has_top_words function below to accomplish the task.
- Convert the list of the 5 common words into a set using set(common_words), where common_words is a Python list. See this link if you're curious about Python sets.
- Extract the words of an article with the keys() method of its word count dictionary, and use the issubset() method to check if all 5 words are among the keys. (A small sketch of this approach follows the list.)
- Then apply the has_top_words function on every row of the SFrame.
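For reference, here is a minimal sketch of the issubset approach described in the hint (it is not the graded cell below); common_words is passed in explicitly rather than captured from the enclosing scope:

def has_top_words_sketch(word_count_vector, common_words):
    # common_words: a Python set containing the 5 most frequent shared words
    unique_words = set(word_count_vector.keys())  # all distinct words in this article
    return common_words.issubset(unique_words)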
In [22]:
top_5_words = combined_words.sort('Obama', ascending=False)[0:5]['word']
In [28]:
common_words = set(top_5_words)
def has_top_words(word_count_vector):
    for word in common_words:
        if word not in word_count_vector:
            return False
    return True # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
wiki['has_top_words'].sum() # YOUR CODE HERE
Out[28]:
Checkpoint. Check your has_top_words function on two random articles:
In [29]:
print 'Output from your function:', has_top_words(wiki[32]['word_count'])
print 'Correct output: True'
print 'Also check the length of unique_words. It should be 167'
In [30]:
print 'Output from your function:', has_top_words(wiki[33]['word_count'])
print 'Correct output: False'
print 'Also check the length of unique_words. It should be 188'
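The checkpoint messages also refer to the number of unique words in each test article. If your solution builds such a set from the word count dictionary, you can verify its size directly; a minimal sketch (row indices and expected counts taken from the checkpoint above):

# Sanity check: the number of distinct words in each test article should match
# the values quoted in the checkpoint (167 for row 32, 188 for row 33).
print 'Unique words in row 32:', len(set(wiki[32]['word_count'].keys()))
print 'Unique words in row 33:', len(set(wiki[33]['word_count'].keys()))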
Quiz Question. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?
Hint: To compute the Euclidean distance between two dictionaries, use graphlab.toolkits.distances.euclidean. Refer to this link for usage.
In [42]:
obama_word_count = wiki[wiki['name']=='Barack Obama']['word_count'][0]
bush_word_count = wiki[wiki['name']=='George W. Bush']['word_count'][0]
biden_word_count = wiki[wiki['name']=='Joe Biden']['word_count'][0]
from graphlab.toolkits.distances import euclidean
print euclidean(obama_word_count, biden_word_count)
print euclidean(bush_word_count, biden_word_count)
print euclidean(obama_word_count, bush_word_count)
Quiz Question. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page.
In [50]:
bush_words = top_words('George W. Bush')
obama_bush = bush_words.join(obama_words, on='word').rename({'count': 'bush', 'count.1': 'obama'})
obama_bush.sort('obama', ascending=False)[0:10]['word']
Out[50]:
Note. Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words.
Much of the perceived commonality between Obama and Barrio was due to occurrences of extremely frequent words, such as "the", "and", and "his". So nearest neighbor search sometimes recommends plausible results for the wrong reasons.
To retrieve articles that are more relevant, we should focus on rare words that don't appear in every article. TF-IDF (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama.
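To build intuition first, here is the textbook TF-IDF formula applied to a tiny toy corpus. The counts below are invented purely for illustration, and GraphLab Create's implementation may differ in details such as smoothing.

# Toy illustration of the textbook formula:
#   tf_idf(w, d) = count(w, d) * log(N / doc_freq(w))
# where N is the number of documents and doc_freq(w) counts documents containing w.
import math

toy_corpus = [
    {'the': 3, 'president': 2, 'law': 1},
    {'the': 4, 'guitar': 2},
    {'the': 2, 'president': 1, 'guitar': 1},
]
N = len(toy_corpus)

def doc_freq(word):
    return sum(1 for doc in toy_corpus if word in doc)

def toy_tf_idf(doc):
    return {w: count * math.log(float(N) / doc_freq(w)) for w, count in doc.items()}

print toy_tf_idf(toy_corpus[0])
# 'the' appears in every document, so its weight is 0; the rarer words
# 'president' and 'law' receive larger weights.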
In [51]:
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['word_count'])
In [52]:
model_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='euclidean')
In [53]:
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
Out[53]:
Let's determine whether this list makes sense.
Clearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vectors for Obama's and Schiliro's pages. Notice that the TF-IDF representation assigns a weight to each word, capturing the relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article.
In [54]:
def top_words_tf_idf(name):
    row = wiki[wiki['name'] == name]
    word_count_table = row[['tf_idf']].stack('tf_idf', new_column_name=['word', 'weight'])
    return word_count_table.sort('weight', ascending=False)
In [55]:
obama_tf_idf = top_words_tf_idf('Barack Obama')
obama_tf_idf
Out[55]:
In [56]:
schiliro_tf_idf = top_words_tf_idf('Phil Schiliro')
schiliro_tf_idf
Out[56]:
Using the join operation we learned earlier, try your hand at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.
In [59]:
common_tf_idf = obama_tf_idf.join(schiliro_tf_idf, on='word').rename({'weight':'obama', 'weight.1':'schiliro'}).sort('obama', ascending=False)
common_tf_idf
Out[59]:
The first 10 words should say: Obama, law, democratic, Senate, presidential, president, policy, states, office, 2011.
Quiz Question. Among the words that appear in both Barack Obama and Phil Schiliro, take the 5 that have largest weights in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
In [72]:
common_words = set(common_tf_idf['word'][0:5]) # YOUR CODE HERE
def has_top_words(word_count_vector):
    unique_words = set(word_count_vector.keys())
    return common_words <= unique_words # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
wiki['has_top_words'].sum() # YOUR CODE HERE
Out[72]:
Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words.
You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of model_tf_idf. Let's find out why. First, compute the distance between the TF-IDF features of Obama and Biden.
Quiz Question. Compute the Euclidean distance between the TF-IDF features of Obama and Biden. Hint: When using a Boolean filter on an SFrame/SArray, use index 0 to access the first match.
In [73]:
obama_tfidf = wiki[wiki['name']=='Barack Obama']['tf_idf'][0]
bush_tfidf = wiki[wiki['name']=='George W. Bush']['tf_idf'][0]
biden_tfidf = wiki[wiki['name']=='Joe Biden']['tf_idf'][0]
from graphlab.toolkits.distances import euclidean
print euclidean(obama_tfidf, biden_tfidf)
print euclidean(bush_tfidf, biden_tfidf)
print euclidean(obama_tfidf, bush_tfidf)
The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability:
In [74]:
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
Out[74]:
But one may wonder: is Biden's article really that different from Obama's, more so than, say, Schiliro's? It turns out that when we compute nearest neighbors using Euclidean distance, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths of the 100 nearest neighbors of Obama's page.
In [75]:
def compute_length(row):
    return len(row['text'].split(' '))

wiki['length'] = wiki.apply(compute_length)
In [76]:
nearest_neighbors_euclidean = model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_euclidean = nearest_neighbors_euclidean.join(wiki[['name', 'length']], on={'reference_label':'name'})
In [77]:
nearest_neighbors_euclidean.sort('rank')
Out[77]:
To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents.
In [78]:
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([0, 1000, 0, 0.04])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Relative to the rest of Wikipedia, the nearest neighbors of Obama are overwhelmingly short, most of them shorter than 300 words. This bias towards short articles is not appropriate in this application, as there is no reason to favor short articles over long ones (they are all Wikipedia articles, after all). Many of the Wikipedia articles are 300 words or more, and both Obama and Biden are over 300 words long.
Note: For the interest of computation time, the dataset given here contains excerpts of the articles rather than full text. For instance, the actual Wikipedia article about Obama is around 25000 words. Do not be surprised by the low numbers shown in the histogram.
Note: Both word-count and TF-IDF features are proportional to word frequencies. While TF-IDF penalizes very common words, longer articles still tend to have TF-IDF vectors of larger magnitude simply because they contain more words.
To remove this bias, we turn to cosine distances: $$ d(\mathbf{x},\mathbf{y}) = 1 - \frac{\mathbf{x}^T\mathbf{y}}{\|\mathbf{x}\| \|\mathbf{y}\|} $$ Cosine distances let us compare word distributions of two articles of varying lengths.
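As a quick sanity check with toy vectors (not taken from the dataset), scaling a document's word counts leaves its cosine distance to the original at zero, while the Euclidean distance grows with document length:

# Toy vectors: same word distribution, one document twice as long as the other.
import numpy as np

def euclidean_dist(x, y):
    return np.linalg.norm(x - y)

def cosine_dist(x, y):
    return 1. - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

short_doc = np.array([1., 2., 3.])
long_doc = 2 * short_doc

print euclidean_dist(short_doc, long_doc)  # nonzero, and grows with length
print cosine_dist(short_doc, long_doc)     # 0 (up to floating-point error)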
Let us train a new nearest neighbor model, this time with cosine distances. We then repeat the search for Obama's 100 nearest neighbors.
In [79]:
model2_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
In [80]:
nearest_neighbors_cosine = model2_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_cosine = nearest_neighbors_cosine.join(wiki[['name', 'length']], on={'reference_label':'name'})
In [81]:
nearest_neighbors_cosine.sort('rank')
Out[81]:
From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor, and Hillary Clinton also appears on the list. This is a far more plausible set of nearest neighbors for Barack Obama.
Let's make a plot to better visualize the effect of having used cosine distance in place of Euclidean on our TF-IDF vectors.
In [82]:
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.hist(nearest_neighbors_cosine['length'], 50, color='b', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (cosine)', zorder=11, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([0, 1000, 0, 0.04])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
Indeed, the 100 nearest neighbors found with cosine distance are spread across the range of document lengths, rather than concentrated among short articles as they were with Euclidean distance.
Moral of the story: when deciding on features and distance measures, check whether they produce results that make sense for your particular application.
Happily ever after? Not so fast. Cosine distances ignore all document lengths, which may be great in certain situations but not in others. For instance, consider the following (admittedly contrived) example.
+--------------------------------------------------------+
| +--------+ |
| One that shall not be named | Follow | |
| @username +--------+ |
| |
| Democratic governments control law in response to |
| popular act. |
| |
| 8:05 AM - 16 May 2016 |
| |
| Reply Retweet (1,332) Like (300) |
| |
+--------------------------------------------------------+
How similar is this tweet to Barack Obama's Wikipedia article? Let's transform the tweet into TF-IDF features, using an encoder fit to the Wikipedia dataset. (That is, let's treat this tweet as an article in our Wikipedia dataset and see what happens.)
In [83]:
sf = graphlab.SFrame({'text': ['democratic governments control law in response to popular act']})
sf['word_count'] = graphlab.text_analytics.count_words(sf['text'])
encoder = graphlab.feature_engineering.TFIDF(features=['word_count'], output_column_prefix='tf_idf')
encoder.fit(wiki)
sf = encoder.transform(sf)
sf
Out[83]:
Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences.
In [84]:
tweet_tf_idf = sf[0]['tf_idf.word_count']
tweet_tf_idf
Out[84]:
In [86]:
obama = wiki[wiki['name'] == 'Barack Obama']
obama
Out[86]:
Now, compute the cosine distance between the Barack Obama article and this tweet:
In [87]:
obama_tf_idf = obama[0]['tf_idf']
graphlab.toolkits.distances.cosine(obama_tf_idf, tweet_tf_idf)
Out[87]:
Let's compare this distance to the distances between the Barack Obama article and its 10 nearest Wikipedia neighbors:
In [88]:
model2_tf_idf.query(obama, label='name', k=10)
Out[88]:
With cosine distances, the tweet is "nearer" to Barack Obama than everyone else except Joe Biden! This is probably not something we want: if someone is reading the Barack Obama Wikipedia page, would you want to recommend that they read this tweet? Completely ignoring article length produced nonsensical results. In practice, it is common to enforce maximum or minimum document lengths. After all, when someone is reading a long article from The Atlantic, you wouldn't recommend a tweet to them.
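One simple way to act on that advice is to drop very short documents before building the search model. The sketch below is only illustrative: the 100-word cutoff and the model3_tf_idf name are arbitrary choices, not part of the assignment.

# Hypothetical minimum-length filter (the 100-word threshold is illustrative only):
# keep longer articles, then rebuild the cosine-distance model on that subset.
long_enough = wiki[wiki['length'] >= 100]
model3_tf_idf = graphlab.nearest_neighbors.create(long_enough, label='name', features=['tf_idf'],
                                                  method='brute_force', distance='cosine')
model3_tf_idf.query(long_enough[long_enough['name'] == 'Barack Obama'], label='name', k=10)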