Entity Resolution

Entity Resolution is the process of linking records from different data sources that describe the same real-world entity.

We will use data files from the metric-learning project (stored under the data/ directory):

  • google.txt, the Google Products dataset
  • amazon.txt, the Amazon dataset
  • googleSmall.txt, 200 records sampled from the Google data
  • amazonSmall.txt, 200 records sampled from the Amazon data
  • stopwords.txt, a list of common English words

EXERCISE: Load the datasets directly into RDDs using sc.textFile. Use 4 partitions for each dataset.

The answer should be:


Loaded googleSmall dataset with size 200
Loaded amazonSmall dataset with size 200
Loaded google dataset with size 3226
Loaded amazon dataset with size 1363

In [1]:
googleSmallRDD = sc.textFile('data/googleSmall.txt', 4)          
print "Loaded googleSmall dataset with size %d" % googleSmallRDD.count()

amazonSmallRDD = sc.textFile('data/amazonSmall.txt', 4)          
print "Loaded amazonSmall dataset with size %d" % amazonSmallRDD.count()

googleRDD = sc.textFile('data/google.txt', 4)          
print "Loaded google dataset with size %d" % googleRDD.count()

amazonRDD = sc.textFile('data/amazon.txt', 4)          
print "Loaded amazon dataset with size %d" % amazonRDD.count()


Loaded googleSmall dataset with size 200
Loaded amazonSmall dataset with size 200
Loaded google dataset with size 3226
Loaded amazon dataset with size 1363

Let's examine some of the lines in the RDDs.


In [6]:
print "This is what lines from the google dataset look like:"
print " "

for line in googleSmallRDD.take(3):
    fields = line.split(';')
    print "ID: %s" % fields[0]
    print "CONTENT: %s" % fields[1]
    print " "

print "This is what lines from the amazon dataset look like:"
print " "

for line in amazonSmallRDD.take(3):
    fields = line.split(';')
    print "ID: %s" % fields[0]
    print "CONTENT: %s" % fields[1]
    print " "


This is what lines from the google dataset look like:
 
ID: http://www.google.com/base/feeds/snippets/11448761432933644608
CONTENT: spanish vocabulary builder "expand your vocabulary! contains fun lessons that both teach and entertain you'll quickly find yourself mastering new terms. includes games and more!" 
 
ID: http://www.google.com/base/feeds/snippets/8175198959985911471
CONTENT: topics presents: museums of world "5 cd-rom set. step behind the velvet rope to examine some of the most treasured collections of antiquities art and inventions. includes the following the louvre - virtual visit 25 rooms in full screen interactive video detailed map of the louvre ..." 
 
ID: http://www.google.com/base/feeds/snippets/18445827127704822533
CONTENT: sierrahome hse hallmark card studio special edition win 98 me 2000 xp "hallmark card studio special edition (win 98 me 2000 xp)" "sierrahome"
 
This is what lines from the amazon dataset look like:
 
ID: b000jz4hqo
CONTENT: clickart 950 000 - premier image pack (dvd-rom)  "broderbund"
 
ID: b0006zf55o
CONTENT: ca international - arcserve lap/desktop oem 30pk "oem arcserve backup v11.1 win 30u for laptops and desktops" "computer associates"
 
ID: b00004tkvy
CONTENT: noah's ark activity center (jewel case ages 3-8)  "victory multimedia"
 

In what follows we will use the ID as a key, and the content will be the text used to match products. The similarity measure will be the cosine similarity between bag-of-words representations. In the next sections we will implement this similarity step by step.

First of all, we need to build a function to transform a string into a list of terms (tokens).

EXERCISE: Complete the definition of the 'tokeniza' function, and do not forget:

  • to lower case the text
  • to remove punctuation signs
  • to eliminate empty tokens

You may want to use regular expressions and apply them with re.split(); at regex101.com you can explore regular expressions interactively.

The answer should be:


This is the tokenized text:

['the', 'bag', 'of', 'words', 'model', 'is', 'a', 'simplifying', 'representation', 'used', 'in', 'natural', 'language', 'processing', 'and', 'information', 'retrieval', 'ir', 'in', 'this', 'model', 'a', 'text', 'such', 'as', 'a', 'sentence', 'or', 'a', 'document', 'is', 'represented', 'as', 'the', 'bag', 'multiset', 'of', 'its', 'words', 'disregarding', 'grammar', 'and', 'even', 'word', 'order', 'but', 'keeping', 'multiplicity']

In [7]:
import re
texto = ("The bag-of-words model is a simplifying representation used in natural language processing and information "
         "retrieval (IR).  In this model, a text (such as a sentence or a document) is represented as the bag (multiset) of "
         "its words, disregarding grammar and even word order but keeping multiplicity")

def tokeniza(string):
    RE = r'\W+' # any run of non-word characters [^a-zA-Z0-9_]; splitting on it removes punctuation signs
    tokens = <COMPLETAR>
    tokens = [t for t in tokens if <COMPLETAR>]
    return tokens

print "This is the tokenized text:\n"
print tokeniza(texto)


This is the tokenized text:

['the', 'bag', 'of', 'words', 'model', 'is', 'a', 'simplifying', 'representation', 'used', 'in', 'natural', 'language', 'processing', 'and', 'information', 'retrieval', 'ir', 'in', 'this', 'model', 'a', 'text', 'such', 'as', 'a', 'sentence', 'or', 'a', 'document', 'is', 'represented', 'as', 'the', 'bag', 'multiset', 'of', 'its', 'words', 'disregarding', 'grammar', 'and', 'even', 'word', 'order', 'but', 'keeping', 'multiplicity']
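
One way to fill in the blanks (a sketch; any completion producing the same tokens is equally valid) is to lower-case the string before splitting, and then drop the empty tokens that re.split produces at the edges:

import re

def tokeniza(string):
    RE = r'\W+' # any run of non-word characters acts as a separator
    tokens = re.split(RE, string.lower())   # lower-case first, then split on non-word runs
    tokens = [t for t in tokens if t != ''] # eliminate empty tokens
    return tokens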

It is important to eliminate stopwords: common words that contribute little to the meaning of a document (e.g., "the", "a", "is", "to").

EXERCISE: Load the file "stopwords.txt" and use its contents to improve the tokeniza function.

The answer should be:


This is the tokenized text without stopwords:

['bag', 'words', 'model', 'simplifying', 'representation', 'used', 'natural', 'language', 'processing', 'information', 'retrieval', 'ir', 'model', 'text', 'sentence', 'document', 'represented', 'bag', 'multiset', 'words', 'disregarding', 'grammar', 'even', 'word', 'order', 'keeping', 'multiplicity']

In [8]:
stopfile = 'data/stopwords.txt'
stopwords = list(set(sc.textFile(stopfile).collect()))
print 'These are the English stopwords:\n'
for sw in stopwords:
    print sw, 
print " \n\n"

def tokeniza(string, stopwords):
    RE = r'\W+' # match any non-word character [^a-zA-Z0-9_]
    tokens = <COMPLETAR>
    tokens = [t for t in tokens if <COMPLETAR>]
    return tokens

print "This is the tokenized text without stopwords:\n"
print tokeniza(texto, stopwords)


These are the English stopwords:

all just being over both through yourselves its before with had should to only under ours has do them his very they not during now him nor did these t each where because doing theirs some are our ourselves out what for below does above between she be we after here hers by on about of against s or own into yourself down your from her whom there been few too themselves was until more himself that but off herself than those he me myself this up will while can were my and then is in am it an as itself at have further their if again no when same any how other which you who most such why a don i having so the yours once  


This is the tokenized text without stopwords:

['bag', 'words', 'model', 'simplifying', 'representation', 'used', 'natural', 'language', 'processing', 'information', 'retrieval', 'ir', 'model', 'text', 'sentence', 'document', 'represented', 'bag', 'multiset', 'words', 'disregarding', 'grammar', 'even', 'word', 'order', 'keeping', 'multiplicity']
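
A possible completion (a sketch): the only change with respect to the previous version is that the filter now also discards tokens contained in the stopwords list:

import re

def tokeniza(string, stopwords):
    RE = r'\W+' # match any non-word character [^a-zA-Z0-9_]
    tokens = re.split(RE, string.lower())
    tokens = [t for t in tokens if t != '' and t not in stopwords] # drop empties and stopwords
    return tokens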

Tokenizing the RDDs

We want to modify the structure of the RDDs so that every item now has the format (ID, [list of tokens]).

EXERCISE: Use the 'tokeniza' function to tokenize the amazonSmall RDD, and repeat for the other datasets. It is best to define a 'tokenizeRDD' function that takes an RDD (plus the stopwords) as arguments and returns the transformed RDD.

The answer should be:


These are the first 5 elements of the processed amazonSmallRDD:

('b000jz4hqo', ['clickart', '950', '000', 'premier', 'image', 'pack', 'dvd', 'rom', 'broderbund'])

('b0006zf55o', ['ca', 'international', 'arcserve', 'lap', 'desktop', 'oem', '30pk', 'oem', 'arcserve', 'backup', 'v11', '1', 'win', '30u', 'laptops', 'desktops', 'computer', 'associates'])

('b00004tkvy', ['noah', 'ark', 'activity', 'center', 'jewel', 'case', 'ages', '3', '8', 'victory', 'multimedia'])

('b000g80lqo', ['peachtree', 'sage', 'premium', 'accounting', 'nonprofits', '2007', 'peachtree', 'premium', 'accounting', 'nonprofits', '2007', 'affordable', 'easy', 'use', 'accounting', 'solution', 'provides', 'donor', 'grantor', 'management', 're', 'like', 'nonprofit', 'organizations', 're', 'constantly', 'striving', 'maximize', 'every', 'dollar', 'annual', 'operating', 'budget', 'financial', 'reporting', 'programs', 'funds', 'advanced', 'operational', 'reporting', 'rock', 'solid', 'core', 'accounting', 'features', 'made', 'peachtree', 'choice', 'hundreds', 'thousands', 'small', 'businesses', 'result', 'accounting', 'solution', 'tailor', 'made', 'challenges', 'operating', 'nonprofit', 'organization', 'keep', 'audit', 'trail', 'record', 'report', 'changes', 'made', 'transactions', 'improve', 'data', 'integrity', 'prior', 'period', 'locking', 'archive', 'organization', 'data', 'snap', 'shots', 'data', 'closed', 'year', 'set', 'individual', 'user', 'profiles', 'password', 'protection', 'peachtree', 'restore', 'wizard', 'restores', 'backed', 'data', 'files', 'plus', 'web', 'transactions', 'customized', 'forms', 'includes', 'standard', 'accounting', 'features', 'general', 'ledger', 'accounts', 'receivable', 'accounts', 'payable'])

('b0006se5bq', ['singing', 'coach', 'unlimited', 'singing', 'coach', 'unlimited', 'electronic', 'learning', 'products', 'win', 'nt', '2000', 'xp', 'carry', 'tune', 'technologies'])

In [9]:
def tokenizeRDD(RDD, stopwords):
    tokenizedRDD = (RDD
                         .<COMPLETAR>
                         .cache()
                     )
    return tokenizedRDD

amazonSmallRDDtokenized = tokenizeRDD(amazonSmallRDD, stopwords)
print "Finished with amazonSmallRDD."
amazonRDDtokenized = tokenizeRDD(amazonRDD, stopwords)
print "Finished with amazonRDD."
googleSmallRDDtokenized = tokenizeRDD(googleSmallRDD, stopwords)
print "Finished with googleSmallRDD."
googleRDDtokenized = tokenizeRDD(googleRDD, stopwords)
print "Finished with googleRDD.\n"

print "These are the first 5 elements of the processed amazonSmallRDD:\n"
elements = amazonSmallRDDtokenized.take(5)
for l in elements:
    print l
    print " "


Finished with amazonSmallRDD.
Finished with amazonRDD.
Finished with googleSmallRDD.
Finished with googleRDD.

These are the first 5 elements of the processed amazonSmallRDD:

(u'b000jz4hqo', [u'clickart', u'950', u'000', u'premier', u'image', u'pack', u'dvd', u'rom', u'broderbund'])
 
(u'b0006zf55o', [u'ca', u'international', u'arcserve', u'lap', u'desktop', u'oem', u'30pk', u'oem', u'arcserve', u'backup', u'v11', u'1', u'win', u'30u', u'laptops', u'desktops', u'computer', u'associates'])
 
(u'b00004tkvy', [u'noah', u'ark', u'activity', u'center', u'jewel', u'case', u'ages', u'3', u'8', u'victory', u'multimedia'])
 
(u'b000g80lqo', [u'peachtree', u'sage', u'premium', u'accounting', u'nonprofits', u'2007', u'peachtree', u'premium', u'accounting', u'nonprofits', u'2007', u'affordable', u'easy', u'use', u'accounting', u'solution', u'provides', u'donor', u'grantor', u'management', u're', u'like', u'nonprofit', u'organizations', u're', u'constantly', u'striving', u'maximize', u'every', u'dollar', u'annual', u'operating', u'budget', u'financial', u'reporting', u'programs', u'funds', u'advanced', u'operational', u'reporting', u'rock', u'solid', u'core', u'accounting', u'features', u'made', u'peachtree', u'choice', u'hundreds', u'thousands', u'small', u'businesses', u'result', u'accounting', u'solution', u'tailor', u'made', u'challenges', u'operating', u'nonprofit', u'organization', u'keep', u'audit', u'trail', u'record', u'report', u'changes', u'made', u'transactions', u'improve', u'data', u'integrity', u'prior', u'period', u'locking', u'archive', u'organization', u'data', u'snap', u'shots', u'data', u'closed', u'year', u'set', u'individual', u'user', u'profiles', u'password', u'protection', u'peachtree', u'restore', u'wizard', u'restores', u'backed', u'data', u'files', u'plus', u'web', u'transactions', u'customized', u'forms', u'includes', u'standard', u'accounting', u'features', u'general', u'ledger', u'accounts', u'receivable', u'accounts', u'payable'])
 
(u'b0006se5bq', [u'singing', u'coach', u'unlimited', u'singing', u'coach', u'unlimited', u'electronic', u'learning', u'products', u'win', u'nt', u'2000', u'xp', u'carry', u'tune', u'technologies'])
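
One possible completion (a sketch, assuming every line follows the 'ID;content' format shown earlier): split each line on ';' and tokenize the content field:

def tokenizeRDD(RDD, stopwords):
    tokenizedRDD = (RDD
                         .map(lambda line: line.split(';'))  # 'ID;content' -> [ID, content]
                         .map(lambda fields: (fields[0], tokeniza(fields[1], stopwords)))
                         .cache()
                     )
    return tokenizedRDD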
 

Counting tokens: we want to know the vocabulary size of each dataset.

EXERCISE: Count the number of tokens in every dataset. Also count the number of UNIQUE tokens in every dataset.

The answer should be:


amazonSmall has 14052 tokens.
amazon has 133267 tokens.
googleSmall has 5710 tokens.
google has 98588 tokens.

amazonSmall has 3700 unique tokens.
amazon has 11505 unique tokens.
googleSmall has 2305 unique tokens.
google has 11176 unique tokens.

In [10]:
NtokensamazonSmall = amazonSmallRDDtokenized.<COMPLETAR>
print "amazonSmall has %d tokens." % NtokensamazonSmall

Ntokensamazon = amazonRDDtokenized.<COMPLETAR>
print "amazon has %d tokens." % Ntokensamazon

NtokensgoogleSmall = googleSmallRDDtokenized.<COMPLETAR>
print "googleSmall has %d tokens." % NtokensgoogleSmall

Ntokensgoogle = googleRDDtokenized.<COMPLETAR>
print "google has %d tokens." % Ntokensgoogle

print " "

NtokensuniqueamazonSmall = len(amazonSmallRDDtokenized.<COMPLETAR>
print "amazonSmall has %d unique tokens." % NtokensuniqueamazonSmall

Ntokensuniqueamazon = len(amazonRDDtokenized.<COMPLETAR>
print "amazon has %d unique tokens." % Ntokensuniqueamazon

NtokensuniquegoogleSmall = len(googleSmallRDDtokenized.<COMPLETAR>
print "googleSmall has %d unique tokens." % NtokensuniquegoogleSmall

Ntokensuniquegoogle = len(googleRDDtokenized.<COMPLETAR>
print "google has %d unique tokens." % Ntokensuniquegoogle


amazonSmall has 14052 tokens.
amazon has 133267 tokens.
googleSmall has 5710 tokens.
google has 98588 tokens.
 
amazonSmall has 3700 unique tokens.
amazon has 11505 unique tokens.
googleSmall has 2305 unique tokens.
google has 11176 unique tokens.
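
A possible completion for one dataset (a sketch; the other three follow the same pattern): sum the per-record token counts for the total, and use flatMap plus distinct for the unique count:

NtokensamazonSmall = amazonSmallRDDtokenized.map(lambda x: len(x[1])).sum()
print "amazonSmall has %d tokens." % NtokensamazonSmall

NtokensuniqueamazonSmall = len(amazonSmallRDDtokenized
                                   .flatMap(lambda x: x[1]) # one element per token occurrence
                                   .distinct()
                                   .collect())
print "amazonSmall has %d unique tokens." % NtokensuniqueamazonSmall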

Largest and smallest items in the amazon dataset.

EXERCISE: Build an 'amazonCountRDD' that stores (ID, [Tokens], Number of tokens). Print the largest content (the record with the largest number of tokens) and the smallest content (the one with the smallest number of tokens).

The answer should be:


amazon smallest item, with length 2, is:
[('b000hlt5g2', ['monopoly', 'encore'], 2)]

amazon largest item, with length 1520, is:
[('b0007lw22q', ['apple', 'ilife', '06', 'family', 'pack', 'mac', 'dvd', 'older', ....

In [11]:
amazonCountRDD = amazonRDDtokenized.map(lambda x: (<COMPLETAR>, <COMPLETAR>, <COMPLETAR>)).cache()

smallestItem = amazonCountRDD.takeOrdered(1, lambda x: x[2])
largestItem = amazonCountRDD.takeOrdered(1, lambda x: -x[2])

print "amazon smallest item, with length %d, is:" % smallestItem[0][2]
print smallestItem
print " "

print "amazon largest item, with length %d, is:" % largestItem[0][2]
print largestItem
print " "


amazon smallest item, with length 2, is:
[(u'b000hlt5g2', [u'monopoly', u'encore'], 2)]
 
amazon largest item, with length 1520, is:
[(u'b0007lw22q', [u'apple', u'ilife', u'06', u'family', u'pack', u'mac', u'dvd', u'older', u'version', u'ilife', u'06', u'easiest', u'way', u'make', u'every', u'bit', u'digital', u'life', u'use', u'mac', u'collect', u'organize', u'edit', u'various', u'elements', u'transform', u'mouth', u'watering', u'masterpieces', u'apple', u'designed', u'templates', u'share', u'magic', u'moments', u'beautiful', u'books', u'colorful', u'calendars', u'dazzling', u'dvds', u'perfect', u'podcasts', u'attractive', u'online', u'journals', u'starring', u'family', u'pack', u'lets', u'install', u'ilife', u'06', u'five', u'apple', u'computers', u'household', u'easier', u'ever', u'edit', u'photos', u'perfection', u'photos', u'one', u'place', u'iphoto', u'6', u'rebuilt', u'blazing', u'performance', u'iphoto', u'makes', u'sharing', u'photos', u'faster', u'simpler', u'cooler', u'ever', u'also', u'adds', u'eye', u'opening', u'features', u'ones', u'already', u'love', u'including', u'photocasting', u'support', u'250', u'000', u'photos', u'easy', u'publishing', u'web', u'special', u'effects', u'new', u'custom', u'cards', u'calendars', u'spread', u'smiles', u'far', u'wide', u'lifetime', u'photos', u'fingertips', u'life', u'one', u'big', u'photo', u'opportunity', u'explains', u'photo', u'library', u'getting', u'bigger', u'every', u'day', u'well', u'good', u'news', u'iphoto', u'supports', u'250', u'000', u'images', u'means', u'confidently', u'shoot', u'thousand', u'photos', u'per', u'week', u'next', u'20', u'years', u'better', u'get', u'cracking', u'navigating', u'larger', u'library', u'breeze', u'move', u'scroll', u'bar', u'new', u'see', u'scroll', u'guide', u'appears', u'show', u'photos', u'rolls', u'currently', u'displayed', u'photocasting', u'fantastic', u'new', u'way', u'share', u'imagine', u'sending', u'album', u'favorite', u'photos', u'family', u'change', u'automatically', u'computers', u'update', u'photocasting', u'amazingly', u'easy', u'create', u'photocast', u'album', u'iphoto', u'publish', u'mac', u'password', u'protected', u'wish', u'cousin', u'cindy', u'subscribes', u'll', u'see', u'iphoto', u'library', u'see', u'full', u'resolution', u'photos', u'used', u'desktop', u'pictures', u'printed', u'albums', u'cards', u'calendars', u'whenever', u'add', u'subtract', u'photocast', u'album', u'change', u'even', u'people', u'pcs', u'enjoy', u'iphoto', u'photocast', u'takes', u'rss', u'news', u'reader', u'subscribe', u'dazzle', u'calendars', u'greeting', u'cards', u'books', u'send', u'beautiful', u'printed', u'book', u'special', u'photos', u'friend', u'put', u'kid', u'happy', u'face', u'cover', u'custom', u'card', u'announce', u'birthday', u'party', u'create', u'stylized', u'personalized', u'calendar', u'rivals', u'ones', u'see', u'local', u'mall', u'make', u'things', u'many', u'many', u'typical', u'iphoto', u'ease', u'choose', u'photo', u'album', u'click', u'apple', u'designed', u'template', u'let', u'iphoto', u'work', u'magic', u'add', u'captions', u'fine', u'tune', u'layout', u'click', u'order', u'professionally', u'printed', u'masterwork', u'delivered', u'right', u'door', u'website', u'life', u'one', u'thing', u'share', u'photos', u'internet', u'something', u'else', u'share', u'wonderfully', u'designed', u'personal', u'website', u'thanks', u'newest', u'member', u'ilife', u'family', u'iweb', u'create', u'beautiful', u'new', u'photo', u'page', u'website', u'minutes', u'heck', u'type', u'clever', u'captions', u'quickly', u'enough', u'maybe', u'even', u'seconds', u'simply', u'export', u'iphoto', u'album', u'iweb', u'create', 
u'new', u'photo', u'grid', u'drag', u'photos', u'rearrange', u'needed', u'add', u'pithy', u'words', u'publish', u'whoosh', u'film', u'way', u'done', u'easily', u'imovie', u'hd', u'6', u'score', u'movies', u'powerful', u'audio', u'tools', u'imovie', u'hd', u'6', u'riveting', u'performances', u'major', u'effects', u'inspired', u'directing', u'amazing', u'imovie', u'changed', u'way', u'people', u'look', u'home', u'movies', u'cinematic', u'tools', u'offer', u'imovie', u'makes', u'even', u'comfortable', u'director', u'chair', u'editor', u'chair', u'special', u'effects', u'guy', u'chair', u'well', u'get', u'idea', u'imovie', u'fastest', u'easiest', u'way', u'turn', u'home', u'movies', u'dazzling', u'hollywood', u'style', u'hits', u'instant', u'theme', u'instant', u'classic', u'imovie', u'themes', u'give', u'moviemaking', u'power', u'never', u'imagined', u'click', u'one', u'fun', u'begins', u'theme', u'contains', u'collection', u'professionally', u'designed', u'scenes', u'give', u'movie', u'personality', u'start', u'finish', u'including', u'video', u'graphic', u'overlays', u'advanced', u'transitions', u'drag', u'drop', u'movie', u'clips', u'photos', u'scene', u'drop', u'zones', u'type', u'titles', u'imovie', u'rest', u'get', u'quality', u'feature', u'film', u'without', u'cost', u'overruns', u'showing', u'major', u'video', u'effects', u'll', u'love', u'new', u'video', u'effects', u'made', u'possible', u'mac', u'os', u'x', u'core', u'video', u'll', u'also', u'love', u'fact', u'preview', u'results', u'video', u'effect', u'choices', u'real', u'time', u'size', u'right', u'main', u'window', u'imovie', u'rendering', u'background', u'experiment', u'heart', u'content', u'dive', u'right', u'creative', u'options', u'without', u'delay', u'sounds', u'cinema', u'great', u'movies', u'sound', u'amazing', u'look', u'imovie', u'comes', u'complete', u'sound', u'studio', u'built', u'summoning', u'power', u'mac', u'os', u'x', u'core', u'audio', u'imovie', u'offers', u'eight', u'new', u'audio', u'effects', u'including', u'noise', u'reduction', u'perfect', u'squelching', u'noise', u'common', u'home', u'movies', u'reverb', u'pitch', u'change', u'handy', u'graphic', u'equalizer', u'ilife', u'media', u'browser', u'full', u'access', u'original', u'garageband', u'songs', u'sound', u'video', u'podcasts', u'blogs', u'iweb', u'use', u'imovie', u'share', u'friends', u'family', u'world', u'imovie', u'working', u'hand', u'hand', u'iweb', u'makes', u'easy', u'publish', u'video', u'websites', u'blogs', u'even', u'use', u'imovie', u'create', u'video', u'podcasts', u'complete', u'chapter', u'markers', u'live', u'urls', u'using', u'iweb', u'submit', u'video', u'podcast', u'itunes', u'podcast', u'directory', u'seen', u'subscribed', u'everyone', u'simple', u'powerful', u'idvd', u'interface', u'choose', u'stunning', u'menu', u'templates', u'idvd', u'6', u'rent', u'someone', u'else', u'masterpiece', u'create', u'hollywood', u'style', u'home', u'movies', u'multimedia', u'wedding', u'albums', u'professional', u'slideshow', u'portfolios', u'idvd', u'6', u'helps', u'put', u'dvd', u'ordinary', u'dvd', u'mind', u'jaw', u'dropping', u'widescreen', u'dvd', u'coordinated', u'menus', u'ambient', u'audio', u'dvd', u'thoroughly', u'professional', u'polish', u'dvd', u'captivating', u'make', u'onto', u'everyone', u'must', u'see', u'list', u'magic', u'idvd', u'idvd', u'always', u'made', u'easy', u'create', u'beautifully', u'designed', u'dvds', u'beyond', u'easy', u'magic', u'idvd', u'feature', u'choose', u'theme', u'select', u'movies', u'photos', 
u'want', u'include', u'idvd', u'automatically', u'creates', u'complete', u'dvd', u'unified', u'design', u'start', u'finish', u'including', u'menu', u'screens', u'movies', u'chapter', u'menus', u'slideshows', u'see', u'believe', u'want', u'join', u'magic', u'use', u'magic', u'idvd', u'starting', u'point', u'edit', u'create', u'widescreen', u'dvds', u'get', u'gorgeous', u'new', u'widescreen', u'tv', u'idvd', u'6', u'author', u'dvds', u'movies', u'photo', u'slideshows', u'stunning', u'widescreen', u'format', u'even', u'include', u'content', u'sd', u'hd', u'video', u'sources', u'idvd', u'converts', u'everything', u'automatically', u'dvds', u'play', u'back', u'beautifully', u'choose', u'new', u'idvd', u'themes', u'idvd', u'6', u'features', u'10', u'new', u'apple', u'designed', u'themes', u'idvd', u'five', u'made', u'match', u'imovie', u'themes', u'available', u'widescreen', u'16', u'9', u'standard', u'4', u'3', u'formats', u'theme', u'includes', u'family', u'three', u'coordinated', u'menus', u'main', u'menu', u'chapter', u'menu', u'scene', u'selection', u'extras', u'menu', u'slideshows', u'content', u'talent', u'burn', u're', u'ready', u'burn', u'dvd', u'idvd', u'ready', u'even', u're', u'using', u'third', u'party', u'dvd', u'burner', u'idvd', u'built', u'support', u'wide', u'variety', u'dvd', u'media', u'formats', u'including', u'dvd', u'r', u'dvd', u'rw', u'dvd', u'r', u'dvd', u'rw', u'dvd', u'r', u'dl', u'options', u'ever', u'garageband', u'lets', u'make', u'music', u'like', u'pro', u'mac', u'create', u'podcasts', u'make', u'sound', u'like', u'professional', u'host', u'score', u'imovie', u'creations', u'garageband', u'3', u'best', u'way', u'record', u'music', u'mac', u'best', u'way', u'record', u'podcasts', u'podcasting', u'garageband', u'3', u'puts', u'control', u'room', u'full', u'featured', u'radio', u'station', u'new', u'iweb', u'integration', u'gets', u'voice', u'internet', u'minutes', u'podcast', u'artwork', u'track', u'add', u'podcast', u'artwork', u'track', u'dragging', u'images', u'ilife', u'media', u'browser', u'drag', u'title', u'card', u'name', u'podcast', u'picture', u'drag', u'different', u'images', u'chapter', u'marker', u'podcast', u'listeners', u'also', u'see', u'visual', u'cues', u'position', u'images', u'artwork', u'track', u'correspond', u'vocal', u'track', u'raving', u'amazing', u'unsigned', u'band', u'saw', u'last', u'night', u'drag', u'photo', u'gig', u'right', u'iphoto', u'library', u'sound', u'effects', u'jingles', u'give', u'podcast', u'professional', u'polish', u'adding', u'sound', u'effects', u'jingles', u'garageband', u'library', u'200', u'podcast', u'sounds', u'browse', u'200', u'sound', u'effects', u'including', u'radio', u'style', u'stingers', u'sounds', u'people', u'animals', u'machines', u'drag', u'podcast', u'sync', u'vocal', u'track', u'add', u'musical', u'accompaniment', u'podcast', u'browse', u'garageband', u'library', u'100', u'jingles', u'drag', u'podcast', u'7', u'15', u'30', u'second', u'snippets', u'add', u'sound', u'effects', u'jingles', u'post', u'production', u'trigger', u'live', u'recording', u'podcast', u'radio', u'engineer', u'garageband', u'3', u'includes', u'features', u'function', u'like', u'personal', u'podcast', u'radio', u'engineer', u'create', u'podcasts', u'make', u'sound', u'like', u'professional', u'host', u'built', u'speech', u'enhancer', u'optimizes', u'sound', u'mac', u'gender', u'vocal', u'range', u'improving', u'sound', u'voice', u'simulating', u'professional', u'microphone', u'even', u're', u'using', u'one', u'dynamic', 
u'ducking', u'effect', u'automatically', u'reduces', u'music', u'volume', u'speak', u'listeners', u'always', u'hear', u'talk', u'tunes', u'ichat', u'interview', u'recording', u'use', u'ichat', u'garageband', u'record', u'talk', u'show', u'style', u'podcast', u'time', u'takes', u'carry', u'friendly', u'chat', u'even', u'guests', u'side', u'world', u'garageband', u'lets', u'record', u'remote', u'guests', u'audio', u'video', u'ichat', u'conference', u'start', u'chatting', u'garageband', u'simultaneously', u'records', u'audio', u'one', u'track', u'guest', u'complete', u'buddy', u'name', u'icon', u'know', u'saying', u're', u'using', u'isight', u'cameras', u'record', u'action', u'garageband', u'takes', u'photo', u'snapshot', u'guest', u'every', u'time', u'speaks', u'one', u'click', u'iweb', u'publishing', u'finished', u'recording', u'podcast', u'time', u'get', u'internet', u'garageband', u'iweb', u'garageband', u'send', u'podcast', u'iweb', u'create', u'new', u'podcast', u'series', u'add', u'existing', u'series', u'click', u'publish', u'get', u'podcast', u'web', u'via', u'mac', u'account', u'easy', u'iweb', u'even', u'lets', u'submit', u'podcast', u'itunes', u'music', u'store', u'attract', u'new', u'fans', u'imovie', u'scoring', u'new', u'video', u'track', u'garageband', u'makes', u'easy', u'add', u'original', u'music', u'score', u'movies', u'worry', u'musical', u'talent', u'lack', u'thereof', u'use', u'garageband', u'included', u'loops', u'try', u'combination', u'loops', u'software', u'instruments', u'previous', u'audio', u'recordings', u'created', u'even', u'use', u'garageband', u'add', u'cinematic', u'sound', u'effects', u'footsteps', u'creaking', u'doors', u're', u'ready', u'save', u'scored', u'imovie', u'project', u'quicktime', u'video', u'send', u'idvd', u'burning', u'publish', u'internet', u'via', u'iweb', u'internet', u'calling', u'answer', u'iweb', u'choose', u'beautiful', u'website', u'templates', u'publish', u'blog', u'quickly', u'easily', u'iweb', u'internet', u'calling', u'answer', u'use', u'iweb', u'create', u'websites', u'blogs', u'complete', u'podcasts', u'photos', u'movies', u'get', u'online', u'fast', u'drag', u'drop', u'design', u'using', u'choice', u'web', u'templates', u'publish', u'live', u'mac', u'account', u'apple', u'designed', u'templates', u'let', u'iweb', u'help', u'build', u'beautiful', u'website', u'minutes', u'using', u'apple', u'designed', u'templates', u'choose', u'website', u'theme', u'fits', u'style', u'theme', u'offers', u'page', u'templates', u'photo', u'album', u'blog', u'podcast', u'movie', u'pages', u'll', u'always', u'perfect', u'place', u'content', u'use', u'ilife', u'media', u'browser', u'drag', u'photos', u'movies', u'podcasts', u'simply', u'type', u'placeholder', u'text', u'page', u'template', u'click', u'publish', u'mac', u'ilife', u'media', u'browser', u'every', u'website', u'needs', u'content', u'podcast', u'page', u'needs', u'audio', u'photo', u'page', u'needs', u'images', u'blog', u'needs', u'links', u'favorite', u'music', u'iweb', u'needs', u'ilife', u'media', u'browser', u'using', u'media', u'browser', u'access', u'ilife', u'content', u'photos', u'video', u'audio', u'without', u'leaving', u'iweb', u'drag', u'podcast', u'song', u'recorded', u'garageband', u'earlier', u'today', u'iphoto', u'album', u'vacation', u'latest', u'imovie', u'project', u'whatever', u'want', u'share', u'll', u'find', u'ilife', u'media', u'browser', u'blogging', u'use', u'iweb', u'start', u'weblog', u'add', u'new', u'entries', u'easily', u'writing', u'email', u'choose', 
u'blog', u'template', u'type', u'text', u'drag', u'photos', u'ilife', u'media', u'browser', u'iweb', u'takes', u'care', u'everything', u'else', u'setting', u'navigation', u'blog', u'creating', u'summary', u'page', u'adding', u'entry', u'archive', u'iweb', u'also', u'handles', u'rss', u'feed', u'blog', u'anyone', u'subscribe', u're', u'done', u'adding', u'entry', u'one', u'click', u'publishes', u'blog', u'via', u'mac', u'podcasting', u'comes', u'time', u'take', u'podcasts', u'live', u'iweb', u'gives', u'simple', u'stylish', u'way', u'either', u'send', u'podcast', u'iweb', u'garageband', u'start', u'iweb', u'podcast', u'page', u'template', u'drag', u'podcast', u'ilife', u'media', u'browser', u'type', u'placeholder', u'text', u'add', u'brief', u'description', u'podcast', u'click', u'publish', u'internet', u'using', u'mac', u'account', u'iweb', u'takes', u'care', u'rss', u'feed', u'podcast', u'lets', u'submit', u'podcasts', u'itunes', u'music', u'store', u'anyone', u'listen', u'subscribe', u'one', u'click', u'mac', u'publishing', u'sharing', u'website', u'world', u'one', u'click', u'simple', u'iweb', u'mac', u'membership', u'publish', u'entire', u'website', u'complete', u'blog', u'entries', u'photo', u'albums', u'links', u'photocasts', u'movies', u'podcasts', u'internet', u'single', u'click', u'configuration', u'hassle', u'click', u'publish', u'iweb', u'automatically', u'publishes', u'entire', u'site', u'internet', u'anyone', u'web', u'browser', u'see', u'iweb', u'even', u'lets', u'announce', u'website', u'via', u'email', u'friends', u'family', u'stay', u'loop', u'ilife', u'06', u'family', u'pack', u'information', u'family', u'pack', u'software', u'license', u'agreement', u'allows', u'install', u'use', u'one', u'copy', u'apple', u'software', u'maximum', u'five', u'apple', u'labeled', u'computers', u'time', u'long', u'computers', u'located', u'household', u'used', u'persons', u'occupy', u'household', u'household', u'apple', u'means', u'person', u'persons', u'sharing', u'housing', u'unit', u'home', u'apartment', u'mobile', u'home', u'condominium', u'license', u'extend', u'students', u'reside', u'separate', u'campus', u'location', u'business', u'commercial', u'users', u'apple', u'computer'], 1520)]
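
A possible completion (a sketch): each element already has the form (ID, [tokens]), so we only need to append the length of the token list:

amazonCountRDD = amazonRDDtokenized.map(lambda x: (x[0], x[1], len(x[1]))).cache()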
 

Transforming the list of tokens into a sparse structure (dictionary). This will allow us to efficiently compute scalar products between sparse vectors.

EXERCISE: Complete the definition of the "compute_tf" function, which takes as input a list of tokens and produces a dictionary {key: value} with one key for every unique element in the list. The value for every token is the number of times that token appears in the list, divided by the total number of tokens. This measurement is known as TF, term frequency.

The answer should be:


{'sentence': 0.037037037037037035, 'text': 0.037037037037037035, 'ir': 0.037037037037037035, 'multiset': 0.037037037037037035, 'even': 0.037037037037037035, 'information': 0.037037037037037035, 'document': 0.037037037037037035, 'used': 0.037037037037037035, 'processing': 0.037037037037037035, 'grammar': 0.037037037037037035, 'words': 0.07407407407407407, 'represented': 0.037037037037037035, 'word': 0.037037037037037035, 'retrieval': 0.037037037037037035, 'keeping': 0.037037037037037035, 'natural': 0.037037037037037035, 'language': 0.037037037037037035, 'multiplicity': 0.037037037037037035, 'disregarding': 0.037037037037037035, 'bag': 0.07407407407407407, 'simplifying': 0.037037037037037035, 'representation': 0.037037037037037035, 'model': 0.07407407407407407, 'order': 0.037037037037037035}

In [12]:
# TODO: Replace <COMPLETAR> with appropriate code
def compute_tf(tokens):
    tf_dict = {}
    for tf in tokens:
        <COMPLETAR>
            
    Total = len(<COMPLETAR>)
    
    for key in tf_dict.keys():
        <COMPLETAR> 
    return tf_dict

print compute_tf(tokeniza(texto, stopwords))


{'sentence': 0.037037037037037035, 'text': 0.037037037037037035, 'ir': 0.037037037037037035, 'multiset': 0.037037037037037035, 'even': 0.037037037037037035, 'information': 0.037037037037037035, 'document': 0.037037037037037035, 'used': 0.037037037037037035, 'processing': 0.037037037037037035, 'grammar': 0.037037037037037035, 'words': 0.07407407407407407, 'represented': 0.037037037037037035, 'word': 0.037037037037037035, 'retrieval': 0.037037037037037035, 'keeping': 0.037037037037037035, 'natural': 0.037037037037037035, 'language': 0.037037037037037035, 'multiplicity': 0.037037037037037035, 'disregarding': 0.037037037037037035, 'bag': 0.07407407407407407, 'simplifying': 0.037037037037037035, 'representation': 0.037037037037037035, 'model': 0.07407407407407407, 'order': 0.037037037037037035}
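
One way to fill in the blanks (a sketch): count occurrences in a first pass, then normalize by the total number of tokens (the float conversion avoids Python 2 integer division):

def compute_tf(tokens):
    tf_dict = {}
    for tf in tokens:
        tf_dict[tf] = tf_dict.get(tf, 0) + 1 # raw counts
    Total = len(tokens)
    for key in tf_dict.keys():
        tf_dict[key] = float(tf_dict[key]) / Total # counts -> frequencies
    return tf_dict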

Let us now implement the cosine similarity between two items using the TF measure. It is defined as:

$$ cosim(a,b) = \frac{a \cdot b}{\|a\| \|b\|} = \frac{\sum a_i b_i}{\sqrt{\sum a_i^2} \sqrt{\sum b_i^2}} $$

EXERCISE: The formula shows that we need a dot product function (between two sparse dictionaries) and a norm function. Complete the definitions of the "dotprod", "norm" and "cosim" functions below.

The answer should be:


The dot product between tf_dict1 and tf_dict2 is 0.055556

The norm of tf_dict1 is 0.212762

The norm of tf_dict2 is 0.408248

The cosim of a vector with itself must be one: 1.000000

The cosim of a vector with itself must be one: 1.000000

The cosim between two vectors does not depend on the order: 0.639602 = 0.639602

In [13]:
import math

def dotprod(tf_dict1, tf_dict2):
    <COMPLETAR>
    return dotProd

def norm(tf_dict1):
    <COMPLETAR>
    return norma

def cosim(tf_dict1, tf_dict2):
    <COMPLETAR>
    return cs


tf_dict1 = compute_tf(tokeniza(texto, stopwords))
tf_dict2 = compute_tf(tokeniza(texto[0:60], stopwords))

print "The dot product between tf_dict1 and tf_dict2 is %f\n" % dotprod(tf_dict1, tf_dict2) 
print "The norm of tf_dict1 is %f\n" % norm(tf_dict1) 
print "The norm of tf_dict2 is %f\n" % norm(tf_dict2) 
print "The cosim of a vector with itself must be one: %f\n" % cosim(tf_dict1, tf_dict1) 
print "The cosim of a vector with itself must be one: %f\n" % cosim(tf_dict2, tf_dict2) 
print "The cosim between two vectors does not depend on the order: %f = %f\n" % (cosim(tf_dict1, tf_dict2), cosim(tf_dict2, tf_dict1))


The dot product between tf_dict1 and tf_dict2 is 0.055556

The norm of tf_dict1 is 0.212762

The norm of tf_dict2 is 0.408248

The cosim of a vector with itself must be one: 1.000000

The cosim of a vector with itself must be one: 1.000000

The cosim between two vectors does not depend on the order: 0.639602 = 0.639602
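
A possible completion (a sketch): only the keys present in both dictionaries contribute to the dot product, and the norm is the square root of the sum of squared values:

import math

def dotprod(tf_dict1, tf_dict2):
    # keys missing from either dictionary contribute zero to the sum
    dotProd = sum(tf_dict1[k] * tf_dict2[k] for k in tf_dict1 if k in tf_dict2)
    return dotProd

def norm(tf_dict1):
    norma = math.sqrt(sum(v * v for v in tf_dict1.values()))
    return norma

def cosim(tf_dict1, tf_dict2):
    cs = dotprod(tf_dict1, tf_dict2) / (norm(tf_dict1) * norm(tf_dict2))
    return cs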

We will use these functions to process the datasets and find the most similar pair of records between amazon and google.

EXERCISE: Generate a new "allPairsRDD" that contains all possible combinations between elements of googleSmallRDDtokenized and amazonSmallRDDtokenized. You may want to use the "cartesian" transformation. Then transform "allPairsRDD" to obtain a new RDD with the format (googleID, amazonID, cosim), and finally print the pair with the largest cosine similarity.

The answer should be:


The allPairsRDD has 40000 elements.

This is one of the elements in allPairsRDD:

[(('http://www.google.com/base/feeds/snippets/11448761432933644608', ['spanish', 'vocabulary', 'builder', 'expand', 'vocabulary', 'contains', 'fun', 'lessons', 'teach', 'entertain', 'll', 'quickly', 'find', 'mastering', 'new', 'terms', 'includes', 'games']), ('b000jz4hqo', ['clickart', '950', '000', 'premier', 'image', 'pack', 'dvd', 'rom', 'broderbund']))]


In [14]:
allPairsRDD = (<COMPLETAR>
              .cartesian(<COMPLETAR>)
              .cache())

print "The allPairsRDD has %d elements.\n" % allPairsRDD.count()

print "This is one of the elements in allPairsRDD:\n"
print allPairsRDD.take(1)


The allPairsRDD has 40000 elements.

This is one of the elements in allPairsRDD:

[((u'http://www.google.com/base/feeds/snippets/11448761432933644608', [u'spanish', u'vocabulary', u'builder', u'expand', u'vocabulary', u'contains', u'fun', u'lessons', u'teach', u'entertain', u'll', u'quickly', u'find', u'mastering', u'new', u'terms', u'includes', u'games']), (u'b000jz4hqo', [u'clickart', u'950', u'000', u'premier', u'image', u'pack', u'dvd', u'rom', u'broderbund']))]
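
One way to fill in the blanks (a sketch): the cartesian product of the two 200-element RDDs yields the 40000 pairs:

allPairsRDD = (googleSmallRDDtokenized
              .cartesian(amazonSmallRDDtokenized)
              .cache())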

EXERCISE: Transform "allPairsRDD" to obtain a new "cosimRDD" with the format (googleID, amazonID, cosim).

The answer should be:


This is the first element in cosimRDD:

[('http://www.google.com/base/feeds/snippets/11448761432933644608', 'b000jz4hqo', 0.0)]

In [15]:
cosimRDD = (allPairsRDD
              .map(lambda x: (<COMPLETAR>, <COMPLETAR>, <COMPLETAR>))
              .cache())

print "This is the first element in cosimRDD:\n"
print cosimRDD.take(1)


This is the first element in cosimRDD:

[(u'http://www.google.com/base/feeds/snippets/11448761432933644608', u'b000jz4hqo', 0.0)]
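
A possible completion (a sketch): each pair has the form ((googleID, googleTokens), (amazonID, amazonTokens)), so we keep the two IDs and apply cosim to the TF dictionaries of the two token lists:

cosimRDD = (allPairsRDD
              .map(lambda x: (x[0][0], x[1][0],
                              cosim(compute_tf(x[0][1]), compute_tf(x[1][1]))))
              .cache())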

EXERCISE: Now, find the element with the largest similarity.

The answer should be:


This is the element in cosimRDD with the largest similarity:

[('http://www.google.com/base/feeds/snippets/18411875162562199123', 'b000j4k804', 0.9712858623572642)]

In [16]:
mostSimilarPair = (cosimRDD
              .takeOrdered(1, lambda x: <COMPLETAR>)
              )

print "This is the element in cosimRDD with the largest similarity:\n"
print mostSimilarPair


This is the element in cosimRDD with the largest similarity:

[(u'http://www.google.com/base/feeds/snippets/18411875162562199123', u'b000j4k804', 0.9712858623572642)]
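
One way to fill in the blank (a sketch): takeOrdered sorts in ascending order, so negating the similarity returns the largest value first:

mostSimilarPair = (cosimRDD
              .takeOrdered(1, lambda x: -x[2])
              )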

EXERCISE: As a final step, we will print the amazon and google contents corresponding to that element, to check whether they are really similar.

The answer should be:



The google ID is: http://www.google.com/base/feeds/snippets/18411875162562199123

The amazon ID is: b000j4k804

The google content is: topics entertainment 40248 instant immersion spanish audio book audio book "instant immersion spanish (audio book) (audio book)" "topics entertainment"

The amazon content is: instant immersion spanish (audio book) "instant immersion spanish (audio book) (audio book)" "topics entertainment"


In [17]:
googleID = <COMPLETAR>
print "The google ID is: %s\n" % googleID
amazonID = <COMPLETAR>
print "The amazon ID is: %s\n" % amazonID

googleContent = (googleSmallRDD
                         .map(lambda x: x.split(';'))
                         .filter(lambda x: <COMPLETAR> == googleID)
                         .map(lambda x: x[1])
                         .first()
                )

amazonContent = (amazonSmallRDD
                         .map(lambda x: x.split(';'))
                         .filter(lambda x: <COMPLETAR> == amazonID)
                         .map(lambda x: x[1])
                         .first()
                )

print "The google content is:\n"
print googleContent
print "\nThe amazon content is:\n"
print amazonContent


The google ID is: http://www.google.com/base/feeds/snippets/18411875162562199123

The amazon ID is: b000j4k804

The google content is:

topics entertainment 40248 instant immersion spanish audio book audio book "instant immersion spanish (audio book) (audio book)" "topics entertainment"

The amazon content is:

instant immersion spanish (audio book) "instant immersion spanish (audio book) (audio book)" "topics entertainment"
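
A possible completion (a sketch): the IDs come from the triple found above, and each filter compares the ID field of the split 'ID;content' lines against them. amazonContent is obtained the same way, filtering amazonSmallRDD on amazonID:

googleID = mostSimilarPair[0][0]
amazonID = mostSimilarPair[0][1]

googleContent = (googleSmallRDD
                         .map(lambda x: x.split(';'))
                         .filter(lambda x: x[0] == googleID) # x[0] is the ID field
                         .map(lambda x: x[1])
                         .first()
                )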

EXERCISE: Repeat the computations with the full datasets. Open localhost:4040 (the Spark UI) in a new browser tab to follow the progress of the tasks.

The answer should be:


The allPairsRDD has 4397038 elements.

This is one of the elements in allPairsRDD:

[(('http://www.google.com/base/feeds/snippets/11125907881740407428', ['learning', 'quickbooks', '2007', 'learning', 'quickbooks', '2007', 'intuit']), ('b000jz4hqo', ['clickart', '950', '000', 'premier', 'image', 'pack', 'dvd', 'rom', 'broderbund']))]

This is the first element in cosimRDD:

[('http://www.google.com/base/feeds/snippets/11125907881740407428', 'b000jz4hqo', 0.0)]

This is the element in cosimRDD with the largest similarity:

[('http://www.google.com/base/feeds/snippets/17521446718236049500', 'b000v9yxj4', 1.0)]

The google ID is: http://www.google.com/base/feeds/snippets/17521446718236049500

The amazon ID is: b000v9yxj4

The google content is: nero inc nero 8 ultra edition  

The amazon content is: nero 8 ultra edition  "nero inc."

In [18]:
allPairsRDD = (<COMPLETAR>
              .cache())

print "The allPairsRDD has %d elements.\n" % allPairsRDD.count()

print "This is one of the elements in allPairsRDD:\n"
print allPairsRDD.take(1)

cosimRDD = (<COMPLETAR>
              .cache())

print "\nThis is the first element in cosimRDD:\n"
print cosimRDD.take(1)

mostSimilarPair = (cosimRDD
              .<COMPLETAR>
              )

print "\nThis is the element in cosimRDD with the largest similarity:\n"
print mostSimilarPair

googleID = <COMPLETAR>
print "The google ID is: %s\n" % googleID
amazonID = <COMPLETAR>
print "The amazon ID is: %s\n" % amazonID

googleContent = (googleRDD
                         .<COMPLETAR>
                )

amazonContent = (amazonRDD
                         .<COMPLETAR>
                )

print "The google content is: %s\n" % googleContent
print "The amazon content is: %s\n" % amazonContent


The allPairsRDD has 4397038 elements.

This is one of the elements in allPairsRDD:

[((u'http://www.google.com/base/feeds/snippets/11125907881740407428', [u'learning', u'quickbooks', u'2007', u'learning', u'quickbooks', u'2007', u'intuit']), (u'b000jz4hqo', [u'clickart', u'950', u'000', u'premier', u'image', u'pack', u'dvd', u'rom', u'broderbund']))]

This is the first element in cosimRDD:

[(u'http://www.google.com/base/feeds/snippets/11125907881740407428', u'b000jz4hqo', 0.0)]

This is the element in cosimRDD with the largest similarity:

[(u'http://www.google.com/base/feeds/snippets/17521446718236049500', u'b000v9yxj4', 1.0)]
The google ID is: http://www.google.com/base/feeds/snippets/17521446718236049500

The amazon ID is: b000v9yxj4

The google content is: nero inc nero 8 ultra edition  

The amazon content is: nero 8 ultra edition  "nero inc."
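
A possible completion of the full-dataset run (a sketch; it reuses the small-dataset code with the large RDDs):

allPairsRDD = (googleRDDtokenized
              .cartesian(amazonRDDtokenized)
              .cache())

cosimRDD = (allPairsRDD
              .map(lambda x: (x[0][0], x[1][0],
                              cosim(compute_tf(x[0][1]), compute_tf(x[1][1]))))
              .cache())

mostSimilarPair = cosimRDD.takeOrdered(1, lambda x: -x[2])

googleID = mostSimilarPair[0][0]
amazonID = mostSimilarPair[0][1]

googleContent = (googleRDD
                         .map(lambda x: x.split(';'))
                         .filter(lambda x: x[0] == googleID)
                         .map(lambda x: x[1])
                         .first()
                )

amazonContent = (amazonRDD
                         .map(lambda x: x.split(';'))
                         .filter(lambda x: x[0] == amazonID)
                         .map(lambda x: x[1])
                         .first()
                )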


In [ ]: