Doc2Vec Tutorial on the Lee Dataset


In [1]:
import gensim
import os
import collections
import smart_open
import random

What is it?

Doc2Vec is an NLP tool for representing documents as vectors and is a generalization of the Word2Vec method. This tutorial will serve as an introduction to Doc2Vec and present ways to train and assess a Doc2Vec model.

Getting Started

To get going, we'll need to have a set of documents to train our doc2vec model. In theory, a document could be anything from a short 140-character tweet to a single paragraph (e.g., a journal article abstract), a news article, or a book. In NLP parlance a collection or set of documents is often referred to as a corpus.

For this tutorial, we'll be training our model using the Lee Background Corpus included in gensim. This corpus contains 314 documents selected from the Australian Broadcasting Corporation’s news mail service, which provides text e-mails of headline stories and covers a number of broad topics.

And we'll test our model by eye using the much shorter Lee Corpus, which contains 50 documents.


In [2]:
# Set file names for train and test data
test_data_dir = os.path.join(gensim.__path__[0], 'test', 'test_data')
lee_train_file = os.path.join(test_data_dir, 'lee_background.cor')
lee_test_file = os.path.join(test_data_dir, 'lee.cor')

Define a Function to Read and Preprocess Text

Below, we define a function to open the train/test file (with Latin-1 encoding), read the file line-by-line, pre-process each line using a simple gensim pre-processing tool (i.e., tokenize the text into individual words, remove punctuation, set to lowercase, etc.), and return a list of words. Note that, for a given file (aka corpus), each line constitutes a single document, and the length of each line (i.e., document) can vary. Also, to train the model, we'll need to associate a tag/number with each document of the training corpus. In our case, the tag is simply the zero-based line number.


In [3]:
def read_corpus(fname, tokens_only=False):
    with smart_open.smart_open(fname, encoding="iso-8859-1") as f:
        for i, line in enumerate(f):
            if tokens_only:
                yield gensim.utils.simple_preprocess(line)
            else:
                # For training data, add tags
                yield gensim.models.doc2vec.TaggedDocument(gensim.utils.simple_preprocess(line), [i])

In [4]:
train_corpus = list(read_corpus(lee_train_file))
test_corpus = list(read_corpus(lee_test_file, tokens_only=True))

Let's take a look at the training corpus:


In [5]:
train_corpus[:2]


Out[5]:
[TaggedDocument(words=['hundreds', 'of', 'people', 'have', 'been', 'forced', 'to', 'vacate', 'their', 'homes', 'in', 'the', 'southern', 'highlands', 'of', 'new', 'south', 'wales', 'as', 'strong', 'winds', 'today', 'pushed', 'huge', 'bushfire', 'towards', 'the', 'town', 'of', 'hill', 'top', 'new', 'blaze', 'near', 'goulburn', 'south', 'west', 'of', 'sydney', 'has', 'forced', 'the', 'closure', 'of', 'the', 'hume', 'highway', 'at', 'about', 'pm', 'aedt', 'marked', 'deterioration', 'in', 'the', 'weather', 'as', 'storm', 'cell', 'moved', 'east', 'across', 'the', 'blue', 'mountains', 'forced', 'authorities', 'to', 'make', 'decision', 'to', 'evacuate', 'people', 'from', 'homes', 'in', 'outlying', 'streets', 'at', 'hill', 'top', 'in', 'the', 'new', 'south', 'wales', 'southern', 'highlands', 'an', 'estimated', 'residents', 'have', 'left', 'their', 'homes', 'for', 'nearby', 'mittagong', 'the', 'new', 'south', 'wales', 'rural', 'fire', 'service', 'says', 'the', 'weather', 'conditions', 'which', 'caused', 'the', 'fire', 'to', 'burn', 'in', 'finger', 'formation', 'have', 'now', 'eased', 'and', 'about', 'fire', 'units', 'in', 'and', 'around', 'hill', 'top', 'are', 'optimistic', 'of', 'defending', 'all', 'properties', 'as', 'more', 'than', 'blazes', 'burn', 'on', 'new', 'year', 'eve', 'in', 'new', 'south', 'wales', 'fire', 'crews', 'have', 'been', 'called', 'to', 'new', 'fire', 'at', 'gunning', 'south', 'of', 'goulburn', 'while', 'few', 'details', 'are', 'available', 'at', 'this', 'stage', 'fire', 'authorities', 'says', 'it', 'has', 'closed', 'the', 'hume', 'highway', 'in', 'both', 'directions', 'meanwhile', 'new', 'fire', 'in', 'sydney', 'west', 'is', 'no', 'longer', 'threatening', 'properties', 'in', 'the', 'cranebrook', 'area', 'rain', 'has', 'fallen', 'in', 'some', 'parts', 'of', 'the', 'illawarra', 'sydney', 'the', 'hunter', 'valley', 'and', 'the', 'north', 'coast', 'but', 'the', 'bureau', 'of', 'meteorology', 'claire', 'richards', 'says', 'the', 'rain', 'has', 'done', 'little', 'to', 'ease', 'any', 'of', 'the', 'hundred', 'fires', 'still', 'burning', 'across', 'the', 'state', 'the', 'falls', 'have', 'been', 'quite', 'isolated', 'in', 'those', 'areas', 'and', 'generally', 'the', 'falls', 'have', 'been', 'less', 'than', 'about', 'five', 'millimetres', 'she', 'said', 'in', 'some', 'places', 'really', 'not', 'significant', 'at', 'all', 'less', 'than', 'millimetre', 'so', 'there', 'hasn', 'been', 'much', 'relief', 'as', 'far', 'as', 'rain', 'is', 'concerned', 'in', 'fact', 'they', 've', 'probably', 'hampered', 'the', 'efforts', 'of', 'the', 'firefighters', 'more', 'because', 'of', 'the', 'wind', 'gusts', 'that', 'are', 'associated', 'with', 'those', 'thunderstorms'], tags=[0]),
 TaggedDocument(words=['indian', 'security', 'forces', 'have', 'shot', 'dead', 'eight', 'suspected', 'militants', 'in', 'night', 'long', 'encounter', 'in', 'southern', 'kashmir', 'the', 'shootout', 'took', 'place', 'at', 'dora', 'village', 'some', 'kilometers', 'south', 'of', 'the', 'kashmiri', 'summer', 'capital', 'srinagar', 'the', 'deaths', 'came', 'as', 'pakistani', 'police', 'arrested', 'more', 'than', 'two', 'dozen', 'militants', 'from', 'extremist', 'groups', 'accused', 'of', 'staging', 'an', 'attack', 'on', 'india', 'parliament', 'india', 'has', 'accused', 'pakistan', 'based', 'lashkar', 'taiba', 'and', 'jaish', 'mohammad', 'of', 'carrying', 'out', 'the', 'attack', 'on', 'december', 'at', 'the', 'behest', 'of', 'pakistani', 'military', 'intelligence', 'military', 'tensions', 'have', 'soared', 'since', 'the', 'raid', 'with', 'both', 'sides', 'massing', 'troops', 'along', 'their', 'border', 'and', 'trading', 'tit', 'for', 'tat', 'diplomatic', 'sanctions', 'yesterday', 'pakistan', 'announced', 'it', 'had', 'arrested', 'lashkar', 'taiba', 'chief', 'hafiz', 'mohammed', 'saeed', 'police', 'in', 'karachi', 'say', 'it', 'is', 'likely', 'more', 'raids', 'will', 'be', 'launched', 'against', 'the', 'two', 'groups', 'as', 'well', 'as', 'other', 'militant', 'organisations', 'accused', 'of', 'targetting', 'india', 'military', 'tensions', 'between', 'india', 'and', 'pakistan', 'have', 'escalated', 'to', 'level', 'not', 'seen', 'since', 'their', 'war'], tags=[1])]

And the testing corpus looks like this:


In [6]:
print(test_corpus[:2])


[['the', 'national', 'executive', 'of', 'the', 'strife', 'torn', 'democrats', 'last', 'night', 'appointed', 'little', 'known', 'west', 'australian', 'senator', 'brian', 'greig', 'as', 'interim', 'leader', 'shock', 'move', 'likely', 'to', 'provoke', 'further', 'conflict', 'between', 'the', 'party', 'senators', 'and', 'its', 'organisation', 'in', 'move', 'to', 'reassert', 'control', 'over', 'the', 'party', 'seven', 'senators', 'the', 'national', 'executive', 'last', 'night', 'rejected', 'aden', 'ridgeway', 'bid', 'to', 'become', 'interim', 'leader', 'in', 'favour', 'of', 'senator', 'greig', 'supporter', 'of', 'deposed', 'leader', 'natasha', 'stott', 'despoja', 'and', 'an', 'outspoken', 'gay', 'rights', 'activist'], ['cash', 'strapped', 'financial', 'services', 'group', 'amp', 'has', 'shelved', 'million', 'plan', 'to', 'buy', 'shares', 'back', 'from', 'investors', 'and', 'will', 'raise', 'million', 'in', 'fresh', 'capital', 'after', 'profits', 'crashed', 'in', 'the', 'six', 'months', 'to', 'june', 'chief', 'executive', 'paul', 'batchelor', 'said', 'the', 'result', 'was', 'solid', 'in', 'what', 'he', 'described', 'as', 'the', 'worst', 'conditions', 'for', 'stock', 'markets', 'in', 'years', 'amp', 'half', 'year', 'profit', 'sank', 'per', 'cent', 'to', 'million', 'or', 'share', 'as', 'australia', 'largest', 'investor', 'and', 'fund', 'manager', 'failed', 'to', 'hit', 'projected', 'per', 'cent', 'earnings', 'growth', 'targets', 'and', 'was', 'battered', 'by', 'falling', 'returns', 'on', 'share', 'markets']]

Notice that the testing corpus is just a list of lists and does not contain any tags.

Training the Model

Instantiate a Doc2Vec Object

Now, we'll instantiate a Doc2Vec model with a vector size of 50 dimensions, iterating over the training corpus 55 times. We set the minimum word count to 2 in order to discard words that appear only once, since such rare words carry little usable signal. Model accuracy can be improved by increasing the number of iterations, but this generally increases the training time. Small datasets with short documents, like this one, can benefit from more training passes.


In [8]:
model = gensim.models.doc2vec.Doc2Vec(vector_size=50, min_count=2, epochs=55)

Build a Vocabulary


In [9]:
model.build_vocab(train_corpus)

Essentially, the vocabulary is a dictionary (accessible via model.wv.vocab) of all of the unique words extracted from the training corpus, along with their counts (e.g., model.wv.vocab['penalty'].count retrieves the count for the word penalty).
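For example (a minimal sketch against the gensim 3.x API used throughout this notebook; newer gensim releases expose the same information through model.wv.key_to_index and model.wv.get_vecattr):


In [ ]:
# Number of unique words retained after applying min_count
print(len(model.wv.vocab))
# Raw corpus frequency recorded for the word 'penalty'
print(model.wv.vocab['penalty'].count)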

Time to Train

If the BLAS library is being used, this should take no more than 3 seconds. If the BLAS library is not being used, this should take no more than 2 minutes, so use BLAS if you value your time.
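Before timing the training, it's worth confirming that gensim's optimized (Cython/BLAS-backed) routines compiled (a quick check, assuming the gensim 3.x API used in this notebook, where FAST_VERSION is -1 if only the plain-numpy code path is available):


In [ ]:
# Raises if the optimized routines (and hence BLAS) are unavailable
assert gensim.models.doc2vec.FAST_VERSION > -1, "This will be painfully slow otherwise"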


In [11]:
%time model.train(train_corpus, total_examples=model.corpus_count, epochs=model.epochs)


CPU times: user 4.5 s, sys: 247 ms, total: 4.75 s
Wall time: 2.04 s

Inferring a Vector

One important thing to note is that you can now infer a vector for any piece of text, without having to re-train the model, by passing a list of words to the model.infer_vector function. This vector can then be compared with other vectors via cosine similarity.


In [12]:
model.infer_vector(['only', 'you', 'can', 'prevent', 'forest', 'fires'])


Out[12]:
array([ 0.03101196,  0.08118944,  0.10724881, -0.16268663, -0.12030419,
        0.07530276, -0.05967962,  0.01093007,  0.01722554, -0.16849394,
       -0.09248347,  0.00667514,  0.05426382, -0.0725852 ,  0.09535281,
       -0.12534387,  0.08636193, -0.1029434 , -0.07632427, -0.24741814,
       -0.1277334 , -0.09834807, -0.12880586, -0.07720284, -0.12248702,
       -0.15788661,  0.17826575, -0.12920539,  0.02845461, -0.12751418,
        0.06129557, -0.02319777,  0.11814108, -0.08767211, -0.04094559,
       -0.00681656,  0.00937355,  0.02168806, -0.03686712,  0.14234844,
       -0.01192134,  0.06787674, -0.25467244, -0.22923732, -0.03031967,
       -0.2362234 ,  0.1105942 ,  0.01180398,  0.01921744, -0.07667527],
      dtype=float32)
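Note that infer_vector is stochastic, so repeated calls on the same words return slightly different vectors. As a minimal sketch of the cosine-similarity comparison mentioned above (computed directly with numpy rather than a gensim helper, using hypothetical example sentences):


In [ ]:
import numpy as np

v1 = model.infer_vector(['only', 'you', 'can', 'prevent', 'forest', 'fires'])
v2 = model.infer_vector(['strong', 'winds', 'pushed', 'the', 'bushfire', 'towards', 'town'])

# Cosine similarity: dot product divided by the product of the norms
print(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))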

Assessing the Model

To assess our new model, we'll first infer new vectors for each document of the training corpus, compare the inferred vectors with the training corpus, and then return the rank of each document based on self-similarity. Basically, we're pretending the training corpus is new, unseen data and seeing how it compares with the trained model. The expectation is that we've likely overfit our model (i.e., all of the ranks will be less than 2), and so we should be able to find similar documents very easily. Additionally, we'll keep track of the second-ranked documents for a comparison of less similar documents.


In [13]:
ranks = []
second_ranks = []
for doc_id in range(len(train_corpus)):
    inferred_vector = model.infer_vector(train_corpus[doc_id].words)
    sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
    rank = [docid for docid, sim in sims].index(doc_id)
    ranks.append(rank)
    
    second_ranks.append(sims[1])

Let's count how each document ranks with respect to the training corpus:


In [14]:
collections.Counter(ranks)  # Results vary due to random seeding and very small corpus


Out[14]:
Counter({0: 284, 1: 13, 2: 2, 4: 1})

Basically, about 95% of the inferred documents are found to be most similar to themselves, while about 5% of the time a document is mistakenly ranked most similar to another document. Checking an inferred vector against the corresponding training vector is a sort of sanity check as to whether the model is behaving in a usefully consistent manner, though it is not a real 'accuracy' value.
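To reduce those counts to a single self-similarity rate (a quick sketch over the ranks list built above):


In [ ]:
# Percentage of training documents whose inferred vector ranks themselves first
print(100.0 * sum(1 for rank in ranks if rank == 0) / len(ranks))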

This is great and not entirely surprising. We can take a look at an example:


In [15]:
print('Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
    print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))


Document (299): «australia will take on france in the doubles rubber of the davis cup tennis final today with the tie levelled at wayne arthurs and todd woodbridge are scheduled to lead australia in the doubles against cedric pioline and fabrice santoro however changes can be made to the line up up to an hour before the match and australian team captain john fitzgerald suggested he might do just that we ll make team appraisal of the whole situation go over the pros and cons and make decision french team captain guy forget says he will not make changes but does not know what to expect from australia todd is the best doubles player in the world right now so expect him to play he said would probably use wayne arthurs but don know what to expect really pat rafter salvaged australia davis cup campaign yesterday with win in the second singles match rafter overcame an arm injury to defeat french number one sebastien grosjean in three sets the australian says he is happy with his form it not very pretty tennis there isn too many consistent bounces you are playing like said bit of classic old grass court rafter said rafter levelled the score after lleyton hewitt shock five set loss to nicholas escude in the first singles rubber but rafter says he felt no added pressure after hewitt defeat knew had good team to back me up even if we were down he said knew could win on the last day know the boys can win doubles so even if we were down still feel we are good enough team to win and vice versa they are good enough team to beat us as well»

SIMILAR/DISSIMILAR DOCS PER MODEL Doc2Vec(dm/m,d50,n5,w5,mc2,s0.001,t3):

MOST (299, 0.8637137413024902): «australia will take on france in the doubles rubber of the davis cup tennis final today with the tie levelled at wayne arthurs and todd woodbridge are scheduled to lead australia in the doubles against cedric pioline and fabrice santoro however changes can be made to the line up up to an hour before the match and australian team captain john fitzgerald suggested he might do just that we ll make team appraisal of the whole situation go over the pros and cons and make decision french team captain guy forget says he will not make changes but does not know what to expect from australia todd is the best doubles player in the world right now so expect him to play he said would probably use wayne arthurs but don know what to expect really pat rafter salvaged australia davis cup campaign yesterday with win in the second singles match rafter overcame an arm injury to defeat french number one sebastien grosjean in three sets the australian says he is happy with his form it not very pretty tennis there isn too many consistent bounces you are playing like said bit of classic old grass court rafter said rafter levelled the score after lleyton hewitt shock five set loss to nicholas escude in the first singles rubber but rafter says he felt no added pressure after hewitt defeat knew had good team to back me up even if we were down he said knew could win on the last day know the boys can win doubles so even if we were down still feel we are good enough team to win and vice versa they are good enough team to beat us as well»

MEDIAN (178, 0.2800390124320984): «year old middle eastern woman is said to be responding well to treatment after being diagnosed with typhoid in temporary holding centre on remote christmas island it could be hours before tests can confirm whether the disease has spread further two of the woman three children boy aged and year old girl have been quarantined with their mother in the christmas island hospital third child remains at the island sports hall where locals say conditions are crowded and hot all detainees on christmas island are being monitored by health team for signs of fever or abdominal pains the key symptoms of typhoid which is spread by contact with contaminated food or water hygiene measures have also been stepped up the western australian health department is briefing medical staff on infection control procedures but locals have expressed concern the disease could spread to the wider community»

LEAST (11, 0.01867116428911686): «peru has entered two days of official mourning for the more than people killed in fire that destroyed part of downtown lima police say the fire began when fireworks cache exploded in shop just four blocks from peru congress in heritage listed area famed for its spanish colonial era architecture early evening crowds buying traditional fireworks for new year eve celebrations were trapped by the flames as they raced through surrounding markets and four storey apartment buildings local residents blame vendors of illegal fireworks and say the death toll was exacerbated by poor traffic control in the adjoining narrow street where cars themselves engulfed by fire trapped fleeing victims hospitals have urged the public to donate medicine for the hundreds of burns victims peru president alejandro toledo has cut short his beach holiday to oversee an inquiry»

Notice above that the most similar document has a similarity score of ~80% (or higher). However, the similarity score for the second-ranked document should be significantly lower (assuming the documents are in fact different), and the reasoning becomes obvious when we examine the text itself:


In [16]:
# Pick a random document from the train corpus
doc_id = random.randint(0, len(train_corpus) - 1)

# Print the second-most-similar training document (stored in second_ranks)
print('Train Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
sim_id = second_ranks[doc_id]
print('Similar Document {}: «{}»\n'.format(sim_id, ' '.join(train_corpus[sim_id[0]].words)))


Train Document (186): «united nationals secretary general kofi annan has accepted the nobel peace prize in the norwegian capital oslo declaring that to save one life is to save humanity itself mr annan told gala audience the world must respect the individual whose fundamental rights he says have been sacrificed too often for the good of the state the year old un chief native of ghana shares this year th nobel peace prize with the united nations as whole his award was for bringing new life to the world body in his fight for human rights and against aids and terrorism»

Similar Document (207, 0.6752535104751587): «geoff huegill has continued his record breaking ways at the world cup short course swimming in melbourne bettering the australian record in the metres butterfly huegill beat fellow australian michael klim backing up after last night setting world record in the metres butterfly»

Testing the Model

Using the same approach as above, we'll infer the vector for a randomly chosen test document, and compare the document to our model by eye.


In [17]:
# Pick a random document from the test corpus and infer a vector from the model
doc_id = random.randint(0, len(test_corpus) - 1)
inferred_vector = model.infer_vector(test_corpus[doc_id])
sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))

# Compare and print the most/median/least similar documents from the train corpus
print('Test Document ({}): «{}»\n'.format(doc_id, ' '.join(test_corpus[doc_id])))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
    print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))


Test Document (23): «china said sunday it issued new regulations controlling the export of missile technology taking steps to ease concerns about transferring sensitive equipment to middle east countries particularly iran however the new rules apparently do not ban outright the transfer of specific items something washington long has urged beijing to do»

SIMILAR/DISSIMILAR DOCS PER MODEL Doc2Vec(dm/m,d50,n5,w5,mc2,s0.001,t3):

MOST (265, 0.42007705569267273): «the federal government is under fire from unions over new departmental report which recommends australia outsource information technology it to india the document says india has low cost skilled workforce the minister for foreign affairs and trade alexander downer has given his support to the document from his department entitled india new economy old economy the report says sectors like it finance and offer attractive direct investment opportunities it also says australian firms could become more competitive by outsourcing to the indian it sector the community and public sector union wendy caird says the government seems to be encouraging local companies to export jobs to india think that quite alarming obviously labour is great deal cheaper in india and that assisted by the indian government removing labour laws and bankruptcy laws ms caird said the union says while the initiative may create jobs in india it will not help australia rising unemployment»

MEDIAN (257, 0.11956833302974701): «hundreds of fans stood vigil today for the immersion of george harrison ashes into the ganges river at the hindu holy city of benares but officials and sect leaders remained tightlipped on when or where last rites for the former beatle long time devotee of the hindu hare krishna sect would take place he was closely attached to benares where devout hindus come to scatter the ashes of their dead relatives in the ganges in ritual symbolising the journey of the soul towards eternal salvation the beatles former lead guitarist died on thursday of cancer aged amid chants and prayers of hare krishna devotees who were at his bedside according to details of the ceremony released by members of the hare krishna movement yesterday harrison widow olivia accompanied by son dhani were to scatter some of the ashes early this morning in discreet ceremony at hinduism holy river some of harrison ashes could also be immersed in the ganges at allahabad another holy spot for devout hindus about kilometres upstream from benares spokesman for the hare krishna group said tomorrow harrison family members were supposed to take part in special prayer meeting in vrindavan the birthplace of lord krishna km north of the indian capital the news brought hundreds of journalists fans and curious onlookers to benares odd ghats platforms or steps from which the ashes are strewn into the river this morning but as the day wore on local administration officials and hare krishna devotees in benares refused to confirm when and where along the ganges the ceremony would take place»

LEAST (267, -0.22617124021053314): «israeli prime minister ariel sharon has opened an emergency security cabinet meeting after placing blame for recent suicide attacks squarely on palestinian leader yasser arafat called an urgent meeting of the heads of all the security systems and very shortly the government will hold special session the government will meet in order to make decisions about how to deal further with terrorism he said in national address on public television the government was to discuss its policy on the palestinian authority which mr sharon implied was the enemy of the jewish state and should bear the consequences those who rise up against us to kill us are responsible for their own destruction he said in statement interpreted by palestinian official as call for war arafat has made his strategic choices strategy of terrorism in choosing to try to win political accomplishments through murder and in choosing to allow the ruthless killing of civilians arafat has chosen the path of terrorism mr sharon said the government represents practically the whole of the israel public and we have the paramount goal and need for unity in order to cope with all the brutalities facing us he added tonight we heard declaration of war said chief palestinian negotiator saeb erakat on cnn television sharon has chosen the path of darkness even before his address israeli helicopters and warplanes attacked targets in the west bank and gaza strip including arafat offices and police headquarters in jenin and the palestinian leader three helicopters in gaza city the air strikes were launched on palestinian targets in the wake of weekend suicide attacks by the islamic militant group hamas which left israelis dead meanwhile hamas has defied the palestinian state of emergency and called for more suicide attacks against israel at the funeral of gunman who killed settler more than supporters of the hardline group gathered to bury year old muslim al aarage one of two palestinians who shot the settler dead on sunday in the north of the gaza strip before being killed by israeli soldiers the suicide operations will continue as long as the enemy continues its occupation of palestinian lands in the gaza strip and west bank militant from the group told crowd with loudspeaker when sharon kills women and children our people have the right to defend ourselves then they call us terrorists he said every religion and law in the world gives us the right to defend ourselves he said shortly before the air strikes began security services have arrested some militants from hamas and its smaller rival islamic jihad in the crackdown since sunday human rights group amnesty international has condemned deliberate attacks by the palestinian suicide bombers at the weekend these attacks are horrifying and tragic amnesty said in statement we call on armed groups to end immediately the direct targeting of civilians which contravenes the most fundamental principles of humanity the organisation called on the israeli government and the palestinian authority to remember that no abuses of human rights by armed groups can excuse violations of fundamental human rights and humanitarian law»

Wrapping Up

That's it! Doc2Vec is a great way to explore relationships between documents.