Word2Vec Tutorial

In case you missed the buzz, word2vec is widely featured as a member of the “new wave” of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output is a set of vectors, one vector per word, with remarkable linear relationships that allow us to do things like vec(“king”) – vec(“man”) + vec(“woman”) =~ vec(“queen”), or make vec(“Montreal Canadiens”) – vec(“Montreal”) + vec(“Toronto”) resemble the vector for “Toronto Maple Leafs”.

Word2vec is very useful in automatic text tagging, recommender systems and machine translation.

Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec on the Google News dataset, of about 100 billion words.

This tutorial

In this tutorial you will learn how to train and evaluate word2vec models on your business data.

Preparing the Input

Starting from the beginning, gensim’s word2vec expects a sequence of sentences as its input. Each sentence is a list of words (utf8 strings):


In [1]:
# import modules & set up logging
import gensim, logging

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

In [2]:
sentences = [['first', 'sentence'], ['second', 'sentence']]
# train word2vec on the two sentences
model = gensim.models.Word2Vec(sentences, min_count=1)


2017-12-21 16:25:19,117 : INFO : collecting all words and their counts
2017-12-21 16:25:19,119 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-12-21 16:25:19,120 : INFO : collected 3 word types from a corpus of 4 raw words and 2 sentences
2017-12-21 16:25:19,120 : INFO : Loading a fresh vocabulary
2017-12-21 16:25:19,121 : INFO : min_count=1 retains 3 unique words (100% of original 3, drops 0)
2017-12-21 16:25:19,122 : INFO : min_count=1 leaves 4 word corpus (100% of original 4, drops 0)
2017-12-21 16:25:19,123 : INFO : deleting the raw counts dictionary of 3 items
2017-12-21 16:25:19,124 : INFO : sample=0.001 downsamples 3 most-common words
2017-12-21 16:25:19,124 : INFO : downsampling leaves estimated 0 word corpus (5.7% of prior 4)
2017-12-21 16:25:19,125 : INFO : estimated required memory for 3 words and 100 dimensions: 3900 bytes
2017-12-21 16:25:19,126 : INFO : resetting layer weights
2017-12-21 16:25:19,127 : INFO : training model with 3 workers on 3 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-12-21 16:25:19,130 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-12-21 16:25:19,130 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-12-21 16:25:19,131 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-12-21 16:25:19,131 : INFO : training on 20 raw words (0 effective words) took 0.0s, 0 effective words/s
2017-12-21 16:25:19,132 : WARNING : under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay

Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.

Gensim only requires that the input provide sentences sequentially when iterated over. No need to keep everything in RAM: we can provide one sentence, process it, forget it, load another sentence…

For example, if our input is strewn across several files on disk, with one sentence per line, then instead of loading everything into an in-memory list, we can process the input file by file, line by line:


In [3]:
# create some toy data to use with the following example
import smart_open, os

if not os.path.exists('./data/'):
    os.makedirs('./data/')

filenames = ['./data/f1.txt', './data/f2.txt']

for i, fname in enumerate(filenames):
    with smart_open.smart_open(fname, 'w') as fout:
        for line in sentences[i]:
            fout.write(line + '\n')

In [4]:
from smart_open import smart_open
class MySentences(object):
    def __init__(self, dirname):
        self.dirname = dirname
 
    def __iter__(self):
        for fname in os.listdir(self.dirname):
            for line in smart_open(os.path.join(self.dirname, fname), 'rb'):
                yield line.split()

In [5]:
sentences = MySentences('./data/') # a memory-friendly iterator
print(list(sentences))


[['second'], ['sentence'], ['first'], ['sentence']]

In [6]:
# generate the Word2Vec model
model = gensim.models.Word2Vec(sentences, min_count=1)


2017-12-21 16:25:19,155 : INFO : collecting all words and their counts
2017-12-21 16:25:19,156 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-12-21 16:25:19,156 : INFO : collected 3 word types from a corpus of 4 raw words and 4 sentences
2017-12-21 16:25:19,157 : INFO : Loading a fresh vocabulary
2017-12-21 16:25:19,158 : INFO : min_count=1 retains 3 unique words (100% of original 3, drops 0)
2017-12-21 16:25:19,159 : INFO : min_count=1 leaves 4 word corpus (100% of original 4, drops 0)
2017-12-21 16:25:19,161 : INFO : deleting the raw counts dictionary of 3 items
2017-12-21 16:25:19,162 : INFO : sample=0.001 downsamples 3 most-common words
2017-12-21 16:25:19,164 : INFO : downsampling leaves estimated 0 word corpus (5.7% of prior 4)
2017-12-21 16:25:19,165 : INFO : estimated required memory for 3 words and 100 dimensions: 3900 bytes
2017-12-21 16:25:19,167 : INFO : resetting layer weights
2017-12-21 16:25:19,168 : INFO : training model with 3 workers on 3 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-12-21 16:25:19,172 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-12-21 16:25:19,173 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-12-21 16:25:19,173 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-12-21 16:25:19,174 : INFO : training on 20 raw words (0 effective words) took 0.0s, 0 effective words/s
2017-12-21 16:25:19,174 : WARNING : under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay

In [7]:
print(model)
print(model.wv.vocab)


Word2Vec(vocab=3, size=100, alpha=0.025)
{'second': <gensim.models.keyedvectors.Vocab object at 0x7f20aa771a90>, 'sentence': <gensim.models.keyedvectors.Vocab object at 0x7f20852be3d0>, 'first': <gensim.models.keyedvectors.Vocab object at 0x7f20ba0a2d10>}

Say we want to further preprocess the words from the files — convert to unicode, lowercase, remove numbers, extract named entities… All of this can be done inside the MySentences iterator and word2vec doesn’t need to know. All that is required is that the input yields one sentence (list of utf8 words) after another.
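
For example, here is a minimal sketch of a variant iterator that decodes each line to unicode, lowercases it and keeps only purely alphabetic tokens (the class name and the exact filtering rules are illustrative assumptions, not something gensim prescribes):

  import os
  from smart_open import smart_open

  class MyCleanedSentences(object):
      def __init__(self, dirname):
          self.dirname = dirname

      def __iter__(self):
          for fname in os.listdir(self.dirname):
              for line in smart_open(os.path.join(self.dirname, fname), 'rb'):
                  # decode to unicode, lowercase, keep only alphabetic tokens
                  yield [token for token in line.decode('utf8').lower().split() if token.isalpha()]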

Note to advanced users: calling Word2Vec(sentences, iter=1) will run two passes over the sentences iterator (in general, it runs iter+1 passes; the default iter=5 matches the default of Google's original word2vec implementation in C):

  1. The first pass collects words and their frequencies to build an internal dictionary tree structure.
  2. The second pass trains the neural model.

These two passes can also be initiated manually, in case your input stream is non-repeatable (you can only afford one pass), and you’re able to initialize the vocabulary some other way:


In [8]:
# build the same model, making the 2 steps explicit
new_model = gensim.models.Word2Vec(min_count=1)  # an empty model, no training
new_model.build_vocab(sentences)                 # can be a non-repeatable, 1-pass generator
new_model.train(sentences, total_examples=new_model.corpus_count, epochs=new_model.iter)  # can be a non-repeatable, 1-pass generator


2017-12-21 16:25:19,190 : INFO : collecting all words and their counts
2017-12-21 16:25:19,191 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-12-21 16:25:19,192 : INFO : collected 3 word types from a corpus of 4 raw words and 4 sentences
2017-12-21 16:25:19,192 : INFO : Loading a fresh vocabulary
2017-12-21 16:25:19,193 : INFO : min_count=1 retains 3 unique words (100% of original 3, drops 0)
2017-12-21 16:25:19,194 : INFO : min_count=1 leaves 4 word corpus (100% of original 4, drops 0)
2017-12-21 16:25:19,194 : INFO : deleting the raw counts dictionary of 3 items
2017-12-21 16:25:19,195 : INFO : sample=0.001 downsamples 3 most-common words
2017-12-21 16:25:19,195 : INFO : downsampling leaves estimated 0 word corpus (5.7% of prior 4)
2017-12-21 16:25:19,196 : INFO : estimated required memory for 3 words and 100 dimensions: 3900 bytes
2017-12-21 16:25:19,197 : INFO : resetting layer weights
2017-12-21 16:25:19,197 : INFO : training model with 3 workers on 3 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-12-21 16:25:19,201 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-12-21 16:25:19,202 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-12-21 16:25:19,202 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-12-21 16:25:19,203 : INFO : training on 20 raw words (0 effective words) took 0.0s, 0 effective words/s
2017-12-21 16:25:19,203 : WARNING : under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay
Out[8]:
0

In [9]:
print(new_model)
print(model.wv.vocab)


Word2Vec(vocab=3, size=100, alpha=0.025)
{'second': <gensim.models.keyedvectors.Vocab object at 0x7f20aa771a90>, 'sentence': <gensim.models.keyedvectors.Vocab object at 0x7f20852be3d0>, 'first': <gensim.models.keyedvectors.Vocab object at 0x7f20ba0a2d10>}

More data would be nice

For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim):


In [10]:
# Set file names for train and test data
test_data_dir = os.path.join(gensim.__path__[0], 'test', 'test_data') + os.sep
lee_train_file = test_data_dir + 'lee_background.cor'

In [11]:
class MyText(object):
    def __iter__(self):
        for line in open(lee_train_file):
            # assume there's one document per line, tokens separated by whitespace
            yield line.lower().split()

sentences = MyText()

print(sentences)


<__main__.MyText object at 0x7f20852714d0>

Training

Word2Vec accepts several parameters that affect both training speed and quality.

min_count

min_count is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there’s not enough data to do any meaningful training on those words, so it’s best to ignore them:


In [12]:
# default value of min_count=5
model = gensim.models.Word2Vec(sentences, min_count=10)


2017-12-21 16:25:19,238 : INFO : collecting all words and their counts
2017-12-21 16:25:19,239 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-12-21 16:25:19,260 : INFO : collected 10186 word types from a corpus of 59890 raw words and 300 sentences
2017-12-21 16:25:19,262 : INFO : Loading a fresh vocabulary
2017-12-21 16:25:19,269 : INFO : min_count=10 retains 806 unique words (7% of original 10186, drops 9380)
2017-12-21 16:25:19,270 : INFO : min_count=10 leaves 40964 word corpus (68% of original 59890, drops 18926)
2017-12-21 16:25:19,274 : INFO : deleting the raw counts dictionary of 10186 items
2017-12-21 16:25:19,276 : INFO : sample=0.001 downsamples 54 most-common words
2017-12-21 16:25:19,277 : INFO : downsampling leaves estimated 26224 word corpus (64.0% of prior 40964)
2017-12-21 16:25:19,278 : INFO : estimated required memory for 806 words and 100 dimensions: 1047800 bytes
2017-12-21 16:25:19,284 : INFO : resetting layer weights
2017-12-21 16:25:19,295 : INFO : training model with 3 workers on 806 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-12-21 16:25:19,425 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-12-21 16:25:19,427 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-12-21 16:25:19,429 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-12-21 16:25:19,430 : INFO : training on 299450 raw words (130961 effective words) took 0.1s, 984255 effective words/s

size

size is the number of dimensions (N) of the N-dimensional space that gensim Word2Vec maps the words onto.

Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.


In [13]:
# default value of size=100
model = gensim.models.Word2Vec(sentences, size=200)


2017-12-21 16:25:19,438 : INFO : collecting all words and their counts
2017-12-21 16:25:19,439 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-12-21 16:25:19,455 : INFO : collected 10186 word types from a corpus of 59890 raw words and 300 sentences
2017-12-21 16:25:19,456 : INFO : Loading a fresh vocabulary
2017-12-21 16:25:19,464 : INFO : min_count=5 retains 1723 unique words (16% of original 10186, drops 8463)
2017-12-21 16:25:19,465 : INFO : min_count=5 leaves 46858 word corpus (78% of original 59890, drops 13032)
2017-12-21 16:25:19,473 : INFO : deleting the raw counts dictionary of 10186 items
2017-12-21 16:25:19,474 : INFO : sample=0.001 downsamples 49 most-common words
2017-12-21 16:25:19,475 : INFO : downsampling leaves estimated 32849 word corpus (70.1% of prior 46858)
2017-12-21 16:25:19,476 : INFO : estimated required memory for 1723 words and 200 dimensions: 3618300 bytes
2017-12-21 16:25:19,482 : INFO : resetting layer weights
2017-12-21 16:25:19,504 : INFO : training model with 3 workers on 1723 vocabulary and 200 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-12-21 16:25:19,699 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-12-21 16:25:19,707 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-12-21 16:25:19,709 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-12-21 16:25:19,710 : INFO : training on 299450 raw words (164316 effective words) took 0.2s, 807544 effective words/s

workers

workers, the last of the major parameters (full list here), controls training parallelization, to speed up training:


In [14]:
# default value of workers=3 (tutorial says 1...)
model = gensim.models.Word2Vec(sentences, workers=4)


2017-12-21 16:25:19,717 : INFO : collecting all words and their counts
2017-12-21 16:25:19,718 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-12-21 16:25:19,738 : INFO : collected 10186 word types from a corpus of 59890 raw words and 300 sentences
2017-12-21 16:25:19,739 : INFO : Loading a fresh vocabulary
2017-12-21 16:25:19,787 : INFO : min_count=5 retains 1723 unique words (16% of original 10186, drops 8463)
2017-12-21 16:25:19,788 : INFO : min_count=5 leaves 46858 word corpus (78% of original 59890, drops 13032)
2017-12-21 16:25:19,796 : INFO : deleting the raw counts dictionary of 10186 items
2017-12-21 16:25:19,798 : INFO : sample=0.001 downsamples 49 most-common words
2017-12-21 16:25:19,799 : INFO : downsampling leaves estimated 32849 word corpus (70.1% of prior 46858)
2017-12-21 16:25:19,800 : INFO : estimated required memory for 1723 words and 100 dimensions: 2239900 bytes
2017-12-21 16:25:19,807 : INFO : resetting layer weights
2017-12-21 16:25:19,831 : INFO : training model with 4 workers on 1723 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-12-21 16:25:19,957 : INFO : worker thread finished; awaiting finish of 3 more threads
2017-12-21 16:25:19,959 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-12-21 16:25:19,962 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-12-21 16:25:19,965 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-12-21 16:25:19,966 : INFO : training on 299450 raw words (164263 effective words) took 0.1s, 1234170 effective words/s
2017-12-21 16:25:19,968 : WARNING : under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay

The workers parameter only has an effect if you have Cython installed. Without Cython, you’ll only be able to use one core because of the GIL (and word2vec training will be miserably slow).
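
A quick way to check whether the optimized Cython routines were compiled is to inspect gensim's FAST_VERSION flag (a small sanity check; -1 indicates the slow, single-core pure-Python code path):

  from gensim.models.word2vec import FAST_VERSION
  # anything >= 0 means the compiled extension is in use; -1 means plain Python
  print(FAST_VERSION)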

Memory

At its core, a word2vec model’s parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by the min_count parameter) times #size (the size parameter) floats (single precision, i.e. 4 bytes).

Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB.
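
The same back-of-the-envelope estimate in code (a sketch; the numbers simply reproduce the example above):

  vocab_size, vector_size = 100000, 200
  bytes_per_float = 4    # single precision
  num_matrices = 3       # the three matrices held in RAM, as described above
  estimated_bytes = vocab_size * vector_size * bytes_per_float * num_matrices
  print('approx. %.0f MB' % (estimated_bytes / 1024.0 ** 2))  # ~229 MB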

There’s a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above.

Evaluating

Word2Vec training is an unsupervised task, so there’s no good way to objectively evaluate the result. Evaluation depends on your end application.

Google has released a test set of about 20,000 syntactic and semantic analogy examples, following the “A is to B as C is to D” task. It is provided in the 'datasets' folder.

For example, a syntactic analogy of the comparative type is bad:worse;good:?. There are a total of 9 types of syntactic comparisons in the dataset, such as plural nouns and nouns of opposite meaning.

The semantic questions contain five types of semantic analogies, such as capital cities (Paris:France;Tokyo:?) or family members (brother:sister;dad:?).
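
For reference, the test file is plain text: every section starts with a line beginning with ': ' followed by the section name, and each question is four space-separated words forming "A B C D". A couple of illustrative lines in that format (not copied verbatim from the file):

  : capital-common-countries
  Athens Greece Baghdad Iraq
  : gram3-comparative
  bad worse good better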

Gensim supports the same evaluation set, in exactly the same format:


In [15]:
model.accuracy('./datasets/questions-words.txt')


2017-12-21 16:25:20,022 : INFO : precomputing L2-norms of word weight vectors
2017-12-21 16:25:20,037 : INFO : family: 0.0% (0/2)
2017-12-21 16:25:20,064 : INFO : gram3-comparative: 0.0% (0/12)
2017-12-21 16:25:20,077 : INFO : gram4-superlative: 0.0% (0/12)
2017-12-21 16:25:20,090 : INFO : gram5-present-participle: 0.0% (0/20)
2017-12-21 16:25:20,108 : INFO : gram6-nationality-adjective: 0.0% (0/20)
2017-12-21 16:25:20,125 : INFO : gram7-past-tense: 0.0% (0/20)
2017-12-21 16:25:20,142 : INFO : gram8-plural: 0.0% (0/12)
2017-12-21 16:25:20,151 : INFO : total: 0.0% (0/98)
Out[15]:
[{'correct': [], 'incorrect': [], 'section': u'capital-common-countries'},
 {'correct': [], 'incorrect': [], 'section': u'capital-world'},
 {'correct': [], 'incorrect': [], 'section': u'currency'},
 {'correct': [], 'incorrect': [], 'section': u'city-in-state'},
 {'correct': [],
  'incorrect': [(u'HE', u'SHE', u'HIS', u'HER'),
   (u'HIS', u'HER', u'HE', u'SHE')],
  'section': u'family'},
 {'correct': [], 'incorrect': [], 'section': u'gram1-adjective-to-adverb'},
 {'correct': [], 'incorrect': [], 'section': u'gram2-opposite'},
 {'correct': [],
  'incorrect': [(u'GOOD', u'BETTER', u'GREAT', u'GREATER'),
   (u'GOOD', u'BETTER', u'LONG', u'LONGER'),
   (u'GOOD', u'BETTER', u'LOW', u'LOWER'),
   (u'GREAT', u'GREATER', u'LONG', u'LONGER'),
   (u'GREAT', u'GREATER', u'LOW', u'LOWER'),
   (u'GREAT', u'GREATER', u'GOOD', u'BETTER'),
   (u'LONG', u'LONGER', u'LOW', u'LOWER'),
   (u'LONG', u'LONGER', u'GOOD', u'BETTER'),
   (u'LONG', u'LONGER', u'GREAT', u'GREATER'),
   (u'LOW', u'LOWER', u'GOOD', u'BETTER'),
   (u'LOW', u'LOWER', u'GREAT', u'GREATER'),
   (u'LOW', u'LOWER', u'LONG', u'LONGER')],
  'section': u'gram3-comparative'},
 {'correct': [],
  'incorrect': [(u'BIG', u'BIGGEST', u'GOOD', u'BEST'),
   (u'BIG', u'BIGGEST', u'GREAT', u'GREATEST'),
   (u'BIG', u'BIGGEST', u'LARGE', u'LARGEST'),
   (u'GOOD', u'BEST', u'GREAT', u'GREATEST'),
   (u'GOOD', u'BEST', u'LARGE', u'LARGEST'),
   (u'GOOD', u'BEST', u'BIG', u'BIGGEST'),
   (u'GREAT', u'GREATEST', u'LARGE', u'LARGEST'),
   (u'GREAT', u'GREATEST', u'BIG', u'BIGGEST'),
   (u'GREAT', u'GREATEST', u'GOOD', u'BEST'),
   (u'LARGE', u'LARGEST', u'BIG', u'BIGGEST'),
   (u'LARGE', u'LARGEST', u'GOOD', u'BEST'),
   (u'LARGE', u'LARGEST', u'GREAT', u'GREATEST')],
  'section': u'gram4-superlative'},
 {'correct': [],
  'incorrect': [(u'GO', u'GOING', u'LOOK', u'LOOKING'),
   (u'GO', u'GOING', u'PLAY', u'PLAYING'),
   (u'GO', u'GOING', u'RUN', u'RUNNING'),
   (u'GO', u'GOING', u'SAY', u'SAYING'),
   (u'LOOK', u'LOOKING', u'PLAY', u'PLAYING'),
   (u'LOOK', u'LOOKING', u'RUN', u'RUNNING'),
   (u'LOOK', u'LOOKING', u'SAY', u'SAYING'),
   (u'LOOK', u'LOOKING', u'GO', u'GOING'),
   (u'PLAY', u'PLAYING', u'RUN', u'RUNNING'),
   (u'PLAY', u'PLAYING', u'SAY', u'SAYING'),
   (u'PLAY', u'PLAYING', u'GO', u'GOING'),
   (u'PLAY', u'PLAYING', u'LOOK', u'LOOKING'),
   (u'RUN', u'RUNNING', u'SAY', u'SAYING'),
   (u'RUN', u'RUNNING', u'GO', u'GOING'),
   (u'RUN', u'RUNNING', u'LOOK', u'LOOKING'),
   (u'RUN', u'RUNNING', u'PLAY', u'PLAYING'),
   (u'SAY', u'SAYING', u'GO', u'GOING'),
   (u'SAY', u'SAYING', u'LOOK', u'LOOKING'),
   (u'SAY', u'SAYING', u'PLAY', u'PLAYING'),
   (u'SAY', u'SAYING', u'RUN', u'RUNNING')],
  'section': u'gram5-present-participle'},
 {'correct': [],
  'incorrect': [(u'AUSTRALIA', u'AUSTRALIAN', u'FRANCE', u'FRENCH'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'INDIA', u'INDIAN'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'ISRAEL', u'ISRAELI'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'SWITZERLAND', u'SWISS'),
   (u'FRANCE', u'FRENCH', u'INDIA', u'INDIAN'),
   (u'FRANCE', u'FRENCH', u'ISRAEL', u'ISRAELI'),
   (u'FRANCE', u'FRENCH', u'SWITZERLAND', u'SWISS'),
   (u'FRANCE', u'FRENCH', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'INDIA', u'INDIAN', u'ISRAEL', u'ISRAELI'),
   (u'INDIA', u'INDIAN', u'SWITZERLAND', u'SWISS'),
   (u'INDIA', u'INDIAN', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'INDIA', u'INDIAN', u'FRANCE', u'FRENCH'),
   (u'ISRAEL', u'ISRAELI', u'SWITZERLAND', u'SWISS'),
   (u'ISRAEL', u'ISRAELI', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'ISRAEL', u'ISRAELI', u'FRANCE', u'FRENCH'),
   (u'ISRAEL', u'ISRAELI', u'INDIA', u'INDIAN'),
   (u'SWITZERLAND', u'SWISS', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'SWITZERLAND', u'SWISS', u'FRANCE', u'FRENCH'),
   (u'SWITZERLAND', u'SWISS', u'INDIA', u'INDIAN'),
   (u'SWITZERLAND', u'SWISS', u'ISRAEL', u'ISRAELI')],
  'section': u'gram6-nationality-adjective'},
 {'correct': [],
  'incorrect': [(u'GOING', u'WENT', u'PAYING', u'PAID'),
   (u'GOING', u'WENT', u'PLAYING', u'PLAYED'),
   (u'GOING', u'WENT', u'SAYING', u'SAID'),
   (u'GOING', u'WENT', u'TAKING', u'TOOK'),
   (u'PAYING', u'PAID', u'PLAYING', u'PLAYED'),
   (u'PAYING', u'PAID', u'SAYING', u'SAID'),
   (u'PAYING', u'PAID', u'TAKING', u'TOOK'),
   (u'PAYING', u'PAID', u'GOING', u'WENT'),
   (u'PLAYING', u'PLAYED', u'SAYING', u'SAID'),
   (u'PLAYING', u'PLAYED', u'TAKING', u'TOOK'),
   (u'PLAYING', u'PLAYED', u'GOING', u'WENT'),
   (u'PLAYING', u'PLAYED', u'PAYING', u'PAID'),
   (u'SAYING', u'SAID', u'TAKING', u'TOOK'),
   (u'SAYING', u'SAID', u'GOING', u'WENT'),
   (u'SAYING', u'SAID', u'PAYING', u'PAID'),
   (u'SAYING', u'SAID', u'PLAYING', u'PLAYED'),
   (u'TAKING', u'TOOK', u'GOING', u'WENT'),
   (u'TAKING', u'TOOK', u'PAYING', u'PAID'),
   (u'TAKING', u'TOOK', u'PLAYING', u'PLAYED'),
   (u'TAKING', u'TOOK', u'SAYING', u'SAID')],
  'section': u'gram7-past-tense'},
 {'correct': [],
  'incorrect': [(u'BUILDING', u'BUILDINGS', u'CAR', u'CARS'),
   (u'BUILDING', u'BUILDINGS', u'CHILD', u'CHILDREN'),
   (u'BUILDING', u'BUILDINGS', u'MAN', u'MEN'),
   (u'CAR', u'CARS', u'CHILD', u'CHILDREN'),
   (u'CAR', u'CARS', u'MAN', u'MEN'),
   (u'CAR', u'CARS', u'BUILDING', u'BUILDINGS'),
   (u'CHILD', u'CHILDREN', u'MAN', u'MEN'),
   (u'CHILD', u'CHILDREN', u'BUILDING', u'BUILDINGS'),
   (u'CHILD', u'CHILDREN', u'CAR', u'CARS'),
   (u'MAN', u'MEN', u'BUILDING', u'BUILDINGS'),
   (u'MAN', u'MEN', u'CAR', u'CARS'),
   (u'MAN', u'MEN', u'CHILD', u'CHILDREN')],
  'section': u'gram8-plural'},
 {'correct': [], 'incorrect': [], 'section': u'gram9-plural-verbs'},
 {'correct': [],
  'incorrect': [(u'HE', u'SHE', u'HIS', u'HER'),
   (u'HIS', u'HER', u'HE', u'SHE'),
   (u'GOOD', u'BETTER', u'GREAT', u'GREATER'),
   (u'GOOD', u'BETTER', u'LONG', u'LONGER'),
   (u'GOOD', u'BETTER', u'LOW', u'LOWER'),
   (u'GREAT', u'GREATER', u'LONG', u'LONGER'),
   (u'GREAT', u'GREATER', u'LOW', u'LOWER'),
   (u'GREAT', u'GREATER', u'GOOD', u'BETTER'),
   (u'LONG', u'LONGER', u'LOW', u'LOWER'),
   (u'LONG', u'LONGER', u'GOOD', u'BETTER'),
   (u'LONG', u'LONGER', u'GREAT', u'GREATER'),
   (u'LOW', u'LOWER', u'GOOD', u'BETTER'),
   (u'LOW', u'LOWER', u'GREAT', u'GREATER'),
   (u'LOW', u'LOWER', u'LONG', u'LONGER'),
   (u'BIG', u'BIGGEST', u'GOOD', u'BEST'),
   (u'BIG', u'BIGGEST', u'GREAT', u'GREATEST'),
   (u'BIG', u'BIGGEST', u'LARGE', u'LARGEST'),
   (u'GOOD', u'BEST', u'GREAT', u'GREATEST'),
   (u'GOOD', u'BEST', u'LARGE', u'LARGEST'),
   (u'GOOD', u'BEST', u'BIG', u'BIGGEST'),
   (u'GREAT', u'GREATEST', u'LARGE', u'LARGEST'),
   (u'GREAT', u'GREATEST', u'BIG', u'BIGGEST'),
   (u'GREAT', u'GREATEST', u'GOOD', u'BEST'),
   (u'LARGE', u'LARGEST', u'BIG', u'BIGGEST'),
   (u'LARGE', u'LARGEST', u'GOOD', u'BEST'),
   (u'LARGE', u'LARGEST', u'GREAT', u'GREATEST'),
   (u'GO', u'GOING', u'LOOK', u'LOOKING'),
   (u'GO', u'GOING', u'PLAY', u'PLAYING'),
   (u'GO', u'GOING', u'RUN', u'RUNNING'),
   (u'GO', u'GOING', u'SAY', u'SAYING'),
   (u'LOOK', u'LOOKING', u'PLAY', u'PLAYING'),
   (u'LOOK', u'LOOKING', u'RUN', u'RUNNING'),
   (u'LOOK', u'LOOKING', u'SAY', u'SAYING'),
   (u'LOOK', u'LOOKING', u'GO', u'GOING'),
   (u'PLAY', u'PLAYING', u'RUN', u'RUNNING'),
   (u'PLAY', u'PLAYING', u'SAY', u'SAYING'),
   (u'PLAY', u'PLAYING', u'GO', u'GOING'),
   (u'PLAY', u'PLAYING', u'LOOK', u'LOOKING'),
   (u'RUN', u'RUNNING', u'SAY', u'SAYING'),
   (u'RUN', u'RUNNING', u'GO', u'GOING'),
   (u'RUN', u'RUNNING', u'LOOK', u'LOOKING'),
   (u'RUN', u'RUNNING', u'PLAY', u'PLAYING'),
   (u'SAY', u'SAYING', u'GO', u'GOING'),
   (u'SAY', u'SAYING', u'LOOK', u'LOOKING'),
   (u'SAY', u'SAYING', u'PLAY', u'PLAYING'),
   (u'SAY', u'SAYING', u'RUN', u'RUNNING'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'FRANCE', u'FRENCH'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'INDIA', u'INDIAN'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'ISRAEL', u'ISRAELI'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'SWITZERLAND', u'SWISS'),
   (u'FRANCE', u'FRENCH', u'INDIA', u'INDIAN'),
   (u'FRANCE', u'FRENCH', u'ISRAEL', u'ISRAELI'),
   (u'FRANCE', u'FRENCH', u'SWITZERLAND', u'SWISS'),
   (u'FRANCE', u'FRENCH', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'INDIA', u'INDIAN', u'ISRAEL', u'ISRAELI'),
   (u'INDIA', u'INDIAN', u'SWITZERLAND', u'SWISS'),
   (u'INDIA', u'INDIAN', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'INDIA', u'INDIAN', u'FRANCE', u'FRENCH'),
   (u'ISRAEL', u'ISRAELI', u'SWITZERLAND', u'SWISS'),
   (u'ISRAEL', u'ISRAELI', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'ISRAEL', u'ISRAELI', u'FRANCE', u'FRENCH'),
   (u'ISRAEL', u'ISRAELI', u'INDIA', u'INDIAN'),
   (u'SWITZERLAND', u'SWISS', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'SWITZERLAND', u'SWISS', u'FRANCE', u'FRENCH'),
   (u'SWITZERLAND', u'SWISS', u'INDIA', u'INDIAN'),
   (u'SWITZERLAND', u'SWISS', u'ISRAEL', u'ISRAELI'),
   (u'GOING', u'WENT', u'PAYING', u'PAID'),
   (u'GOING', u'WENT', u'PLAYING', u'PLAYED'),
   (u'GOING', u'WENT', u'SAYING', u'SAID'),
   (u'GOING', u'WENT', u'TAKING', u'TOOK'),
   (u'PAYING', u'PAID', u'PLAYING', u'PLAYED'),
   (u'PAYING', u'PAID', u'SAYING', u'SAID'),
   (u'PAYING', u'PAID', u'TAKING', u'TOOK'),
   (u'PAYING', u'PAID', u'GOING', u'WENT'),
   (u'PLAYING', u'PLAYED', u'SAYING', u'SAID'),
   (u'PLAYING', u'PLAYED', u'TAKING', u'TOOK'),
   (u'PLAYING', u'PLAYED', u'GOING', u'WENT'),
   (u'PLAYING', u'PLAYED', u'PAYING', u'PAID'),
   (u'SAYING', u'SAID', u'TAKING', u'TOOK'),
   (u'SAYING', u'SAID', u'GOING', u'WENT'),
   (u'SAYING', u'SAID', u'PAYING', u'PAID'),
   (u'SAYING', u'SAID', u'PLAYING', u'PLAYED'),
   (u'TAKING', u'TOOK', u'GOING', u'WENT'),
   (u'TAKING', u'TOOK', u'PAYING', u'PAID'),
   (u'TAKING', u'TOOK', u'PLAYING', u'PLAYED'),
   (u'TAKING', u'TOOK', u'SAYING', u'SAID'),
   (u'BUILDING', u'BUILDINGS', u'CAR', u'CARS'),
   (u'BUILDING', u'BUILDINGS', u'CHILD', u'CHILDREN'),
   (u'BUILDING', u'BUILDINGS', u'MAN', u'MEN'),
   (u'CAR', u'CARS', u'CHILD', u'CHILDREN'),
   (u'CAR', u'CARS', u'MAN', u'MEN'),
   (u'CAR', u'CARS', u'BUILDING', u'BUILDINGS'),
   (u'CHILD', u'CHILDREN', u'MAN', u'MEN'),
   (u'CHILD', u'CHILDREN', u'BUILDING', u'BUILDINGS'),
   (u'CHILD', u'CHILDREN', u'CAR', u'CARS'),
   (u'MAN', u'MEN', u'BUILDING', u'BUILDINGS'),
   (u'MAN', u'MEN', u'CAR', u'CARS'),
   (u'MAN', u'MEN', u'CHILD', u'CHILDREN')],
  'section': 'total'}]

The accuracy method takes an optional restrict_vocab parameter, which limits which test examples are considered.
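
For example, to evaluate only on questions whose words fall within the first 30,000 vocabulary entries (a sketch; 30000 mirrors what we believe is the default cut-off):

  model.accuracy('./datasets/questions-words.txt', restrict_vocab=30000)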

In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.

By default it uses the academic WS-353 dataset, but you can create a dataset specific to your business in the same format. The dataset contains word pairs together with human-assigned similarity judgments, measuring the relatedness or co-occurrence of two words. For example, 'coast' and 'shore' are very similar as they appear in the same context, while 'clothes' and 'closet' are less similar because they are related but not interchangeable.
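
As a sketch of what a domain-specific evaluation set could look like, the file is simply tab-separated word pairs with a similarity score per line (the file name and the scores below are made up for illustration):

  with open('./data/my_domain_pairs.tsv', 'w') as fout:   # hypothetical file name
      fout.write('coast\tshore\t9.1\n')                   # made-up similarity judgments
      fout.write('clothes\tcloset\t3.5\n')
  # then pass it to evaluate_word_pairs in place of wordsim353.tsv, as shown below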


In [16]:
model.evaluate_word_pairs(test_data_dir + 'wordsim353.tsv')


/usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:1: DeprecationWarning: Call to deprecated `evaluate_word_pairs` (Method will be removed in 4.0.0, use self.wv.evaluate_word_pairs() instead).
  """Entry point for launching an IPython kernel.
2017-12-21 16:25:20,205 : INFO : Pearson correlation coefficient against /usr/local/lib/python2.7/dist-packages/gensim-3.2.0-py2.7-linux-x86_64.egg/gensim/test/test_data/wordsim353.tsv: 0.1020
2017-12-21 16:25:20,208 : INFO : Spearman rank-order correlation coefficient against /usr/local/lib/python2.7/dist-packages/gensim-3.2.0-py2.7-linux-x86_64.egg/gensim/test/test_data/wordsim353.tsv: 0.0816
2017-12-21 16:25:20,210 : INFO : Pairs with unknown words ratio: 85.6%
Out[16]:
((0.10198070183634717, 0.47640746107165499),
 SpearmanrResult(correlation=0.081592940565853908, pvalue=0.56922382052578302),
 85.55240793201133)

Once again, good performance on Google's or WS-353 test set doesn’t mean word2vec will work well in your application, or vice versa. It’s always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial.

Storing and loading models

You can store/load models using the standard gensim methods:


In [17]:
from tempfile import mkstemp

fs, temp_path = mkstemp("gensim_temp")  # creates a temp file

model.save(temp_path)  # save the model


2017-12-21 16:25:20,225 : INFO : saving Word2Vec object under /tmp/tmpot5iTxgensim_temp, separately None
2017-12-21 16:25:20,228 : INFO : not storing attribute syn0norm
2017-12-21 16:25:20,230 : INFO : not storing attribute cum_table
2017-12-21 16:25:20,242 : INFO : saved /tmp/tmpot5iTxgensim_temp

In [18]:
new_model = gensim.models.Word2Vec.load(temp_path)  # open the model


2017-12-21 16:25:20,247 : INFO : loading Word2Vec object from /tmp/tmpot5iTxgensim_temp
2017-12-21 16:25:20,254 : INFO : loading wv recursively from /tmp/tmpot5iTxgensim_temp.wv.* with mmap=None
2017-12-21 16:25:20,255 : INFO : setting ignored attribute syn0norm to None
2017-12-21 16:25:20,255 : INFO : setting ignored attribute cum_table to None
2017-12-21 16:25:20,256 : INFO : loaded /tmp/tmpot5iTxgensim_temp

which uses pickle internally, optionally mmap‘ing the model’s internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.
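
For instance, to memory-map the large arrays read-only so several processes can share the same physical memory (a small sketch reusing the temp_path saved above):

  # big NumPy arrays are mmap'ed read-only straight from disk
  shared_model = gensim.models.Word2Vec.load(temp_path, mmap='r')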

In addition, you can load models created by the original C tool, both using its text and binary formats:

  model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)
  # using gzipped/bz2 input works too, no need to unzip:
  model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)

Online training / Resuming training

Advanced users can load a model and continue training it with more sentences and new vocabulary words:


In [19]:
model = gensim.models.Word2Vec.load(temp_path)
more_sentences = [['Advanced', 'users', 'can', 'load', 'a', 'model', 'and', 'continue', 'training', 'it', 'with', 'more', 'sentences']]
model.build_vocab(more_sentences, update=True)
model.train(more_sentences, total_examples=model.corpus_count, epochs=model.iter)

# cleaning up temp
os.close(fs)
os.remove(temp_path)


2017-12-21 16:25:20,266 : INFO : loading Word2Vec object from /tmp/tmpot5iTxgensim_temp
2017-12-21 16:25:20,273 : INFO : loading wv recursively from /tmp/tmpot5iTxgensim_temp.wv.* with mmap=None
2017-12-21 16:25:20,273 : INFO : setting ignored attribute syn0norm to None
2017-12-21 16:25:20,274 : INFO : setting ignored attribute cum_table to None
2017-12-21 16:25:20,275 : INFO : loaded /tmp/tmpot5iTxgensim_temp
2017-12-21 16:25:20,279 : INFO : collecting all words and their counts
2017-12-21 16:25:20,280 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-12-21 16:25:20,280 : INFO : collected 13 word types from a corpus of 13 raw words and 1 sentences
2017-12-21 16:25:20,281 : INFO : Updating model with new vocabulary
2017-12-21 16:25:20,281 : INFO : New added 0 unique words (0% of original 13) and increased the count of 0 pre-existing words (0% of original 13)
2017-12-21 16:25:20,282 : INFO : deleting the raw counts dictionary of 13 items
2017-12-21 16:25:20,282 : INFO : sample=0.001 downsamples 0 most-common words
2017-12-21 16:25:20,283 : INFO : downsampling leaves estimated 0 word corpus (0.0% of prior 0)
2017-12-21 16:25:20,283 : INFO : estimated required memory for 1723 words and 100 dimensions: 2239900 bytes
2017-12-21 16:25:20,287 : INFO : updating layer weights
2017-12-21 16:25:20,288 : INFO : training model with 4 workers on 1723 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-12-21 16:25:20,291 : INFO : worker thread finished; awaiting finish of 3 more threads
2017-12-21 16:25:20,292 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-12-21 16:25:20,292 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-12-21 16:25:20,293 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-12-21 16:25:20,293 : INFO : training on 65 raw words (30 effective words) took 0.0s, 10799 effective words/s
2017-12-21 16:25:20,294 : WARNING : under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay

You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate.
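
For instance, passing an explicit total_words (or total_examples) makes the learning rate decay as if the update corpus had that size; the count below is an arbitrary illustration:

  # pretend the new material is 100,000 words long for the purposes of alpha decay
  model.train(more_sentences, total_words=100000, epochs=model.iter)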

Note that it’s not possible to resume training with models generated by the C tool, KeyedVectors.load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there.

Using the model

Word2Vec supports several word similarity tasks out of the box:


In [20]:
model.most_similar(positive=['human', 'crime'], negative=['party'], topn=1)


/usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:1: DeprecationWarning: Call to deprecated `most_similar` (Method will be removed in 4.0.0, use self.wv.most_similar() instead).
  """Entry point for launching an IPython kernel.
2017-12-21 16:25:20,299 : INFO : precomputing L2-norms of word weight vectors
Out[20]:
[('longer', 0.9912284016609192)]

In [21]:
model.doesnt_match("input is lunch he sentence cat".split())


/usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:1: DeprecationWarning: Call to deprecated `doesnt_match` (Method will be removed in 4.0.0, use self.wv.doesnt_match() instead).
  """Entry point for launching an IPython kernel.
2017-12-21 16:25:20,311 : WARNING : vectors for words set(['lunch', 'input', 'cat']) are not present in the model, ignoring these words
Out[21]:
'sentence'

In [22]:
print(model.similarity('human', 'party'))
print(model.similarity('tree', 'murder'))


0.999182520993
0.995758952957
/usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:1: DeprecationWarning: Call to deprecated `similarity` (Method will be removed in 4.0.0, use self.wv.similarity() instead).
  """Entry point for launching an IPython kernel.
/usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:2: DeprecationWarning: Call to deprecated `similarity` (Method will be removed in 4.0.0, use self.wv.similarity() instead).
  

You can get the probability distribution for the center word given the context words as input:


In [23]:
print(model.predict_output_word(['emergency', 'beacon', 'received']))


[('more', 0.0010738152), ('continue', 0.00097854261), ('can', 0.00088260754), ('training', 0.00085614622), ('it', 0.00077729771), ('there', 0.00076619512), ('australia', 0.00075446483), ('government', 0.00074923009), ('three', 0.00074201578), ('say', 0.00073336047)]

The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words.

If you need the raw output vectors in your application, you can access these either on a word-by-word basis:


In [24]:
model['tree']  # raw NumPy vector of a word


/usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:1: DeprecationWarning: Call to deprecated `__getitem__` (Method will be removed in 4.0.0, use self.wv.__getitem__() instead).
  """Entry point for launching an IPython kernel.
Out[24]:
array([  5.09738876e-03,   2.24877093e-02,  -2.87369397e-02,
        -7.89474510e-03,  -3.53376977e-02,  -4.06462997e-02,
         8.37074965e-03,  -9.41743553e-02,  -1.49697680e-02,
        -2.52366997e-02,   2.65039261e-02,  -2.40766741e-02,
        -6.85459673e-02,  -2.18667872e-02,  -2.14908309e-02,
         4.93087023e-02,  -2.92534381e-02,  -1.88865454e-03,
         2.26040259e-02,  -2.84134373e-02,  -6.83278441e-02,
        -6.75840862e-03,   7.07488284e-02,   5.21996431e-02,
         2.36152187e-02,   1.46892993e-02,  -4.65966389e-03,
         1.81521084e-02,  -1.66943893e-02,  -5.45500545e-04,
         9.81825942e-05,   7.39010796e-02,   8.24716035e-03,
        -3.30754719e-03,   2.59200167e-02,   2.25928240e-02,
        -4.78062779e-02,   1.68881603e-02,   1.27557423e-02,
        -7.06009716e-02,  -8.09376314e-02,   5.74318040e-03,
        -4.43559177e-02,  -3.11263874e-02,   3.13786902e-02,
        -5.85887060e-02,   4.01994959e-02,   4.16668272e-03,
        -1.61651354e-02,   4.03134711e-02,   1.64259598e-02,
         6.99962350e-03,  -3.78169157e-02,   7.13254418e-03,
         1.50061641e-02,  -2.01686379e-02,   5.82966506e-02,
        -2.78297253e-02,   1.81606133e-02,  -3.56090963e-02,
        -2.89950594e-02,  -4.97125871e-02,   1.93165317e-02,
        -2.90847234e-02,  -1.07406462e-02,  -6.75966665e-02,
        -8.05926248e-02,   3.87299024e-02,   5.84914126e-02,
        -2.87338771e-04,  -1.47654228e-02,   8.10218137e-03,
        -5.38245104e-02,   1.52460849e-02,  -4.90099788e-02,
        -5.80144748e-02,   3.85234654e-02,  -4.70485678e-03,
         4.69632484e-02,   4.56776805e-02,  -2.43359935e-02,
         3.39893550e-02,  -1.67205688e-02,   2.83701695e-04,
         1.79673471e-02,   1.03446953e-02,  -9.53995809e-02,
        -8.30710083e-02,   6.81236908e-02,  -3.47741581e-02,
        -6.30266443e-02,  -4.59022224e-02,  -4.83927876e-02,
        -5.33403922e-03,  -2.84888912e-02,  -2.93440577e-02,
        -2.53614448e-02,   2.14495976e-02,  -5.01872450e-02,
        -2.60770670e-03], dtype=float32)

…or en masse as a 2D NumPy matrix from model.wv.syn0.
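
For example, the full matrix has one row per vocabulary word, in the same order as model.wv.index2word (a quick sketch):

  word_matrix = model.wv.syn0            # shape: (vocabulary size, vector size)
  print(word_matrix.shape)
  print(model.wv.index2word[0])          # the word whose vector is row 0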

Training Loss Computation

The parameter compute_loss can be used to toggle computation of loss while training the Word2Vec model. The computed loss is stored in the model attribute running_training_loss and can be retrieved using the function get_latest_training_loss as follows:


In [25]:
# instantiating and training the Word2Vec model
model_with_loss = gensim.models.Word2Vec(sentences, min_count=1, compute_loss=True, hs=0, sg=1, seed=42)

# getting the training loss value
training_loss = model_with_loss.get_latest_training_loss()
print(training_loss)


2017-12-21 16:25:20,341 : INFO : collecting all words and their counts
2017-12-21 16:25:20,343 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-12-21 16:25:20,361 : INFO : collected 10186 word types from a corpus of 59890 raw words and 300 sentences
2017-12-21 16:25:20,362 : INFO : Loading a fresh vocabulary
2017-12-21 16:25:20,392 : INFO : min_count=1 retains 10186 unique words (100% of original 10186, drops 0)
2017-12-21 16:25:20,393 : INFO : min_count=1 leaves 59890 word corpus (100% of original 59890, drops 0)
2017-12-21 16:25:20,428 : INFO : deleting the raw counts dictionary of 10186 items
2017-12-21 16:25:20,429 : INFO : sample=0.001 downsamples 37 most-common words
2017-12-21 16:25:20,429 : INFO : downsampling leaves estimated 47231 word corpus (78.9% of prior 59890)
2017-12-21 16:25:20,430 : INFO : estimated required memory for 10186 words and 100 dimensions: 13241800 bytes
2017-12-21 16:25:20,454 : INFO : resetting layer weights
2017-12-21 16:25:20,556 : INFO : training model with 3 workers on 10186 vocabulary and 100 features, using sg=1 hs=0 sample=0.001 negative=5 window=5
2017-12-21 16:25:21,174 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-12-21 16:25:21,185 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-12-21 16:25:21,191 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-12-21 16:25:21,192 : INFO : training on 299450 raw words (235935 effective words) took 0.6s, 372397 effective words/s
1483200.25

Benchmarks to see the effect of the training loss computation code on training time

We first download and set up the test data used for the benchmarks.


In [26]:
input_data_files = []

def setup_input_data():
    # check if test data already present
    if os.path.isfile('./text8') is False:

        # download and decompress 'text8' corpus
        import zipfile
        ! wget 'http://mattmahoney.net/dc/text8.zip'
        ! unzip 'text8.zip'
    
        # create 1 MB, 10 MB and 50 MB files
        ! head -c1000000 text8 > text8_1000000
        ! head -c10000000 text8 > text8_10000000
        ! head -c50000000 text8 > text8_50000000
                
    # add 25 KB test file
    input_data_files.append(os.path.join(os.getcwd(), '../../gensim/test/test_data/lee_background.cor'))

    # add 1 MB test file
    input_data_files.append(os.path.join(os.getcwd(), 'text8_1000000'))

    # add 10 MB test file
    input_data_files.append(os.path.join(os.getcwd(), 'text8_10000000'))

    # add 50 MB test file
    input_data_files.append(os.path.join(os.getcwd(), 'text8_50000000'))

    # add 100 MB test file
    input_data_files.append(os.path.join(os.getcwd(), 'text8'))

setup_input_data()
print(input_data_files)


['/home/markroxor/Documents/gensim/docs/notebooks/../../gensim/test/test_data/lee_background.cor', '/home/markroxor/Documents/gensim/docs/notebooks/text8_1000000', '/home/markroxor/Documents/gensim/docs/notebooks/text8_10000000', '/home/markroxor/Documents/gensim/docs/notebooks/text8_50000000', '/home/markroxor/Documents/gensim/docs/notebooks/text8']

We now compare the training time taken for different combinations of input data and model training parameters like hs and sg.


In [38]:
logging.getLogger().setLevel(logging.ERROR)

In [30]:
# use only the 25 KB and 50 MB files for generating output; comment out the next line to use all 5 test files
input_data_files = [input_data_files[0], input_data_files[-2]]
print(input_data_files)

import time
import numpy as np
import pandas as pd

train_time_values = []
seed_val = 42
sg_values = [0, 1]
hs_values = [0, 1]

for data_file in input_data_files:
    data = gensim.models.word2vec.LineSentence(data_file) 
    for sg_val in sg_values:
        for hs_val in hs_values:
            for loss_flag in [True, False]:
                time_taken_list = []
                for i in range(3):
                    start_time = time.time()
                    w2v_model = gensim.models.Word2Vec(data, compute_loss=loss_flag, sg=sg_val, hs=hs_val, seed=seed_val) 
                    time_taken_list.append(time.time() - start_time)

                time_taken_list = np.array(time_taken_list)
                time_mean = np.mean(time_taken_list)
                time_std = np.std(time_taken_list)
                train_time_values.append({'train_data': data_file, 'compute_loss': loss_flag, 'sg': sg_val, 'hs': hs_val, 'mean': time_mean, 'std': time_std})

train_times_table = pd.DataFrame(train_time_values)
train_times_table = train_times_table.sort_values(by=['train_data', 'sg', 'hs', 'compute_loss'], ascending=[False, False, True, False])
print(train_times_table)


['/home/markroxor/Documents/gensim/docs/notebooks/../../gensim/test/test_data/lee_background.cor', '/home/markroxor/Documents/gensim/docs/notebooks/../../gensim/test/test_data/lee_background.cor']
    compute_loss  hs      mean  sg       std  \
4           True   0  0.610034   1  0.022937   
12          True   0  0.593313   1  0.037120   
5          False   0  0.588043   1  0.027460   
13         False   0  0.578279   1  0.019227   
6           True   1  1.318451   1  0.100453   
14          True   1  1.309022   1  0.026008   
7          False   1  1.144407   1  0.120123   
15         False   1  1.362300   1  0.049005   
0           True   0  0.492458   0  0.079990   
8           True   0  0.452251   0  0.024318   
1          False   0  0.499348   0  0.130881   
9          False   0  0.469124   0  0.048385   
2           True   1  0.553494   0  0.033808   
10          True   1  0.604576   0  0.128907   
3          False   1  0.514230   0  0.019456   
11         False   1  0.477603   0  0.046651   

                                           train_data  
4   /home/markroxor/Documents/gensim/docs/notebook...  
12  /home/markroxor/Documents/gensim/docs/notebook...  
5   /home/markroxor/Documents/gensim/docs/notebook...  
13  /home/markroxor/Documents/gensim/docs/notebook...  
6   /home/markroxor/Documents/gensim/docs/notebook...  
14  /home/markroxor/Documents/gensim/docs/notebook...  
7   /home/markroxor/Documents/gensim/docs/notebook...  
15  /home/markroxor/Documents/gensim/docs/notebook...  
0   /home/markroxor/Documents/gensim/docs/notebook...  
8   /home/markroxor/Documents/gensim/docs/notebook...  
1   /home/markroxor/Documents/gensim/docs/notebook...  
9   /home/markroxor/Documents/gensim/docs/notebook...  
2   /home/markroxor/Documents/gensim/docs/notebook...  
10  /home/markroxor/Documents/gensim/docs/notebook...  
3   /home/markroxor/Documents/gensim/docs/notebook...  
11  /home/markroxor/Documents/gensim/docs/notebook...  

Adding a Word2Vec "model to dict" method to a production pipeline

Suppose we still want more of a performance improvement in production. One good way is to cache all the similar words in a dictionary, so that the next time we get the same query word, we look it up in the dict first. If it's a hit, we serve the result directly from the dictionary; otherwise we query the model and then cache the result so that it doesn't miss next time.


In [31]:
logging.getLogger().setLevel(logging.INFO)

In [39]:
most_similars_precalc = {word: model.wv.most_similar(word) for word in model.wv.index2word}
for i, (key, value) in enumerate(most_similars_precalc.items()):
    if i == 3:
        break
    print(key, value)


four [('over', 0.9999380111694336), ('world', 0.9999357461929321), ('after', 0.9999346137046814), ('local', 0.9999341368675232), ('on', 0.9999333024024963), ('last', 0.9999300241470337), ('into', 0.9999298453330994), ('a', 0.9999290704727173), ('at', 0.9999290108680725), ('from', 0.99992835521698)]
jihad [('on', 0.9995617866516113), ('gaza', 0.9995616674423218), ('and', 0.9995588064193726), ('palestinian', 0.9995542168617249), ('in', 0.999552845954895), ('former', 0.9995408654212952), ('from', 0.999538242816925), ('were', 0.9995372295379639), ('security', 0.9995347857475281), ('an', 0.9995279908180237)]
captain [('are', 0.9998434782028198), ('any', 0.9998313784599304), ('out', 0.9998288154602051), ('now', 0.9998278617858887), ('over', 0.999825656414032), ('have', 0.9998254776000977), ('australian', 0.999824583530426), ('is', 0.999819815158844), ('including', 0.9998170137405396), ('at', 0.99981689453125)]

Comparison with and without caching

For the time being, let's take 4 words randomly:


In [40]:
import time
words = ['voted','few','their','around']

Without caching


In [41]:
start = time.time()
for word in words:
    result = model.wv.most_similar(word)
    print(result)
end = time.time()
print(end-start)


[('action', 0.998902440071106), ('would', 0.998866081237793), ('could', 0.9988579154014587), ('need', 0.9988564848899841), ('will', 0.9988539218902588), ('it', 0.9988477826118469), ('expected', 0.998836874961853), ('legal', 0.9988292455673218), ('we', 0.9988272786140442), ('called', 0.9988245368003845)]
[('from', 0.9997947216033936), ('which', 0.9997938871383667), ('australian', 0.9997910261154175), ('a', 0.9997884035110474), ('police', 0.9997864365577698), ('told', 0.9997841119766235), ('his', 0.9997839331626892), ('with', 0.9997835159301758), ('if', 0.999783456325531), ('be', 0.999782383441925)]
[('up', 0.9999563097953796), ('last', 0.9999551177024841), ('on', 0.9999517798423767), ('over', 0.9999500513076782), ('are', 0.9999494552612305), ('also', 0.9999493956565857), ('and', 0.9999493360519409), ('had', 0.9999492168426514), ('as', 0.9999483227729797), ('an', 0.9999469518661499)]
[('over', 0.9999304413795471), ('their', 0.9999291896820068), ('three', 0.9999284148216248), ('on', 0.9999280571937561), ('two', 0.9999274015426636), ('last', 0.9999268054962158), ('have', 0.9999215602874756), ('as', 0.999921441078186), ('us', 0.9999210834503174), ('from', 0.9999209046363831)]
0.00567579269409

Now with caching


In [42]:
start = time.time()
for word in words:
    if word in most_similars_precalc:
        result = most_similars_precalc[word]
        print(result)
    else:
        result = model.wv.most_similar(word)
        most_similars_precalc[word] = result
        print(result)
    
end = time.time()
print(end-start)


[('action', 0.998902440071106), ('would', 0.998866081237793), ('could', 0.9988579154014587), ('need', 0.9988564848899841), ('will', 0.9988539218902588), ('it', 0.9988477826118469), ('expected', 0.998836874961853), ('legal', 0.9988292455673218), ('we', 0.9988272786140442), ('called', 0.9988245368003845)]
[('from', 0.9997947216033936), ('which', 0.9997938871383667), ('australian', 0.9997910261154175), ('a', 0.9997884035110474), ('police', 0.9997864365577698), ('told', 0.9997841119766235), ('his', 0.9997839331626892), ('with', 0.9997835159301758), ('if', 0.999783456325531), ('be', 0.999782383441925)]
[('up', 0.9999563097953796), ('last', 0.9999551177024841), ('on', 0.9999517798423767), ('over', 0.9999500513076782), ('are', 0.9999494552612305), ('also', 0.9999493956565857), ('and', 0.9999493360519409), ('had', 0.9999492168426514), ('as', 0.9999483227729797), ('an', 0.9999469518661499)]
[('over', 0.9999304413795471), ('their', 0.9999291896820068), ('three', 0.9999284148216248), ('on', 0.9999280571937561), ('two', 0.9999274015426636), ('last', 0.9999268054962158), ('have', 0.9999215602874756), ('as', 0.999921441078186), ('us', 0.9999210834503174), ('from', 0.9999209046363831)]
0.000878810882568

Clearly you can see the improvement, and the difference will be even larger when we take more words into consideration.

Visualising the Word Embeddings

The word embeddings made by the model can be visualised by reducing the dimensionality of the word vectors to 2 dimensions using t-SNE.

Visualisations can be used to notice semantic and syntactic trends in the data.

For example:

  • Semantic: words like cat, dog, cow, etc. have a tendency to lie close together.
  • Syntactic: words like run, running or cut, cutting lie close together.

Vector relations like vKing - vMan = vQueen - vWoman can also be noticed.

Additional dependencies:

  • sklearn
  • numpy
  • plotly

The function below can be used to plot the embeddings in an IPython notebook. It requires the model as its only parameter. If you don't already have the model in memory, you can load it with

model = gensim.models.Word2Vec.load('path/to/model')

If you don't want to plot inside a notebook, set the plot_in_notebook parameter to False.

Note: the model used for the visualisation is trained on a small corpus, so some of the relations might not be so clear.

Beware: this sort of dimensionality reduction comes at the cost of a loss of information.


In [43]:
from sklearn.decomposition import IncrementalPCA    # initial reduction
from sklearn.manifold import TSNE                   # final reduction
import numpy as np                                  # array handling

from plotly.offline import init_notebook_mode, iplot, plot
import plotly.graph_objs as go

def reduce_dimensions(model, plot_in_notebook=True):

    num_dimensions = 2  # final num dimensions (2D, 3D, etc)

    vectors = []        # positions in vector space
    labels = []         # keep track of words to label our data again later
    for word in model.wv.vocab:
        vectors.append(model[word])
        labels.append(word)

    # convert both lists into numpy arrays for reduction
    vectors = np.asarray(vectors)
    labels = np.asarray(labels)

    # reduce using t-SNE
    logging.info('starting tSNE dimensionality reduction. This may take some time.')
    tsne = TSNE(n_components=num_dimensions, random_state=0)
    vectors = tsne.fit_transform(vectors)

    x_vals = [v[0] for v in vectors]
    y_vals = [v[1] for v in vectors]
        
    # Create a trace
    trace = go.Scatter(
        x=x_vals,
        y=y_vals,
        mode='text',
        text=labels
        )
    
    data = [trace]
    
    logging.info('All done. Plotting.')
    
    if plot_in_notebook:
        init_notebook_mode(connected=True)
        iplot(data, filename='word-embedding-plot')
    else:
        plot(data, filename='word-embedding-plot.html')

In [44]:
reduce_dimensions(model)


/usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:15: DeprecationWarning:

Call to deprecated `__getitem__` (Method will be removed in 4.0.0, use self.wv.__getitem__() instead).

Conclusion

In this tutorial we learned how to train word2vec models on your custom data and how to evaluate them. We hope that you too will find this popular tool useful in your machine learning tasks!

Full word2vec API docs here; get gensim here. Original C toolkit and word2vec papers by Google here.