Word2Vec Tutorial

In case you missed the buzz, word2vec is widely featured as a member of the “new wave” of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output is a set of vectors, one per word, with remarkable linear relationships that allow us to do things like vec(“king”) – vec(“man”) + vec(“woman”) =~ vec(“queen”), or vec(“Montreal Canadiens”) – vec(“Montreal”) + vec(“Toronto”) resembles the vector for “Toronto Maple Leafs”.

Word2vec is very useful in automatic text tagging, recommender systems and machine translation.

Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec trained on the Google News dataset of about 100 billion words.

This tutorial

In this tutorial you will learn how to train and evaluate word2vec models on your business data.

Preparing the Input

Starting from the beginning, gensim’s word2vec expects a sequence of sentences as its input. Each sentence is a list of words (utf8 strings):


In [1]:
# import modules & set up logging
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)



In [2]:
sentences = [['first', 'sentence'], ['second', 'sentence']]
# train word2vec on the two sentences
model = gensim.models.Word2Vec(sentences, min_count=1)


2017-06-29 10:36:15,148 : INFO : collecting all words and their counts
2017-06-29 10:36:15,149 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-06-29 10:36:15,150 : INFO : collected 3 word types from a corpus of 4 raw words and 2 sentences
2017-06-29 10:36:15,152 : INFO : Loading a fresh vocabulary
2017-06-29 10:36:15,158 : INFO : min_count=1 retains 3 unique words (100% of original 3, drops 0)
2017-06-29 10:36:15,159 : INFO : min_count=1 leaves 4 word corpus (100% of original 4, drops 0)
2017-06-29 10:36:15,160 : INFO : deleting the raw counts dictionary of 3 items
2017-06-29 10:36:15,167 : INFO : sample=0.001 downsamples 3 most-common words
2017-06-29 10:36:15,168 : INFO : downsampling leaves estimated 0 word corpus (5.7% of prior 4)
2017-06-29 10:36:15,169 : INFO : estimated required memory for 3 words and 100 dimensions: 3900 bytes
2017-06-29 10:36:15,170 : INFO : resetting layer weights
2017-06-29 10:36:15,172 : INFO : training model with 3 workers on 3 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-06-29 10:36:15,176 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-06-29 10:36:15,179 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-06-29 10:36:15,183 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-06-29 10:36:15,184 : INFO : training on 20 raw words (0 effective words) took 0.0s, 0 effective words/s
2017-06-29 10:36:15,185 : WARNING : under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay

Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.

Gensim only requires that the input provide sentences sequentially when iterated over. There is no need to keep everything in RAM: we can provide one sentence, process it, forget it, load another sentence…

For example, if our input is strewn across several files on disk, with one sentence per line, then instead of loading everything into an in-memory list, we can process the input file by file, line by line:


In [3]:
# create some toy data to use with the following example
import smart_open, os

if not os.path.exists('./data/'):
    os.makedirs('./data/')

filenames = ['./data/f1.txt', './data/f2.txt']

for i, fname in enumerate(filenames):
    with smart_open.smart_open(fname, 'w') as fout:
        for line in sentences[i]:
            fout.write(line + '\n')

In [4]:
class MySentences(object):
    def __init__(self, dirname):
        self.dirname = dirname
 
    def __iter__(self):
        for fname in os.listdir(self.dirname):
            for line in open(os.path.join(self.dirname, fname)):
                yield line.split()

In [5]:
sentences = MySentences('./data/') # a memory-friendly iterator
print(list(sentences))


[['first'], ['sentence'], ['second'], ['sentence']]

In [6]:
# generate the Word2Vec model
model = gensim.models.Word2Vec(sentences, min_count=1)


2017-06-29 10:36:18,159 : INFO : collecting all words and their counts
2017-06-29 10:36:18,161 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-06-29 10:36:18,162 : INFO : collected 3 word types from a corpus of 4 raw words and 4 sentences
2017-06-29 10:36:18,163 : INFO : Loading a fresh vocabulary
2017-06-29 10:36:18,164 : INFO : min_count=1 retains 3 unique words (100% of original 3, drops 0)
2017-06-29 10:36:18,165 : INFO : min_count=1 leaves 4 word corpus (100% of original 4, drops 0)
2017-06-29 10:36:18,166 : INFO : deleting the raw counts dictionary of 3 items
2017-06-29 10:36:18,167 : INFO : sample=0.001 downsamples 3 most-common words
2017-06-29 10:36:18,168 : INFO : downsampling leaves estimated 0 word corpus (5.7% of prior 4)
2017-06-29 10:36:18,169 : INFO : estimated required memory for 3 words and 100 dimensions: 3900 bytes
2017-06-29 10:36:18,170 : INFO : resetting layer weights
2017-06-29 10:36:18,171 : INFO : training model with 3 workers on 3 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-06-29 10:36:18,183 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-06-29 10:36:18,184 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-06-29 10:36:18,185 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-06-29 10:36:18,186 : INFO : training on 20 raw words (0 effective words) took 0.0s, 0 effective words/s
2017-06-29 10:36:18,186 : WARNING : under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay

In [7]:
print(model)
print(model.wv.vocab)


Word2Vec(vocab=3, size=100, alpha=0.025)
{'second': <gensim.models.keyedvectors.Vocab object at 0x7f1c573ce9d0>, 'first': <gensim.models.keyedvectors.Vocab object at 0x7f1c0d056b50>, 'sentence': <gensim.models.keyedvectors.Vocab object at 0x7f1c0d0569d0>}

Say we want to further preprocess the words from the files — convert to unicode, lowercase, remove numbers, extract named entities… All of this can be done inside the MySentences iterator and word2vec doesn’t need to know. All that is required is that the input yields one sentence (list of utf8 words) after another.
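
For instance, a hypothetical iterator (modeled on MySentences above, names are my own) might lowercase tokens and drop pure-number tokens before word2vec ever sees them:

```python
import os
import tempfile

class PreprocessedSentences(object):
    """Like MySentences, but lowercases tokens and drops tokens that
    are just digits, so word2vec only sees cleaned-up words."""
    def __init__(self, dirname):
        self.dirname = dirname

    def __iter__(self):
        for fname in os.listdir(self.dirname):
            with open(os.path.join(self.dirname, fname)) as fin:
                for line in fin:
                    tokens = line.lower().split()
                    yield [t for t in tokens if not t.isdigit()]

# tiny demonstration on a throwaway directory
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'doc.txt'), 'w') as fout:
    fout.write('The Cat sat on 2 mats\n')

prepped = list(PreprocessedSentences(tmpdir))
print(prepped)  # [['the', 'cat', 'sat', 'on', 'mats']]
```

An instance of such a class can be passed to gensim.models.Word2Vec exactly like MySentences was.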

Note to advanced users: calling Word2Vec(sentences, iter=1) will run two passes over the sentences iterator; in general it runs iter+1 passes. The default is iter=5, matching the default of Google's original C implementation.

  1. The first pass collects words and their frequencies to build an internal dictionary tree structure.
  2. The subsequent passes train the neural model.

These two passes can also be initiated manually, in case your input stream is non-repeatable (you can only afford one pass), and you’re able to initialize the vocabulary some other way:


In [8]:
# build the same model, making the 2 steps explicit
new_model = gensim.models.Word2Vec(min_count=1)  # an empty model, no training
new_model.build_vocab(sentences)                 # can be a non-repeatable, 1-pass generator     
new_model.train(sentences, total_examples=new_model.corpus_count, epochs=new_model.iter)                       
# can be a non-repeatable, 1-pass generator


2017-06-29 10:36:19,845 : INFO : collecting all words and their counts
2017-06-29 10:36:19,847 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-06-29 10:36:19,848 : INFO : collected 3 word types from a corpus of 4 raw words and 4 sentences
2017-06-29 10:36:19,849 : INFO : Loading a fresh vocabulary
2017-06-29 10:36:19,849 : INFO : min_count=1 retains 3 unique words (100% of original 3, drops 0)
2017-06-29 10:36:19,850 : INFO : min_count=1 leaves 4 word corpus (100% of original 4, drops 0)
2017-06-29 10:36:19,851 : INFO : deleting the raw counts dictionary of 3 items
2017-06-29 10:36:19,851 : INFO : sample=0.001 downsamples 3 most-common words
2017-06-29 10:36:19,852 : INFO : downsampling leaves estimated 0 word corpus (5.7% of prior 4)
2017-06-29 10:36:19,853 : INFO : estimated required memory for 3 words and 100 dimensions: 3900 bytes
2017-06-29 10:36:19,853 : INFO : resetting layer weights
2017-06-29 10:36:19,854 : INFO : training model with 3 workers on 3 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-06-29 10:36:19,856 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-06-29 10:36:19,857 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-06-29 10:36:19,858 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-06-29 10:36:19,858 : INFO : training on 20 raw words (0 effective words) took 0.0s, 0 effective words/s
2017-06-29 10:36:19,859 : WARNING : under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay
Out[8]:
0

In [9]:
print(new_model)
print(model.wv.vocab)


Word2Vec(vocab=3, size=100, alpha=0.025)
{'second': <gensim.models.keyedvectors.Vocab object at 0x7f1c573ce9d0>, 'first': <gensim.models.keyedvectors.Vocab object at 0x7f1c0d056b50>, 'sentence': <gensim.models.keyedvectors.Vocab object at 0x7f1c0d0569d0>}

More data would be nice

For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim):


In [10]:
# Set file names for train and test data
test_data_dir = os.path.join(gensim.__path__[0], 'test', 'test_data') + os.sep
lee_train_file = test_data_dir + 'lee_background.cor'

In [11]:
class MyText(object):
    def __iter__(self):
        for line in open(lee_train_file):
            # assume there's one document per line, tokens separated by whitespace
            yield line.lower().split()

sentences = MyText()

print(sentences)


<__main__.MyText object at 0x7f1c0cfe6a10>

Training

Word2Vec accepts several parameters that affect both training speed and quality.

min_count

min_count is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there’s not enough data to make any meaningful training on those words, so it’s best to ignore them:


In [12]:
# default value of min_count=5
model = gensim.models.Word2Vec(sentences, min_count=10)


2017-06-29 10:36:22,567 : INFO : collecting all words and their counts
2017-06-29 10:36:22,568 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-06-29 10:36:22,599 : INFO : collected 10186 word types from a corpus of 59890 raw words and 300 sentences
2017-06-29 10:36:22,601 : INFO : Loading a fresh vocabulary
2017-06-29 10:36:22,613 : INFO : min_count=10 retains 806 unique words (7% of original 10186, drops 9380)
2017-06-29 10:36:22,614 : INFO : min_count=10 leaves 40964 word corpus (68% of original 59890, drops 18926)
2017-06-29 10:36:22,621 : INFO : deleting the raw counts dictionary of 10186 items
2017-06-29 10:36:22,623 : INFO : sample=0.001 downsamples 54 most-common words
2017-06-29 10:36:22,625 : INFO : downsampling leaves estimated 26224 word corpus (64.0% of prior 40964)
2017-06-29 10:36:22,629 : INFO : estimated required memory for 806 words and 100 dimensions: 1047800 bytes
2017-06-29 10:36:22,634 : INFO : resetting layer weights
2017-06-29 10:36:22,651 : INFO : training model with 3 workers on 806 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-06-29 10:36:22,791 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-06-29 10:36:22,792 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-06-29 10:36:22,793 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-06-29 10:36:22,795 : INFO : training on 299450 raw words (130961 effective words) took 0.1s, 928696 effective words/s

size

size is the number of dimensions (N) of the N-dimensional space that gensim Word2Vec maps the words onto.

Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.


In [13]:
# default value of size=100
model = gensim.models.Word2Vec(sentences, size=200)


2017-06-29 10:36:25,482 : INFO : collecting all words and their counts
2017-06-29 10:36:25,484 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-06-29 10:36:25,505 : INFO : collected 10186 word types from a corpus of 59890 raw words and 300 sentences
2017-06-29 10:36:25,506 : INFO : Loading a fresh vocabulary
2017-06-29 10:36:25,517 : INFO : min_count=5 retains 1723 unique words (16% of original 10186, drops 8463)
2017-06-29 10:36:25,519 : INFO : min_count=5 leaves 46858 word corpus (78% of original 59890, drops 13032)
2017-06-29 10:36:25,531 : INFO : deleting the raw counts dictionary of 10186 items
2017-06-29 10:36:25,533 : INFO : sample=0.001 downsamples 49 most-common words
2017-06-29 10:36:25,534 : INFO : downsampling leaves estimated 32849 word corpus (70.1% of prior 46858)
2017-06-29 10:36:25,535 : INFO : estimated required memory for 1723 words and 200 dimensions: 3618300 bytes
2017-06-29 10:36:25,542 : INFO : resetting layer weights
2017-06-29 10:36:25,587 : INFO : training model with 3 workers on 1723 vocabulary and 200 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-06-29 10:36:25,797 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-06-29 10:36:25,802 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-06-29 10:36:25,809 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-06-29 10:36:25,813 : INFO : training on 299450 raw words (164316 effective words) took 0.2s, 748937 effective words/s

workers

workers, the last of the major parameters (full list here), controls training parallelization, to speed up training:


In [14]:
# default value of workers=3
model = gensim.models.Word2Vec(sentences, workers=4)


2017-06-29 10:36:27,068 : INFO : collecting all words and their counts
2017-06-29 10:36:27,069 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-06-29 10:36:27,090 : INFO : collected 10186 word types from a corpus of 59890 raw words and 300 sentences
2017-06-29 10:36:27,091 : INFO : Loading a fresh vocabulary
2017-06-29 10:36:27,109 : INFO : min_count=5 retains 1723 unique words (16% of original 10186, drops 8463)
2017-06-29 10:36:27,110 : INFO : min_count=5 leaves 46858 word corpus (78% of original 59890, drops 13032)
2017-06-29 10:36:27,118 : INFO : deleting the raw counts dictionary of 10186 items
2017-06-29 10:36:27,122 : INFO : sample=0.001 downsamples 49 most-common words
2017-06-29 10:36:27,123 : INFO : downsampling leaves estimated 32849 word corpus (70.1% of prior 46858)
2017-06-29 10:36:27,125 : INFO : estimated required memory for 1723 words and 100 dimensions: 2239900 bytes
2017-06-29 10:36:27,130 : INFO : resetting layer weights
2017-06-29 10:36:27,162 : INFO : training model with 4 workers on 1723 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-06-29 10:36:27,323 : INFO : worker thread finished; awaiting finish of 3 more threads
2017-06-29 10:36:27,325 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-06-29 10:36:27,330 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-06-29 10:36:27,331 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-06-29 10:36:27,334 : INFO : training on 299450 raw words (163999 effective words) took 0.2s, 963210 effective words/s
2017-06-29 10:36:27,337 : WARNING : under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay

The workers parameter only has an effect if you have Cython installed. Without Cython, you’ll only be able to use one core because of the GIL (and word2vec training will be miserably slow).

Memory

At its core, a word2vec model's parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by the min_count parameter) times #size (the size parameter) of floats (single precision, aka 4 bytes).

Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB.

There’s a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above.
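
That back-of-the-envelope arithmetic can be written down directly, using the numbers from the paragraph above:

```python
def w2v_memory_estimate(vocab_size, vector_size, num_matrices=3, bytes_per_float=4):
    """Rough RAM estimate for the word2vec weight matrices only
    (the vocabulary dictionary adds a few more megabytes on top)."""
    return vocab_size * vector_size * bytes_per_float * num_matrices

est = w2v_memory_estimate(100000, 200)
print('%d bytes = ~%.0f MB' % (est, est / 1024.0 ** 2))  # ~229 MB
```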

Evaluating

Word2Vec training is an unsupervised task; there’s no good way to objectively evaluate the result. Evaluation depends on your end application.

Google has released their testing set of about 20,000 syntactic and semantic test examples, following the “A is to B as C is to D” task. It is provided in the 'datasets' folder.

For example, a syntactic analogy of the comparative type is bad:worse;good:?. There are a total of nine types of syntactic comparisons in the dataset, such as plural nouns and nouns of opposite meaning.

The semantic questions contain five types of semantic analogies, such as capital cities (Paris:France;Tokyo:?) or family members (brother:sister;dad:?).
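
Under the hood, each question is answered by vector arithmetic: take the word whose vector is closest, by cosine similarity, to vec(B) – vec(A) + vec(C). Here is a minimal sketch of that lookup with toy 2-d vectors (not real embeddings, and not gensim's exact implementation, but the same idea):

```python
import numpy as np

# toy 2-d "embeddings", purely illustrative
vecs = {
    'bad':    np.array([1.0, 0.0]),
    'worse':  np.array([1.0, 1.0]),
    'good':   np.array([0.0, 1.0]),
    'better': np.array([0.0, 2.0]),
}

def analogy(a, b, c, vocab):
    """Answer 'a is to b as c is to ?' by finding the word whose vector
    is most cosine-similar to vocab[b] - vocab[a] + vocab[c]."""
    target = vocab[b] - vocab[a] + vocab[c]
    best, best_sim = None, -2.0
    for word, v in vocab.items():
        if word in (a, b, c):
            continue  # the three question words are excluded as answers
        sim = np.dot(target, v) / (np.linalg.norm(target) * np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy('bad', 'worse', 'good', vecs))  # better
```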

Gensim supports the same evaluation set, in exactly the same format:


In [15]:
model.accuracy('./datasets/questions-words.txt')


2017-06-29 10:36:30,524 : INFO : precomputing L2-norms of word weight vectors
2017-06-29 10:36:30,529 : INFO : family: 0.0% (0/2)
2017-06-29 10:36:30,583 : INFO : gram3-comparative: 0.0% (0/12)
2017-06-29 10:36:30,618 : INFO : gram4-superlative: 0.0% (0/12)
2017-06-29 10:36:30,653 : INFO : gram5-present-participle: 5.0% (1/20)
2017-06-29 10:36:30,701 : INFO : gram6-nationality-adjective: 0.0% (0/20)
2017-06-29 10:36:30,741 : INFO : gram7-past-tense: 0.0% (0/20)
2017-06-29 10:36:30,781 : INFO : gram8-plural: 0.0% (0/12)
2017-06-29 10:36:30,792 : INFO : total: 1.0% (1/98)
Out[15]:
[{'correct': [], 'incorrect': [], 'section': u'capital-common-countries'},
 {'correct': [], 'incorrect': [], 'section': u'capital-world'},
 {'correct': [], 'incorrect': [], 'section': u'currency'},
 {'correct': [], 'incorrect': [], 'section': u'city-in-state'},
 {'correct': [],
  'incorrect': [(u'HE', u'SHE', u'HIS', u'HER'),
   (u'HIS', u'HER', u'HE', u'SHE')],
  'section': u'family'},
 {'correct': [], 'incorrect': [], 'section': u'gram1-adjective-to-adverb'},
 {'correct': [], 'incorrect': [], 'section': u'gram2-opposite'},
 {'correct': [],
  'incorrect': [(u'GOOD', u'BETTER', u'GREAT', u'GREATER'),
   (u'GOOD', u'BETTER', u'LONG', u'LONGER'),
   (u'GOOD', u'BETTER', u'LOW', u'LOWER'),
   (u'GREAT', u'GREATER', u'LONG', u'LONGER'),
   (u'GREAT', u'GREATER', u'LOW', u'LOWER'),
   (u'GREAT', u'GREATER', u'GOOD', u'BETTER'),
   (u'LONG', u'LONGER', u'LOW', u'LOWER'),
   (u'LONG', u'LONGER', u'GOOD', u'BETTER'),
   (u'LONG', u'LONGER', u'GREAT', u'GREATER'),
   (u'LOW', u'LOWER', u'GOOD', u'BETTER'),
   (u'LOW', u'LOWER', u'GREAT', u'GREATER'),
   (u'LOW', u'LOWER', u'LONG', u'LONGER')],
  'section': u'gram3-comparative'},
 {'correct': [],
  'incorrect': [(u'BIG', u'BIGGEST', u'GOOD', u'BEST'),
   (u'BIG', u'BIGGEST', u'GREAT', u'GREATEST'),
   (u'BIG', u'BIGGEST', u'LARGE', u'LARGEST'),
   (u'GOOD', u'BEST', u'GREAT', u'GREATEST'),
   (u'GOOD', u'BEST', u'LARGE', u'LARGEST'),
   (u'GOOD', u'BEST', u'BIG', u'BIGGEST'),
   (u'GREAT', u'GREATEST', u'LARGE', u'LARGEST'),
   (u'GREAT', u'GREATEST', u'BIG', u'BIGGEST'),
   (u'GREAT', u'GREATEST', u'GOOD', u'BEST'),
   (u'LARGE', u'LARGEST', u'BIG', u'BIGGEST'),
   (u'LARGE', u'LARGEST', u'GOOD', u'BEST'),
   (u'LARGE', u'LARGEST', u'GREAT', u'GREATEST')],
  'section': u'gram4-superlative'},
 {'correct': [(u'LOOK', u'LOOKING', u'SAY', u'SAYING')],
  'incorrect': [(u'GO', u'GOING', u'LOOK', u'LOOKING'),
   (u'GO', u'GOING', u'PLAY', u'PLAYING'),
   (u'GO', u'GOING', u'RUN', u'RUNNING'),
   (u'GO', u'GOING', u'SAY', u'SAYING'),
   (u'LOOK', u'LOOKING', u'PLAY', u'PLAYING'),
   (u'LOOK', u'LOOKING', u'RUN', u'RUNNING'),
   (u'LOOK', u'LOOKING', u'GO', u'GOING'),
   (u'PLAY', u'PLAYING', u'RUN', u'RUNNING'),
   (u'PLAY', u'PLAYING', u'SAY', u'SAYING'),
   (u'PLAY', u'PLAYING', u'GO', u'GOING'),
   (u'PLAY', u'PLAYING', u'LOOK', u'LOOKING'),
   (u'RUN', u'RUNNING', u'SAY', u'SAYING'),
   (u'RUN', u'RUNNING', u'GO', u'GOING'),
   (u'RUN', u'RUNNING', u'LOOK', u'LOOKING'),
   (u'RUN', u'RUNNING', u'PLAY', u'PLAYING'),
   (u'SAY', u'SAYING', u'GO', u'GOING'),
   (u'SAY', u'SAYING', u'LOOK', u'LOOKING'),
   (u'SAY', u'SAYING', u'PLAY', u'PLAYING'),
   (u'SAY', u'SAYING', u'RUN', u'RUNNING')],
  'section': u'gram5-present-participle'},
 {'correct': [],
  'incorrect': [(u'AUSTRALIA', u'AUSTRALIAN', u'FRANCE', u'FRENCH'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'INDIA', u'INDIAN'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'ISRAEL', u'ISRAELI'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'SWITZERLAND', u'SWISS'),
   (u'FRANCE', u'FRENCH', u'INDIA', u'INDIAN'),
   (u'FRANCE', u'FRENCH', u'ISRAEL', u'ISRAELI'),
   (u'FRANCE', u'FRENCH', u'SWITZERLAND', u'SWISS'),
   (u'FRANCE', u'FRENCH', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'INDIA', u'INDIAN', u'ISRAEL', u'ISRAELI'),
   (u'INDIA', u'INDIAN', u'SWITZERLAND', u'SWISS'),
   (u'INDIA', u'INDIAN', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'INDIA', u'INDIAN', u'FRANCE', u'FRENCH'),
   (u'ISRAEL', u'ISRAELI', u'SWITZERLAND', u'SWISS'),
   (u'ISRAEL', u'ISRAELI', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'ISRAEL', u'ISRAELI', u'FRANCE', u'FRENCH'),
   (u'ISRAEL', u'ISRAELI', u'INDIA', u'INDIAN'),
   (u'SWITZERLAND', u'SWISS', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'SWITZERLAND', u'SWISS', u'FRANCE', u'FRENCH'),
   (u'SWITZERLAND', u'SWISS', u'INDIA', u'INDIAN'),
   (u'SWITZERLAND', u'SWISS', u'ISRAEL', u'ISRAELI')],
  'section': u'gram6-nationality-adjective'},
 {'correct': [],
  'incorrect': [(u'GOING', u'WENT', u'PAYING', u'PAID'),
   (u'GOING', u'WENT', u'PLAYING', u'PLAYED'),
   (u'GOING', u'WENT', u'SAYING', u'SAID'),
   (u'GOING', u'WENT', u'TAKING', u'TOOK'),
   (u'PAYING', u'PAID', u'PLAYING', u'PLAYED'),
   (u'PAYING', u'PAID', u'SAYING', u'SAID'),
   (u'PAYING', u'PAID', u'TAKING', u'TOOK'),
   (u'PAYING', u'PAID', u'GOING', u'WENT'),
   (u'PLAYING', u'PLAYED', u'SAYING', u'SAID'),
   (u'PLAYING', u'PLAYED', u'TAKING', u'TOOK'),
   (u'PLAYING', u'PLAYED', u'GOING', u'WENT'),
   (u'PLAYING', u'PLAYED', u'PAYING', u'PAID'),
   (u'SAYING', u'SAID', u'TAKING', u'TOOK'),
   (u'SAYING', u'SAID', u'GOING', u'WENT'),
   (u'SAYING', u'SAID', u'PAYING', u'PAID'),
   (u'SAYING', u'SAID', u'PLAYING', u'PLAYED'),
   (u'TAKING', u'TOOK', u'GOING', u'WENT'),
   (u'TAKING', u'TOOK', u'PAYING', u'PAID'),
   (u'TAKING', u'TOOK', u'PLAYING', u'PLAYED'),
   (u'TAKING', u'TOOK', u'SAYING', u'SAID')],
  'section': u'gram7-past-tense'},
 {'correct': [],
  'incorrect': [(u'BUILDING', u'BUILDINGS', u'CAR', u'CARS'),
   (u'BUILDING', u'BUILDINGS', u'CHILD', u'CHILDREN'),
   (u'BUILDING', u'BUILDINGS', u'MAN', u'MEN'),
   (u'CAR', u'CARS', u'CHILD', u'CHILDREN'),
   (u'CAR', u'CARS', u'MAN', u'MEN'),
   (u'CAR', u'CARS', u'BUILDING', u'BUILDINGS'),
   (u'CHILD', u'CHILDREN', u'MAN', u'MEN'),
   (u'CHILD', u'CHILDREN', u'BUILDING', u'BUILDINGS'),
   (u'CHILD', u'CHILDREN', u'CAR', u'CARS'),
   (u'MAN', u'MEN', u'BUILDING', u'BUILDINGS'),
   (u'MAN', u'MEN', u'CAR', u'CARS'),
   (u'MAN', u'MEN', u'CHILD', u'CHILDREN')],
  'section': u'gram8-plural'},
 {'correct': [], 'incorrect': [], 'section': u'gram9-plural-verbs'},
 {'correct': [(u'LOOK', u'LOOKING', u'SAY', u'SAYING')],
  'incorrect': [(u'HE', u'SHE', u'HIS', u'HER'),
   (u'HIS', u'HER', u'HE', u'SHE'),
   (u'GOOD', u'BETTER', u'GREAT', u'GREATER'),
   (u'GOOD', u'BETTER', u'LONG', u'LONGER'),
   (u'GOOD', u'BETTER', u'LOW', u'LOWER'),
   (u'GREAT', u'GREATER', u'LONG', u'LONGER'),
   (u'GREAT', u'GREATER', u'LOW', u'LOWER'),
   (u'GREAT', u'GREATER', u'GOOD', u'BETTER'),
   (u'LONG', u'LONGER', u'LOW', u'LOWER'),
   (u'LONG', u'LONGER', u'GOOD', u'BETTER'),
   (u'LONG', u'LONGER', u'GREAT', u'GREATER'),
   (u'LOW', u'LOWER', u'GOOD', u'BETTER'),
   (u'LOW', u'LOWER', u'GREAT', u'GREATER'),
   (u'LOW', u'LOWER', u'LONG', u'LONGER'),
   (u'BIG', u'BIGGEST', u'GOOD', u'BEST'),
   (u'BIG', u'BIGGEST', u'GREAT', u'GREATEST'),
   (u'BIG', u'BIGGEST', u'LARGE', u'LARGEST'),
   (u'GOOD', u'BEST', u'GREAT', u'GREATEST'),
   (u'GOOD', u'BEST', u'LARGE', u'LARGEST'),
   (u'GOOD', u'BEST', u'BIG', u'BIGGEST'),
   (u'GREAT', u'GREATEST', u'LARGE', u'LARGEST'),
   (u'GREAT', u'GREATEST', u'BIG', u'BIGGEST'),
   (u'GREAT', u'GREATEST', u'GOOD', u'BEST'),
   (u'LARGE', u'LARGEST', u'BIG', u'BIGGEST'),
   (u'LARGE', u'LARGEST', u'GOOD', u'BEST'),
   (u'LARGE', u'LARGEST', u'GREAT', u'GREATEST'),
   (u'GO', u'GOING', u'LOOK', u'LOOKING'),
   (u'GO', u'GOING', u'PLAY', u'PLAYING'),
   (u'GO', u'GOING', u'RUN', u'RUNNING'),
   (u'GO', u'GOING', u'SAY', u'SAYING'),
   (u'LOOK', u'LOOKING', u'PLAY', u'PLAYING'),
   (u'LOOK', u'LOOKING', u'RUN', u'RUNNING'),
   (u'LOOK', u'LOOKING', u'GO', u'GOING'),
   (u'PLAY', u'PLAYING', u'RUN', u'RUNNING'),
   (u'PLAY', u'PLAYING', u'SAY', u'SAYING'),
   (u'PLAY', u'PLAYING', u'GO', u'GOING'),
   (u'PLAY', u'PLAYING', u'LOOK', u'LOOKING'),
   (u'RUN', u'RUNNING', u'SAY', u'SAYING'),
   (u'RUN', u'RUNNING', u'GO', u'GOING'),
   (u'RUN', u'RUNNING', u'LOOK', u'LOOKING'),
   (u'RUN', u'RUNNING', u'PLAY', u'PLAYING'),
   (u'SAY', u'SAYING', u'GO', u'GOING'),
   (u'SAY', u'SAYING', u'LOOK', u'LOOKING'),
   (u'SAY', u'SAYING', u'PLAY', u'PLAYING'),
   (u'SAY', u'SAYING', u'RUN', u'RUNNING'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'FRANCE', u'FRENCH'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'INDIA', u'INDIAN'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'ISRAEL', u'ISRAELI'),
   (u'AUSTRALIA', u'AUSTRALIAN', u'SWITZERLAND', u'SWISS'),
   (u'FRANCE', u'FRENCH', u'INDIA', u'INDIAN'),
   (u'FRANCE', u'FRENCH', u'ISRAEL', u'ISRAELI'),
   (u'FRANCE', u'FRENCH', u'SWITZERLAND', u'SWISS'),
   (u'FRANCE', u'FRENCH', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'INDIA', u'INDIAN', u'ISRAEL', u'ISRAELI'),
   (u'INDIA', u'INDIAN', u'SWITZERLAND', u'SWISS'),
   (u'INDIA', u'INDIAN', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'INDIA', u'INDIAN', u'FRANCE', u'FRENCH'),
   (u'ISRAEL', u'ISRAELI', u'SWITZERLAND', u'SWISS'),
   (u'ISRAEL', u'ISRAELI', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'ISRAEL', u'ISRAELI', u'FRANCE', u'FRENCH'),
   (u'ISRAEL', u'ISRAELI', u'INDIA', u'INDIAN'),
   (u'SWITZERLAND', u'SWISS', u'AUSTRALIA', u'AUSTRALIAN'),
   (u'SWITZERLAND', u'SWISS', u'FRANCE', u'FRENCH'),
   (u'SWITZERLAND', u'SWISS', u'INDIA', u'INDIAN'),
   (u'SWITZERLAND', u'SWISS', u'ISRAEL', u'ISRAELI'),
   (u'GOING', u'WENT', u'PAYING', u'PAID'),
   (u'GOING', u'WENT', u'PLAYING', u'PLAYED'),
   (u'GOING', u'WENT', u'SAYING', u'SAID'),
   (u'GOING', u'WENT', u'TAKING', u'TOOK'),
   (u'PAYING', u'PAID', u'PLAYING', u'PLAYED'),
   (u'PAYING', u'PAID', u'SAYING', u'SAID'),
   (u'PAYING', u'PAID', u'TAKING', u'TOOK'),
   (u'PAYING', u'PAID', u'GOING', u'WENT'),
   (u'PLAYING', u'PLAYED', u'SAYING', u'SAID'),
   (u'PLAYING', u'PLAYED', u'TAKING', u'TOOK'),
   (u'PLAYING', u'PLAYED', u'GOING', u'WENT'),
   (u'PLAYING', u'PLAYED', u'PAYING', u'PAID'),
   (u'SAYING', u'SAID', u'TAKING', u'TOOK'),
   (u'SAYING', u'SAID', u'GOING', u'WENT'),
   (u'SAYING', u'SAID', u'PAYING', u'PAID'),
   (u'SAYING', u'SAID', u'PLAYING', u'PLAYED'),
   (u'TAKING', u'TOOK', u'GOING', u'WENT'),
   (u'TAKING', u'TOOK', u'PAYING', u'PAID'),
   (u'TAKING', u'TOOK', u'PLAYING', u'PLAYED'),
   (u'TAKING', u'TOOK', u'SAYING', u'SAID'),
   (u'BUILDING', u'BUILDINGS', u'CAR', u'CARS'),
   (u'BUILDING', u'BUILDINGS', u'CHILD', u'CHILDREN'),
   (u'BUILDING', u'BUILDINGS', u'MAN', u'MEN'),
   (u'CAR', u'CARS', u'CHILD', u'CHILDREN'),
   (u'CAR', u'CARS', u'MAN', u'MEN'),
   (u'CAR', u'CARS', u'BUILDING', u'BUILDINGS'),
   (u'CHILD', u'CHILDREN', u'MAN', u'MEN'),
   (u'CHILD', u'CHILDREN', u'BUILDING', u'BUILDINGS'),
   (u'CHILD', u'CHILDREN', u'CAR', u'CARS'),
   (u'MAN', u'MEN', u'BUILDING', u'BUILDINGS'),
   (u'MAN', u'MEN', u'CAR', u'CARS'),
   (u'MAN', u'MEN', u'CHILD', u'CHILDREN')],
  'section': 'total'}]

The accuracy method takes an optional restrict_vocab parameter that limits which test examples are considered: questions are only evaluated if all their words fall among the restrict_vocab most frequent words in the model.

In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.

By default it uses the academic WS-353 dataset, but you can create a dataset specific to your business based on it. It contains word pairs together with human-assigned similarity judgments, measuring the relatedness of two words. For example, 'coast' and 'shore' are very similar as they appear in the same contexts, while 'clothes' and 'closet' are less similar because they are related but not interchangeable.


In [16]:
model.evaluate_word_pairs(test_data_dir + 'wordsim353.tsv')


2017-06-29 10:36:32,406 : INFO : Pearson correlation coefficient against /home/chinmaya/GSOC/Gensim/gensim/gensim/test/test_data/wordsim353.tsv: 0.1480
2017-06-29 10:36:32,407 : INFO : Spearman rank-order correlation coefficient against /home/chinmaya/GSOC/Gensim/gensim/gensim/test/test_data/wordsim353.tsv: 0.1820
2017-06-29 10:36:32,408 : INFO : Pairs with unknown words ratio: 85.6%
Out[16]:
((0.14802303571188502, 0.29991609812618542),
 SpearmanrResult(correlation=0.18196628619816893, pvalue=0.20125520403388769),
 85.55240793201133)

Once again, good performance on Google's or WS-353 test set doesn’t mean word2vec will work well in your application, or vice versa. It’s always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial.

Storing and loading models

You can store/load models using the standard gensim methods:


In [17]:
from tempfile import mkstemp

fs, temp_path = mkstemp("gensim_temp")  # creates a temp file

model.save(temp_path)  # save the model


2017-06-29 10:36:33,533 : INFO : saving Word2Vec object under /tmp/tmpNVnhzsgensim_temp, separately None
2017-06-29 10:36:33,535 : INFO : not storing attribute syn0norm
2017-06-29 10:36:33,535 : INFO : not storing attribute cum_table
2017-06-29 10:36:33,550 : INFO : saved /tmp/tmpNVnhzsgensim_temp

In [18]:
new_model = gensim.models.Word2Vec.load(temp_path)  # open the model


2017-06-29 10:36:34,033 : INFO : loading Word2Vec object from /tmp/tmpNVnhzsgensim_temp
2017-06-29 10:36:34,041 : INFO : loading wv recursively from /tmp/tmpNVnhzsgensim_temp.wv.* with mmap=None
2017-06-29 10:36:34,042 : INFO : setting ignored attribute syn0norm to None
2017-06-29 10:36:34,044 : INFO : setting ignored attribute cum_table to None
2017-06-29 10:36:34,048 : INFO : loaded /tmp/tmpNVnhzsgensim_temp

which uses pickle internally, optionally mmap‘ing the model’s internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.

In addition, you can load models created by the original C tool, both using its text and binary formats:

  model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)
  # using gzipped/bz2 input works too, no need to unzip:
  model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)

Online training / Resuming training

Advanced users can load a model and continue training it with more sentences and new vocabulary words:


In [19]:
model = gensim.models.Word2Vec.load(temp_path)
more_sentences = [['Advanced', 'users', 'can', 'load', 'a', 'model', 'and', 'continue', 'training', 'it', 'with', 'more', 'sentences']]
model.build_vocab(more_sentences, update=True)
model.train(more_sentences, total_examples=model.corpus_count, epochs=model.iter)

# cleaning up temp
import os
os.close(fs)
os.remove(temp_path)


2017-06-29 10:36:35,557 : INFO : loading Word2Vec object from /tmp/tmpNVnhzsgensim_temp
2017-06-29 10:36:35,566 : INFO : loading wv recursively from /tmp/tmpNVnhzsgensim_temp.wv.* with mmap=None
2017-06-29 10:36:35,567 : INFO : setting ignored attribute syn0norm to None
2017-06-29 10:36:35,572 : INFO : setting ignored attribute cum_table to None
2017-06-29 10:36:35,573 : INFO : loaded /tmp/tmpNVnhzsgensim_temp
2017-06-29 10:36:35,581 : INFO : collecting all words and their counts
2017-06-29 10:36:35,582 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-06-29 10:36:35,586 : INFO : collected 13 word types from a corpus of 13 raw words and 1 sentences
2017-06-29 10:36:35,587 : INFO : Updating model with new vocabulary
2017-06-29 10:36:35,589 : INFO : New added 0 unique words (0% of original 13)
                        and increased the count of 0 pre-existing words (0% of original 13)
2017-06-29 10:36:35,590 : INFO : deleting the raw counts dictionary of 13 items
2017-06-29 10:36:35,591 : INFO : sample=0.001 downsamples 0 most-common words
2017-06-29 10:36:35,592 : INFO : downsampling leaves estimated 0 word corpus (0.0% of prior 0)
2017-06-29 10:36:35,594 : INFO : estimated required memory for 1723 words and 100 dimensions: 2239900 bytes
2017-06-29 10:36:35,599 : INFO : updating layer weights
2017-06-29 10:36:35,601 : INFO : training model with 4 workers on 1723 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-06-29 10:36:35,606 : INFO : worker thread finished; awaiting finish of 3 more threads
2017-06-29 10:36:35,607 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-06-29 10:36:35,609 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-06-29 10:36:35,613 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-06-29 10:36:35,614 : INFO : training on 65 raw words (28 effective words) took 0.0s, 3413 effective words/s
2017-06-29 10:36:35,615 : WARNING : under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay

You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate.
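To see why this matters: word2vec-style trainers typically decay the learning rate linearly from `alpha` down to `min_alpha` over the expected number of training words, so the total you report determines how quickly alpha drops. A rough sketch of such a schedule (illustrative, not gensim's exact internals):

```python
def decayed_alpha(words_so_far, total_words, alpha=0.025, min_alpha=0.0001):
    """Linear learning-rate decay: start at alpha, reach min_alpha
    once total_words words have been processed."""
    progress = min(1.0, words_so_far / float(total_words))
    return max(min_alpha, alpha - (alpha - min_alpha) * progress)

print(decayed_alpha(0, 1000))     # full alpha at the start
print(decayed_alpha(500, 1000))   # roughly halfway between alpha and min_alpha
print(decayed_alpha(1000, 1000))  # floors at min_alpha
```

Reporting a larger `total_words` than you actually feed in simply leaves the rate higher at the end of the pass, which is often what you want when resuming training.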

Note that it’s not possible to resume training with models loaded via KeyedVectors.load_word2vec_format(), i.e. models generated by the original C tool. You can still use them for querying/similarity, but the information vital for training (the vocab tree) is missing there.

Using the model

Word2Vec supports several word similarity tasks out of the box:


In [20]:
model.most_similar(positive=['human', 'crime'], negative=['party'], topn=1)


2017-06-29 10:36:36,558 : INFO : precomputing L2-norms of word weight vectors
Out[20]:
[('longer', 0.9916001558303833)]
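Under the hood, most_similar combines the positive and negative word vectors into a single query vector and ranks the vocabulary by cosine similarity. A minimal pure-Python sketch of that idea, using made-up toy vectors:

```python
import math

# Toy word vectors (hypothetical values, for illustration only).
vectors = {
    'king':  [0.9, 0.8, 0.1],
    'man':   [0.9, 0.1, 0.1],
    'woman': [0.1, 0.1, 0.9],
    'queen': [0.1, 0.8, 0.9],
    'apple': [0.5, 0.5, 0.5],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def analogy(a, b, c, exclude):
    # vec(b) - vec(a) + vec(c), then nearest neighbour by cosine similarity
    target = [vb - va + vc for va, vb, vc in zip(vectors[a], vectors[b], vectors[c])]
    candidates = (w for w in vectors if w not in exclude)
    return max(candidates, key=lambda w: cosine(target, vectors[w]))

print(analogy('man', 'king', 'woman', exclude={'man', 'king', 'woman'}))  # 'queen'
```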

In [21]:
model.doesnt_match("input is lunch he sentence cat".split())


2017-06-29 10:36:37,587 : WARNING : vectors for words set(['lunch', 'input', 'cat']) are not present in the model, ignoring these words
Out[21]:
'sentence'

In [22]:
print(model.similarity('human', 'party'))
print(model.similarity('tree', 'murder'))


0.99914980326
0.995203599363

You can get the probability distribution for the center word given the context words as input:


In [23]:
print(model.predict_output_word(['emergency', 'beacon', 'received']))


[('more', 0.0010031241), ('can', 0.00092728506), ('continue', 0.00087752187), ('training', 0.00080867682), ('there', 0.00075810013), ('three', 0.00075742434), ('australia', 0.00075092859), ('government', 0.0007505166), ('their', 0.00074284436), ('us', 0.00073618308)]

The results here don't look good because the training corpus is very small. To get meaningful results, you need to train on a much larger corpus (500k+ words).

If you need the raw output vectors in your application, you can access these either on a word-by-word basis:


In [24]:
model['tree']  # raw NumPy vector of a word


Out[24]:
array([ 0.00313283,  0.02270725, -0.0271953 , -0.00758541, -0.03084832,
       -0.03540374,  0.00492509, -0.08605313, -0.00946797, -0.01907749,
        0.0190183 , -0.02690557, -0.0665649 , -0.01858037, -0.02188841,
        0.04523174, -0.03034727, -0.00294408,  0.02323552, -0.02585541,
       -0.06424622, -0.00719615,  0.06513051,  0.04760417,  0.02894664,
        0.01104737, -0.00513022,  0.01997521, -0.0142725 ,  0.00513998,
        0.00099208,  0.07234117,  0.01262072, -0.00151589,  0.02293177,
        0.02008283, -0.03751098,  0.01848946,  0.01339256, -0.06704903,
       -0.07547309,  0.00120684, -0.04351105, -0.02421799,  0.03180883,
       -0.05489816,  0.03536329,  0.00560333, -0.01004709,  0.04278436,
        0.01327703,  0.00862974, -0.03693489,  0.01097009,  0.01643447,
       -0.01702741,  0.05618335, -0.03153885,  0.02427759, -0.03469121,
       -0.03066109, -0.05092006,  0.01682321, -0.03714861, -0.00944941,
       -0.06158038, -0.08220627,  0.03865834,  0.05205975, -0.00297223,
       -0.00764436,  0.00625849, -0.04550754,  0.01111075, -0.04805974,
       -0.05271595,  0.03614397, -0.01115665,  0.04689607,  0.04113698,
       -0.01792447,  0.03566624, -0.01676619,  0.00202644,  0.01494004,
        0.00792563, -0.08858279, -0.08187189,  0.0634894 , -0.02159132,
       -0.05633228, -0.04459627, -0.04756012, -0.0024067 , -0.02443214,
       -0.02618414, -0.0249261 ,  0.02130016, -0.05084078,  0.00092178], dtype=float32)

…or en masse, as a 2D NumPy matrix, via model.wv.syn0.
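Conceptually, that matrix lets you score a query against every word at once: L2-normalise the rows (this is what the "precomputing L2-norms" log line above refers to), after which cosine similarity against the whole vocabulary becomes a single matrix-vector product. A sketch with a toy matrix (values made up, NumPy assumed):

```python
import numpy as np

# Toy stand-in for model.wv.syn0: one row per vocabulary word (values made up).
words = ['first', 'second', 'sentence']
syn0 = np.array([[1.0, 0.0],
                 [0.8, 0.6],
                 [0.0, 1.0]], dtype=np.float32)

# Normalise each row to unit length once, up front.
norms = syn0 / np.linalg.norm(syn0, axis=1, keepdims=True)

# Cosine similarity of 'second' against every word, in one matrix-vector product.
query = norms[words.index('second')]
similarities = norms @ query

ranked = sorted(zip(words, similarities), key=lambda p: -p[1])
print(ranked[0][0])  # the query word itself always ranks first
```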

Training Loss Computation

The compute_loss parameter toggles computation of the loss while training the Word2Vec model. The computed loss is stored in the model attribute running_training_loss and can be retrieved with the function get_latest_training_loss, as follows:


In [25]:
# instantiating and training the Word2Vec model
model_with_loss = gensim.models.Word2Vec(sentences, min_count=1, compute_loss=True, hs=0, sg=1, seed=42)

# getting the training loss value
training_loss = model_with_loss.get_latest_training_loss()
print(training_loss)


2017-06-29 10:36:42,297 : INFO : collecting all words and their counts
2017-06-29 10:36:42,300 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-06-29 10:36:42,333 : INFO : collected 10186 word types from a corpus of 59890 raw words and 300 sentences
2017-06-29 10:36:42,335 : INFO : Loading a fresh vocabulary
2017-06-29 10:36:42,372 : INFO : min_count=1 retains 10186 unique words (100% of original 10186, drops 0)
2017-06-29 10:36:42,372 : INFO : min_count=1 leaves 59890 word corpus (100% of original 59890, drops 0)
2017-06-29 10:36:42,406 : INFO : deleting the raw counts dictionary of 10186 items
2017-06-29 10:36:42,407 : INFO : sample=0.001 downsamples 37 most-common words
2017-06-29 10:36:42,408 : INFO : downsampling leaves estimated 47231 word corpus (78.9% of prior 59890)
2017-06-29 10:36:42,409 : INFO : estimated required memory for 10186 words and 100 dimensions: 13241800 bytes
2017-06-29 10:36:42,452 : INFO : resetting layer weights
2017-06-29 10:36:42,568 : INFO : training model with 3 workers on 10186 vocabulary and 100 features, using sg=1 hs=0 sample=0.001 negative=5 window=5
2017-06-29 10:36:43,357 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-06-29 10:36:43,369 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-06-29 10:36:43,392 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-06-29 10:36:43,393 : INFO : training on 299450 raw words (235935 effective words) took 0.8s, 286433 effective words/s
1348399.25

Benchmarks to see the effect of the training loss computation code on training time

We first download and set up the test data used for the benchmarks.


In [33]:
import os

input_data_files = []

def setup_input_data():
    # check if test data already present
    if os.path.isfile('./text8') is False:

        # download and decompress 'text8' corpus
        import zipfile
        ! wget 'http://mattmahoney.net/dc/text8.zip'
        ! unzip 'text8.zip'
    
        # create 1 MB, 10 MB and 50 MB files
        ! head -c1000000 text8 > text8_1000000
        ! head -c10000000 text8 > text8_10000000
        ! head -c50000000 text8 > text8_50000000
                
    # add 25 KB test file
    input_data_files.append(os.path.join(os.getcwd(), '../../gensim/test/test_data/lee_background.cor'))

    # add 1 MB test file
    input_data_files.append(os.path.join(os.getcwd(), 'text8_1000000'))

    # add 10 MB test file
    input_data_files.append(os.path.join(os.getcwd(), 'text8_10000000'))

    # add 50 MB test file
    input_data_files.append(os.path.join(os.getcwd(), 'text8_50000000'))

    # add 100 MB test file
    input_data_files.append(os.path.join(os.getcwd(), 'text8'))

setup_input_data()
print(input_data_files)


['/home/chinmaya/GSOC/Gensim/gensim/docs/notebooks/../../gensim/test/test_data/lee_background.cor', '/home/chinmaya/GSOC/Gensim/gensim/docs/notebooks/text8_1000000', '/home/chinmaya/GSOC/Gensim/gensim/docs/notebooks/text8_10000000', '/home/chinmaya/GSOC/Gensim/gensim/docs/notebooks/text8_50000000', '/home/chinmaya/GSOC/Gensim/gensim/docs/notebooks/text8']

We now compare the training time taken for different combinations of input data and model training parameters like hs and sg.


In [34]:
# use only the 25 KB and 50 MB files to generate output quickly; comment out the next line to use all 5 test files
input_data_files = [input_data_files[0], input_data_files[-2]]
print(input_data_files)

import time
import numpy as np
import pandas as pd

train_time_values = []
seed_val = 42
sg_values = [0, 1]
hs_values = [0, 1]

for data_file in input_data_files:
    data = gensim.models.word2vec.LineSentence(data_file) 
    for sg_val in sg_values:
        for hs_val in hs_values:
            for loss_flag in [True, False]:
                time_taken_list = []
                for i in range(3):
                    start_time = time.time()
                    w2v_model = gensim.models.Word2Vec(data, compute_loss=loss_flag, sg=sg_val, hs=hs_val, seed=seed_val) 
                    time_taken_list.append(time.time() - start_time)

                time_taken_list = np.array(time_taken_list)
                time_mean = np.mean(time_taken_list)
                time_std = np.std(time_taken_list)
                train_time_values.append({'train_data': data_file, 'compute_loss': loss_flag, 'sg': sg_val, 'hs': hs_val, 'mean': time_mean, 'std': time_std})

train_times_table = pd.DataFrame(train_time_values)
train_times_table = train_times_table.sort_values(by=['train_data', 'sg', 'hs', 'compute_loss'], ascending=[False, False, True, False])
print(train_times_table)


['/home/chinmaya/GSOC/Gensim/gensim/docs/notebooks/../../gensim/test/test_data/lee_background.cor', '/home/chinmaya/GSOC/Gensim/gensim/docs/notebooks/text8_50000000']
    compute_loss  hs        mean  sg       std                                         train_data
12          True   0  125.242767   1  0.442522  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
13         False   0  124.090732   1  0.432895  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
14          True   1  252.800164   1  1.140344  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
15         False   1  245.065643   1  2.657392  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
8           True   0   43.812430   0  1.216697  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
9          False   0   42.815214   0  0.142814  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
10          True   1   74.801153   0  0.300728  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
11         False   1   74.236441   0  0.126426  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
4           True   0    0.560387   1  0.005805  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
5          False   0    0.687179   1  0.143629  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
6           True   1    1.126855   1  0.004407  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
7          False   1    1.135358   1  0.059161  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
0           True   0    0.316948   0  0.005148  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
1          False   0    0.319236   0  0.005792  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
2           True   1    0.429879   0  0.005373  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...
3          False   1    0.429489   0  0.000756  /home/chinmaya/GSOC/Gensim/gensim/docs/noteboo...

Conclusion

In this tutorial, we learned how to train word2vec models on your custom data and how to evaluate them. We hope you too will find this popular tool useful in your machine learning tasks!

Full word2vec API docs here; get gensim here. Original C toolkit and word2vec papers by Google here.

