In case you missed the buzz, word2vec is widely featured as a member of the “new wave” of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output is a set of vectors, one vector per word, with remarkable linear relationships that allow us to do things like vec(“king”) – vec(“man”) + vec(“woman”) =~ vec(“queen”), or discover that vec(“Montreal Canadiens”) – vec(“Montreal”) + vec(“Toronto”) resembles the vector for “Toronto Maple Leafs”.
Word2vec is very useful in automatic text tagging, recommender systems and machine translation.
Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec on the Google News dataset of about 100 billion words.
In this tutorial you will learn how to train and evaluate word2vec models on your business data.
In [1]:
# import modules & set up logging
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
In [2]:
sentences = [['first', 'sentence'], ['second', 'sentence']]
# train word2vec on the two sentences
model = gensim.models.Word2Vec(sentences, min_count=1)
Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.
Gensim only requires that the input provide sentences sequentially when iterated over. No need to keep everything in RAM: we can provide one sentence, process it, forget it, load another sentence…
For example, if our input is strewn across several files on disk, with one sentence per line, then instead of loading everything into an in-memory list, we can process the input file by file, line by line:
In [3]:
# create some toy data to use with the following example
import smart_open, os
if not os.path.exists('./data/'):
    os.makedirs('./data/')

filenames = ['./data/f1.txt', './data/f2.txt']
for i, fname in enumerate(filenames):
    with smart_open.smart_open(fname, 'w') as fout:
        for line in sentences[i]:
            fout.write(line + '\n')
In [4]:
class MySentences(object):
    def __init__(self, dirname):
        self.dirname = dirname

    def __iter__(self):
        for fname in os.listdir(self.dirname):
            for line in open(os.path.join(self.dirname, fname)):
                yield line.split()
In [5]:
sentences = MySentences('./data/') # a memory-friendly iterator
print(list(sentences))
In [6]:
# generate the Word2Vec model
model = gensim.models.Word2Vec(sentences, min_count=1)
In [7]:
print(model)
print(model.wv.vocab)
Say we want to further preprocess the words from the files — convert to unicode, lowercase, remove numbers, extract named entities… All of this can be done inside the MySentences iterator and word2vec doesn’t need to know. All that is required is that the input yields one sentence (a list of utf8 words) after another.
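For instance, here is a minimal sketch of such a preprocessing iterator, using gensim's simple_preprocess helper to lowercase and tokenize the raw text (the class name and directory layout are just an illustration, not part of any gensim API):

class MyPreprocessedSentences(object):
    def __init__(self, dirname):
        self.dirname = dirname

    def __iter__(self):
        for fname in os.listdir(self.dirname):
            for line in open(os.path.join(self.dirname, fname)):
                # lowercase, tokenize and drop very short/long tokens;
                # returns a list of unicode strings
                yield gensim.utils.simple_preprocess(line)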
Note to advanced users: calling Word2Vec(sentences, iter=1) will run two passes over the sentences iterator. In general it runs iter+1 passes: the first pass collects the words and their frequencies to build the internal vocabulary, and the remaining iter passes train the neural model. By the way, the default value is iter=5, to match the default of Google's original word2vec C tool.
These two passes can also be initiated manually, in case your input stream is non-repeatable (you can only afford one pass), and you’re able to initialize the vocabulary some other way:
In [8]:
# build the same model, making the 2 steps explicit
new_model = gensim.models.Word2Vec(min_count=1) # an empty model, no training
new_model.build_vocab(sentences) # can be a non-repeatable, 1-pass generator
new_model.train(sentences, total_examples=new_model.corpus_count, epochs=new_model.iter)  # can be a non-repeatable, 1-pass generator
Out[8]:
In [9]:
print(new_model)
print(new_model.wv.vocab)
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim):
In [10]:
# Set file names for train and test data
test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) + os.sep
lee_train_file = test_data_dir + 'lee_background.cor'
In [11]:
class MyText(object):
    def __iter__(self):
        for line in open(lee_train_file):
            # assume there's one document per line, tokens separated by whitespace
            yield line.lower().split()
sentences = MyText()
print(sentences)
Word2Vec accepts several parameters that affect both training speed and quality.
One of them, min_count, prunes the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there's not enough data to learn anything meaningful for those words, so it's best to ignore them:
In [12]:
# default value of min_count=5
model = gensim.models.Word2Vec(sentences, min_count=10)
In [13]:
# default value of size=100
model = gensim.models.Word2Vec(sentences, size=200)
Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.
The last of the major parameters (full list here) is for training parallelization, to speed up training:
In [14]:
# default value of workers=3 (tutorial says 1...)
model = gensim.models.Word2Vec(sentences, workers=4)
The workers parameter only has an effect if you have Cython installed. Without Cython, you'll only be able to use one core because of the GIL (and word2vec training will be miserably slow).
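A quick way to check whether the optimized, compiled training routines are actually available in your installation is gensim's FAST_VERSION flag (a diagnostic only):

from gensim.models.word2vec import FAST_VERSION
# FAST_VERSION >= 0 means the compiled (Cython) training routines are in use;
# FAST_VERSION == -1 means training falls back to the slow, single-core Python path
print(FAST_VERSION)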
At its core, the word2vec model parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by the min_count parameter) times #size (the size parameter) of floats (single precision, aka 4 bytes).
Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000 * 200 * 4 * 3 bytes = ~229 MB.
There's a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, the memory footprint will be dominated by the three matrices above.
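That back-of-the-envelope estimate is easy to reproduce; here is a tiny helper (the function name and the example numbers are purely illustrative):

def estimate_word2vec_memory_mb(vocab_size, size, num_matrices=3, bytes_per_float=4):
    """Rough RAM estimate for the model matrices, in megabytes."""
    return vocab_size * size * bytes_per_float * num_matrices / 1024.0 ** 2

# 100,000 words with size=200 -> roughly 229 MB for the matrices alone
print(estimate_word2vec_memory_mb(100000, 200))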
Word2Vec training is an unsupervised task; there's no good way to objectively evaluate the result. Evaluation depends on your end application.
Google has released its test set of about 20,000 syntactic and semantic test examples, following the “A is to B as C is to D” task. It is provided in the 'datasets' folder.
For example, a syntactic analogy of the comparative type is bad:worse;good:?. There are a total of 9 types of syntactic comparisons in the dataset, such as plural nouns and nouns of opposite meaning.
The semantic questions contain five types of semantic analogies, such as capital cities (Paris:France;Tokyo:?) or family members (brother:sister;dad:?).
Gensim supports the same evaluation set, in exactly the same format:
In [15]:
model.accuracy('./datasets/questions-words.txt')
Out[15]:
This accuracy method takes an optional parameter, restrict_vocab, which limits which test examples are considered.
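For example (the value 30000 below is only an illustration):

# only consider analogy questions whose words all fall within
# the 30,000 most frequent words of the model's vocabulary
model.accuracy('./datasets/questions-words.txt', restrict_vocab=30000)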
In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.
By default it uses an academic dataset, WS-353, but one can create a dataset specific to your business based on it. It contains word pairs together with human-assigned similarity judgments. It measures the relatedness or co-occurrence of two words. For example, 'coast' and 'shore' are very similar as they appear in the same context. At the same time, 'clothes' and 'closet' are less similar because they are related but not interchangeable.
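If you want to build an evaluation set tailored to your own domain, it only needs to follow the same tab-separated format as WS-353: two words plus a human-assigned similarity score per line. A minimal sketch of the format (the file name, pairs and scores are made up purely for illustration; with a real, larger set of pairs you would evaluate it exactly like WS-353 below):

custom_pairs = './data/my_word_pairs.tsv'
with smart_open.smart_open(custom_pairs, 'w') as fout:
    # format: word1 <TAB> word2 <TAB> human similarity judgment
    fout.write('coast\tshore\t9.0\n')
    fout.write('clothes\tcloset\t5.0\n')

# with enough in-vocabulary pairs, evaluate just like WS-353:
# model.evaluate_word_pairs(custom_pairs)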
In [16]:
model.evaluate_word_pairs(test_data_dir +'wordsim353.tsv')
Out[16]:
Once again, good performance on Google's or WS-353 test set doesn’t mean word2vec will work well in your application, or vice versa. It’s always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial.
In [17]:
from tempfile import mkstemp
fs, temp_path = mkstemp("gensim_temp") # creates a temp file
model.save(temp_path) # save the model
In [18]:
new_model = gensim.models.Word2Vec.load(temp_path) # open the model
Saving and loading use pickle internally, optionally mmap‘ing the model’s large internal NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.
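For instance, a sketch of loading with memory-mapping enabled (the mmap argument only affects arrays that were large enough to be stored as separate .npy files next to the saved model):

# map the model's big NumPy arrays into memory read-only instead of copying them
mmap_model = gensim.models.Word2Vec.load(temp_path, mmap='r')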
In addition, you can load models created by the original C tool, both using its text and binary formats:
model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)
# using gzipped/bz2 input works too, no need to unzip:
model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)
Advanced users can load a model and continue training it with more sentences and new vocabulary words:
In [19]:
model = gensim.models.Word2Vec.load(temp_path)
more_sentences = [['Advanced', 'users', 'can', 'load', 'a', 'model', 'and', 'continue',
'training', 'it', 'with', 'more', 'sentences']]
model.build_vocab(more_sentences, update=True)
model.train(more_sentences, total_examples=model.corpus_count, epochs=model.iter)
# cleaning up temp
os.close(fs)
os.remove(temp_path)
You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate.
Note that it’s not possible to resume training with models generated by the C tool and loaded via KeyedVectors.load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there.
Word2Vec supports several word similarity tasks out of the box:
In [20]:
model.most_similar(positive=['human', 'crime'], negative=['party'], topn=1)
Out[20]:
In [21]:
model.doesnt_match("input is lunch he sentence cat".split())
Out[21]:
In [22]:
print(model.similarity('human', 'party'))
print(model.similarity('tree', 'murder'))
You can get the probability distribution for the center word given the context words as input:
In [23]:
print(model.predict_output_word(['emergency','beacon','received']))
The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words.
If you need the raw output vectors in your application, you can access these either on a word-by-word basis:
In [24]:
model['tree'] # raw NumPy vector of a word
Out[24]:
…or en masse, as a 2D NumPy matrix from model.wv.syn0.
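A quick sketch of how the per-word vectors line up with that matrix (the attribute names below are those of the gensim version used in this tutorial; they differ in later releases):

import numpy as np

vectors = model.wv.syn0                      # shape: (vocabulary size, vector size)
print(vectors.shape)

# row i of the matrix is the vector for the word model.wv.index2word[i]
idx = model.wv.vocab['tree'].index
print(np.allclose(model['tree'], vectors[idx]))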
In this tutorial we learned how to train word2vec models on your custom data and how to evaluate them. We hope that you too will find this popular tool useful in your machine learning tasks!
Full word2vec API docs here; get gensim here. Original C toolkit and word2vec papers by Google here.