WordRank wrapper tutorial on Lee Corpus

WordRank is a word embedding algorithm that captures semantic similarities in text data well. See this notebook for its comparisons to other popular embedding models. This tutorial serves as a guide to using the WordRank wrapper in gensim. You need to install WordRank before proceeding with this tutorial.

Train model

We'll use the Lee corpus for training, which ships with gensim. For WordRank, the two parameters dump_period and iter need to be in sync, because the embedding file is dumped at the start of the next iteration. For example, if you want results after 10 iterations, use iter=11, and dump_period can be any value that divides 10 evenly, in this case 2 or 5.


In [1]:
from gensim.models.wrappers import Wordrank

wr_path = 'wordrank' # path to Wordrank directory
out_dir = 'model' # name of output directory to save data to
data = '../../gensim/test/test_data/lee.cor' # sample corpus

model = Wordrank.train(wr_path, data, out_dir, iter=11, dump_period=5)


Now you can use any of gensim's KeyedVectors functions on this model for further tasks. For example:


In [ ]:
model.most_similar('President')

In [ ]:
model.similarity('President', 'military')

As WordRank provides two sets of embeddings, the word and the context embeddings, you can obtain their sum by setting the ensemble parameter to 1 in the train method.
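
A minimal sketch, reusing the wr_path, data and out_dir from the training cell above (ensemble_model is a hypothetical name):


In [ ]:
# ensemble=1 makes the returned vectors the sum of the word and context embeddings;
# the other parameters are the same as in the earlier training cell
ensemble_model = Wordrank.train(wr_path, data, out_dir, iter=11, dump_period=5, ensemble=1)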

Save and Load models

In case you have trained the model yourself using the demo scripts in WordRank, you can simply load the embedding files into gensim.

Also, WordRank doesn't return the embeddings sorted by word frequency in the corpus, so you can use the sorted_vocab parameter in the load method. For that, you need to provide the vocabulary file generated in the 'matrix.toy' directory (if you used the default names in the demo), where all the metadata is stored.


In [ ]:
wr_word_embedding = 'wordrank.words'  # word embedding file produced by WordRank
vocab_file = 'vocab.txt'  # vocabulary file from the 'matrix.toy' directory

model = Wordrank.load_wordrank_model(wr_word_embedding, vocab_file, sorted_vocab=1)

If you want to load the ensemble embedding, you similarly need to provide the context embedding file and set ensemble to 1 in the load_wordrank_model method.


In [ ]:
wr_context_file = 'wordrank.contexts'  # context embedding file produced by WordRank
model = Wordrank.load_wordrank_model(wr_word_embedding, vocab_file, wr_context_file, sorted_vocab=1, ensemble=1)

You can save these sorted embeddings using the standard gensim methods.


In [ ]:
from tempfile import mkstemp

fs, temp_path = mkstemp("gensim_temp")  # creates a temp file
model.save(temp_path)  # save the model
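
To check that the save round-trips, the model can be loaded back with the standard gensim load method; a minimal sketch (loaded_model is a hypothetical name):


In [ ]:
# Load the model back from the temporary file saved above
loaded_model = Wordrank.load(temp_path)
loaded_model.most_similar('President')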

Evaluating models

Now that the embeddings are loaded in Word2Vec format and sorted according to the word frequencies in the corpus, you can use the evaluation methods provided by gensim on this model.

For example, it can be evaluated on the following Word Analogies and Word Similarity benchmarks.


In [ ]:
word_analogies_file = 'datasets/questions-words.txt'
model.accuracy(word_analogies_file)

In [ ]:
word_similarity_file = 'datasets/ws-353.txt'
model.evaluate_word_pairs(word_similarity_file)

These methods take an optional restrict_vocab parameter, which limits which test examples are considered.
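
For instance, a sketch of restricting the analogy evaluation to the most frequent words; the cutoff of 30000 here is illustrative:


In [ ]:
# Only use analogy questions whose words are among the 30000 most frequent
# vocabulary entries (cutoff chosen for illustration)
model.accuracy(word_analogies_file, restrict_vocab=30000)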

The results here won't look good because the training corpus is very small. To get meaningful results, you need to train on 500k+ words.

Conclusion

We learned how to use the WordRank wrapper on a sample corpus and how to directly load WordRank embedding files into gensim. Once loaded, you can use the standard gensim methods on these embeddings.