This is experimental code developed by Tomas Mikolov, found in the word2vec Google group: https://groups.google.com/d/msg/word2vec-toolkit/Q49FIrNOQRo/J6KG8mUj45sJ
This is not yet available on PyPI; you need the latest master branch from git.
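For example (assuming the package lives in the danielfrg/word2vec repository, where this wrapper is developed):
pip install git+https://github.com/danielfrg/word2vec.git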
The input format for doc2vec
is still one big text document, but every line should be one document prepended with a unique id, for example:
_*0 This is sentence 1
_*1 This is sentence 2
This example also requires nltk for tokenization. Download the IMDB movie review dataset (aclImdb_v1.tar.gz) and extract it:
tar -xvf aclImdb_v1.tar.gz
Merge the data into one big document with one id per line and do some basic preprocessing: word tokenization.
In [1]:
from __future__ import unicode_literals
In [2]:
import os
import nltk
In [3]:
directories = ['train/pos', 'train/neg', 'test/pos', 'test/neg', 'train/unsup']
In [4]:
input_file = open('/Users/drodriguez/Downloads/alldata.txt', 'w')
In [5]:
id_ = 0
for directory in directories:
    rootdir = os.path.join('/Users/drodriguez/Downloads/aclImdb', directory)
    for subdir, dirs, files in os.walk(rootdir):
        for file_ in files:
            with open(os.path.join(subdir, file_), 'r') as f:
                # Prepend the doc2vec id, e.g. "_*0", to each document
                doc_id = '_*%i' % id_
                id_ = id_ + 1
                text = f.read()
                text = text.decode('utf-8')
                tokens = nltk.word_tokenize(text)
                doc = ' '.join(tokens).lower()
                doc = doc.encode('ascii', 'ignore')
                input_file.write('%s %s\n' % (doc_id, doc))
In [6]:
input_file.close()
In [1]:
%load_ext autoreload
%autoreload 2
In [2]:
import word2vec
In [3]:
word2vec.doc2vec('/Users/drodriguez/Downloads/alldata.txt', '/Users/drodriguez/Downloads/vectors.bin', cbow=0, size=100, window=10, negative=5, hs=0, sample='1e-4', threads=12, iter_=20, min_count=1, verbose=True)
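Here cbow=0 selects the skip-gram architecture, size the vector dimensionality, window the context size, negative the number of negative samples (hs=0 disables hierarchical softmax), sample the down-sampling threshold for frequent words, and iter_ the number of training passes. Note that min_count=1 is needed so that the _* ids, each of which appears only once, are kept in the vocabulary.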
It is possible to load the vectors using the same WordVectors class as for a regular word2vec binary file.
In [1]:
%load_ext autoreload
%autoreload 2
In [2]:
import word2vec
In [3]:
model = word2vec.load('/Users/drodriguez/Downloads/vectors.bin')
In [5]:
model.vectors.shape
Out[5]:
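(The first dimension is the full vocabulary size, which here includes every word plus one entry per _* document id; the second is size=100 from the training call above.)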
The document vectors are identified by the ids we used in the preprocessing section; for example, document 1 has the vector:
In [7]:
model['_*1']
Out[7]:
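This is a plain 100-dimensional numpy array, just like any word vector in the model.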
We can ask for words or documents similar to document 1. Since the _* ids are ordinary tokens in the same embedding space, the results can mix words and other document ids:
In [10]:
indexes, metrics = model.cosine('_*1')
In [11]:
model.generate_response(indexes, metrics).tolist()
Out[11]:
Now it's just a matter of matching the ids back to the data created in the preprocessing step.
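A minimal sketch of that lookup (illustrative only; id2doc is not part of the word2vec package), assuming alldata.txt has the format written above:
In [ ]:
# Map each doc2vec id (e.g. '_*1') back to its original text.
id2doc = {}
with open('/Users/drodriguez/Downloads/alldata.txt') as f:
    for line in f:
        doc_id, _, text = line.partition(' ')
        id2doc[doc_id] = text.strip()

# Print the documents most similar to document 1, skipping plain words.
for token, metric in model.generate_response(indexes, metrics).tolist():
    if token.startswith('_*'):
        print('%.3f %s' % (metric, id2doc[token][:80]))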