Translation Matrix Tutorial

What is it?

Suppose we are given a set of word pairs and their associated vector representations $\{x_{i},z_{i}\}_{i=1}^{n}$, where $x_{i} \in R^{d_{1}}$ is the distributed representation of word $i$ in the source language and $z_{i} \in R^{d_{2}}$ is the vector representation of its translation. Our goal is to find a transformation matrix $W$ such that $Wx_{i}$ approximates $z_{i}$. In practice, $W$ can be learned by solving the following optimization problem:

$\min \limits_{W} \sum \limits_{i=1}^{n} ||Wx_{i}-z_{i}||^{2}$
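
The TranslationMatrix class used later in this tutorial solves this least-squares problem for us. Purely as an illustration, here is a minimal NumPy sketch of the same idea on synthetic data (all dimensions and data here are made up for the example):


In [ ]:
import numpy as np

# Toy illustration (not gensim's internal code): estimate W by ordinary least squares.
# X stacks the source vectors x_i as rows, Z the corresponding target vectors z_i.
rng = np.random.default_rng(0)
d1, d2, n = 4, 3, 100
X = rng.normal(size=(n, d1))                             # source vectors x_i
W_true = rng.normal(size=(d2, d1))                       # hidden "true" mapping
Z = X.dot(W_true.T) + 0.01 * rng.normal(size=(n, d2))    # noisy targets z_i

# min_W sum_i ||W x_i - z_i||^2 is the least-squares problem X B ~= Z with B = W^T
B, _, _, _ = np.linalg.lstsq(X, Z, rcond=None)
W = B.T                                                  # maps a source vector x to W.dot(x)
print(np.allclose(W, W_true, atol=0.05))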

Resources

Tomas Mikolov, Quoc V. Le, Ilya Sutskever. 2013. Exploiting Similarities among Languages for Machine Translation

Georgiana Dinu, Angeliki Lazaridou and Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness problem


In [1]:
import os

from gensim import utils
from gensim.models import translation_matrix
from gensim.models import KeyedVectors

For this tutorial, we'll train our model on English -> Italian word pairs from the OPUS collection. This corpus contains 5000 word pairs; each pair consists of an English word and its corresponding Italian word.

Dataset download:

OPUS_en_it_europarl_train_5K.txt



In [2]:
train_file = "OPUS_en_it_europarl_train_5K.txt"

with utils.smart_open(train_file, "r") as f:
    word_pair = [tuple(utils.to_unicode(line).strip().split()) for line in f]
print(word_pair[:10])



This tutorial uses 300-dimensional vectors of English words as the source and vectors of Italian words as the target. (The vectors were trained with the word2vec toolkit using CBOW: the context window was set to 5 words on either side of the target, the sub-sampling option was set to 1e-05, and the probability of a target word was estimated with negative sampling, drawing 10 samples from the noise distribution.)
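
You do not need to train these vectors yourself; the pre-trained files can be downloaded below. Purely for reference, here is a hedged sketch of how vectors with these settings could be trained with gensim's own Word2Vec (the corpus path is a placeholder, and newer gensim releases use `vector_size=` instead of `size=`):


In [ ]:
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# "en_corpus.txt" is a placeholder: one tokenized sentence per line.
sentences = LineSentence("en_corpus.txt")
model = Word2Vec(
    sentences,
    size=300,      # 300-dimensional vectors
    window=5,      # 5 words of context on either side of the target
    sg=0,          # CBOW
    hs=0,          # no hierarchical softmax ...
    negative=10,   # ... use negative sampling with 10 noise samples
    sample=1e-5,   # sub-sampling threshold for frequent words
)
model.wv.save_word2vec_format("EN.vectors.txt", binary=False)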

Download dataset:

EN.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt

IT.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt


In [ ]:
# Load the source language word vector
source_word_vec_file = "EN.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt"
source_word_vec = KeyedVectors.load_word2vec_format(source_word_vec_file, binary=False)

In [ ]:
# Load the target language word vector
target_word_vec_file = "IT.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt"
target_word_vec = KeyedVectors.load_word2vec_format(target_word_vec_file, binary=False)

Train the translation matrix


In [ ]:
transmat = translation_matrix.TranslationMatrix(source_word_vec, target_word_vec, word_pair)
transmat.train(word_pair)
print ("the shape of translation matrix is: ", transmat.translation_matrix.shape)

Prediction Time: For any given new word, we can map it to the other language space by computing $z = Wx$; we then find the word whose representation is closest to $z$ in the target language space, using cosine similarity as the distance metric.
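
The translate() call used below performs this step for us. Purely as an illustration of the idea, here is a rough NumPy sketch of the prediction step, with $W$ of shape $d_{2} \times d_{1}$ as in the formula above (the helper function is hypothetical, and the attribute names assume a gensim 3.x KeyedVectors):


In [ ]:
import numpy as np

def translate_one(word, W, source_kv, target_kv, topn=5):
    z = W.dot(source_kv[word])                 # z = W x, mapped into the target space
    T = target_kv.vectors                      # all target word vectors, one per row
    sims = T.dot(z) / (np.linalg.norm(T, axis=1) * np.linalg.norm(z) + 1e-12)
    return [target_kv.index2word[i] for i in np.argsort(-sims)[:topn]]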

Part one:

Let's look at the translation of some number words. We use the English words (one, two, three, four and five) as the test set.


In [ ]:
# The pair is in the form of (English, Italian), we can see whether the translated word is correct
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)

In [ ]:
for k, v in translated_word.items():
    print("word ", k, " and translated word", v)

Part two:

Let's look at the translation of some fruit words. We use the English words (apple, orange, grape, banana and mango) as the test set.


In [ ]:
words = [("apple", "mela"), ("orange", "arancione"), ("grape", "acino"), ("banana", "banana"), ("mango", "mango")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)
for k, v in translated_word.items():
    print("word ", k, " and translated word", v)

Part three:

Let's look at the translation of some animal words. We use the English words (dog, pig, cat, horse and birds) as the test set.


In [ ]:
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("fish", "cavallo"), ("birds", "uccelli")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)
for k, v in translated_word.items():
    print("word ", k, " and translated word", v)

The Creation Time for the Translation Matrix

To test the creation time, we extracted more word pairs from a dictionary built from Europarl (en-it). We obtained about 20K word pairs and their corresponding word vectors, or you can download them from this: word_dict.pkl


In [ ]:
import pickle
word_dict = "word_dict.pkl"
with utils.smart_open(word_dict, "rb") as f:
    word_pair = pickle.load(f)
print("the length of word pair ", len(word_pair))

In [ ]:
import time

test_case = 10
word_pair_length = len(word_pair)
step = word_pair_length // test_case

duration = []
sizeofword = []

for idx in range(0, test_case):
    sub_pair = word_pair[: (idx + 1) * step]

    startTime = time.time()
    transmat = translation_matrix.TranslationMatrix(source_word_vec, target_word_vec, sub_pair)
    transmat.train(sub_pair)
    endTime = time.time()
    
    sizeofword.append(len(sub_pair))
    duration.append(endTime - startTime)

In [ ]:
import plotly
from plotly.graph_objs import Scatter, Layout

plotly.offline.init_notebook_mode(connected=True)

plotly.offline.iplot({
    "data": [Scatter(x=sizeofword, y=duration)],
    "layout": Layout(title="time for creation"),
}, filename="tm_creation_time.html")

You will see a two-dimensional plot whose horizontal axis is the number of word pairs and whose vertical axis is the time (in seconds) needed to train a translation matrix. As the size of the corpus increases, the time increases roughly linearly.
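
If you want a quick numeric check of that claim, you can fit a straight line to the measurements collected above (a small optional sketch):


In [ ]:
import numpy as np

# slope ~ extra seconds per additional word pair, intercept ~ fixed overhead
slope, intercept = np.polyfit(sizeofword, duration, deg=1)
print("seconds per extra word pair: %.6f (fixed overhead: %.3f s)" % (slope, intercept))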

Linear Relationship Between Languages

To better understand the principle behind this, we visualize the word vectors using PCA. We notice that the vector representations of similar words in different languages are related by a linear transformation.


In [ ]:
from sklearn.decomposition import PCA

import plotly
from plotly.graph_objs import Scatter, Layout, Figure
plotly.offline.init_notebook_mode(connected=True)

In [ ]:
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]

en_words, it_words = zip(*words)

pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)

# the matplotlib version below was replaced by plotly; kept commented out for reference
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
#     plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))

# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
#     plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# plt.show()

In [ ]:
# you can also use the plotly library to plot both languages in one figure
trace1 = Scatter(
    x = new_en_words_vec[:, 0],
    y = new_en_words_vec[:, 1],
    mode = 'markers+text',
    text = en_words,
    textposition = 'top'
)
trace2 = Scatter(
    x = new_it_words_vec[:, 0],
    y = new_it_words_vec[:, 1],
    mode = 'markers+text',
    text = it_words,
    textposition = 'top'
)
layout = Layout(
    showlegend = False
)
data = [trace1, trace2]

fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_number.html')

The figure shows that the word vectors for the English numbers one to five and the corresponding Italian words uno to cinque have similar geometric arrangements, so the relationship between the vector spaces of the two languages can be captured by a linear mapping. If we know the translations of one to four from English to Italian, we can learn a transformation matrix that helps us translate five (or other numbers) into Italian.


In [ ]:
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
en_words, it_words = zip(*words)
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]

# Translate the English word five into Italian
translated_word = transmat.translate([en_words[4]], 3)
print("translation of five: ", translated_word)

# the translated words of five
for item in translated_word[en_words[4]]:
    it_words_vec.append(target_word_vec[item])

pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)

# the matplotlib version below was replaced by plotly; kept commented out for reference
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
#     plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))

# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
#     plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# # annote for the translation of five, the red text annotation is the translation of five
# for idx, item in enumerate(translated_word[en_words[4]]):
#     plt.annotate(item, xy=(new_it_words_vec[idx + 5][0], new_it_words_vec[idx + 5][1]),
#                  xytext=(new_it_words_vec[idx + 5][0] + 0.1, new_it_words_vec[idx + 5][1] + 0.1),
#                  color="red",
#                  arrowprops=dict(facecolor='red', shrink=0.1, width=1, headwidth=2),)
# plt.show()

In [ ]:
trace1 = Scatter(
    x = new_en_words_vec[:, 0],
    y = new_en_words_vec[:, 1],
    mode = 'markers+text',
    text = en_words,
    textposition = 'top'
)
trace2 = Scatter(
    x = new_it_words_vec[:, 0],
    y = new_it_words_vec[:, 1],
    mode = 'markers+text',
    text = it_words,
    textposition = 'top'
)
layout = Layout(
    showlegend = False,
    annotations = [dict(
        x = new_it_words_vec[5][0],
        y = new_it_words_vec[5][1],
        text = translated_word[en_words[4]][0],
        arrowcolor = "black",
        arrowsize = 1.5,
        arrowwidth = 1,
        arrowhead = 0.5
      ), dict(
        x = new_it_words_vec[6][0],
        y = new_it_words_vec[6][1],
        text = translated_word[en_words[4]][1],
        arrowcolor = "black",
        arrowsize = 1.5,
        arrowwidth = 1,
        arrowhead = 0.5
      ), dict(
        x = new_it_words_vec[7][0],
        y = new_it_words_vec[7][1],
        text = translated_word[en_words[4]][2],
        arrowcolor = "black",
        arrowsize = 1.5,
        arrowwidth = 1,
        arrowhead = 0.5
      )]
)
data = [trace1, trace2]

fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_numbers.html')

You will see two different colors of nodes, one for the English words and the other for the Italian words. For the translation of the word five, the top 3 similar words returned are [u'cinque', u'quattro', u'tre']. The translation is clearly convincing.

Let's look at some animal words; the figure shows that most of them also share similar geometric arrangements.


In [ ]:
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("horse", "cavallo"), ("birds", "uccelli")]
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]

en_words, it_words = zip(*words)

# the matplotlib version below was replaced by plotly; kept commented out for reference
# pca = PCA(n_components=2)
# new_en_words_vec = pca.fit_transform(en_words_vec)
# new_it_words_vec = pca.fit_transform(it_words_vec)

# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
#     plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))

# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
#     plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# plt.show()

In [ ]:
trace1 = Scatter(
    x = new_en_words_vec[:, 0],
    y = new_en_words_vec[:, 1],
    mode = 'markers+text',
    text = en_words,
    textposition = 'top'
)
trace2 = Scatter(
    x = new_it_words_vec[:, 0],
    y = new_it_words_vec[:, 1],
    mode = 'markers+text',
    text = it_words,
    textposition ='top'
)
layout = Layout(
    showlegend = False
)
data = [trace1, trace2]

fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_animal.html')

In [ ]:
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("horse", "cavallo"), ("birds", "uccelli")]
en_words, it_words = zip(*words)
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]

# Translate the English word birds into Italian
translated_word = transmat.translate([en_words[4]], 3)
print("translation of birds: ", translated_word)

# the translated words of birds
for item in translated_word[en_words[4]]:
    it_words_vec.append(target_word_vec[item])

pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)

# the matplotlib version below was replaced by plotly; kept commented out for reference
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
#     plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))

# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
#     plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# # annote for the translation of five, the red text annotation is the translation of five
# for idx, item in enumerate(translated_word[en_words[4]]):
#     plt.annotate(item, xy=(new_it_words_vec[idx + 5][0], new_it_words_vec[idx + 5][1]),
#                  xytext=(new_it_words_vec[idx + 5][0] + 0.1, new_it_words_vec[idx + 5][1] + 0.1),
#                  color="red",
#                  arrowprops=dict(facecolor='red', shrink=0.1, width=1, headwidth=2),)
# plt.show()

In [ ]:
trace1 = Scatter(
    x = new_en_words_vec[:, 0],
    y = new_en_words_vec[:, 1],
    mode = 'markers+text',
    text = en_words,
    textposition = 'top'
)
trace2 = Scatter(
    x = new_it_words_vec[:5, 0],
    y = new_it_words_vec[:5, 1],
    mode = 'markers+text',
    text = it_words[:5],
    textposition = 'top'
)
layout = Layout(
    showlegend = False,
    annotations = [dict(
        x = new_it_words_vec[5][0],
        y = new_it_words_vec[5][1],
        text = translated_word[en_words[4]][0],
        arrowcolor = "black",
        arrowsize = 1.5,
        arrowwidth = 1,
        arrowhead = 0.5
      ), dict(
        x = new_it_words_vec[6][0],
        y = new_it_words_vec[6][1],
        text = translated_word[en_words[4]][1],
        arrowcolor = "black",
        arrowsize = 1.5,
        arrowwidth = 1,
        arrowhead = 0.5
      ), dict(
        x = new_it_words_vec[7][0],
        y = new_it_words_vec[7][1],
        text = translated_word[en_words[4]][2],
        arrowcolor = "black",
        arrowsize = 1.5,
        arrowwidth = 1,
        arrowhead = 0.5
      )]
)
data = [trace1, trace2]

fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_animal.html')

Again you will see two different colors of nodes, one for the English words and the other for the Italian words. For the translation of the word birds, the top 3 similar words returned are [u'uccelli', u'garzette', u'iguane']. The animal translations are as convincing as the number ones.

Translation Matrix Revisited

Warning: this part is unstable/experimental, it requires more experimentation and will change soon!

As discussed in this PR, the translation matrix can not only be used to translate words from a source language to a target language, but also to translate new document vectors back into the space of an old model.

For example, suppose we have trained 15k documents with doc2vec (call this model1), and we are going to train 35k new documents with doc2vec (call this model2). We include those 15k documents as reference documents in the new training set, so model2 is trained on 50k documents. We then have 15k document vectors from model1 and 50k document vectors from model2, and both models have vectors for the shared 15k documents. We can use these shared vectors to learn a mapping from model2's space to model1's space. With this mapping, we can project model2's vectors back into model1's space; in this way, vectors for the 35k new documents are obtained in the old model's space.
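
gensim's BackMappingTranslationMatrix, used further below, implements this idea. Conceptually it boils down to a least-squares mapping between the two document-vector spaces, roughly like the following sketch (the function names are made up for illustration; each input is an array with one document vector per row):


In [ ]:
import numpy as np

def fit_back_mapping(shared_vecs_model2, shared_vecs_model1):
    # Solve min_W ||V2 W - V1||^2 over the documents both models have seen.
    W, _, _, _ = np.linalg.lstsq(shared_vecs_model2, shared_vecs_model1, rcond=None)
    return W

def back_map(new_vecs_model2, W):
    # Project vectors that exist only in model2 into model1's space.
    return new_vecs_model2.dot(W)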

In this notebook, we use the IMDB dataset as an example. For more information about this dataset, please refer to this. Some of the code is borrowed from this notebook.


In [ ]:
import gensim
from gensim.models.doc2vec import TaggedDocument
from gensim.models import Doc2Vec
from collections import namedtuple
from gensim import utils

def read_sentimentDocs():
    SentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')

    alldocs = []  # will hold all docs in original order
    with utils.smart_open('aclImdb/alldata-id.txt', encoding='utf-8') as alldata:
        for line_no, line in enumerate(alldata):
            tokens = gensim.utils.to_unicode(line).split()
            words = tokens[1:]
            tags = [line_no] # `tags = [tokens[0]]` would also work at extra memory cost
            split = ['train','test','extra','extra'][line_no // 25000]  # 25k train, 25k test, 25k extra
            sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no // 12500] # [12.5K pos, 12.5K neg]*2 then unknown
            alldocs.append(SentimentDocument(words, tags, split, sentiment))

    train_docs = [doc for doc in alldocs if doc.split == 'train']
    test_docs = [doc for doc in alldocs if doc.split == 'test']
    doc_list = alldocs[:]  # for reshuffling per pass

    print('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))

    return train_docs, test_docs, doc_list

train_docs, test_docs, doc_list = read_sentimentDocs()

small_corpus = train_docs[:15000]
large_corpus = train_docs + test_docs

print(len(train_docs), len(test_docs), len(doc_list), len(small_corpus), len(large_corpus))

Here we train two doc2vec models; the parameters can be chosen as you like. We train model1 on 15k documents and model2 on 50k documents. As discussed above, the documents used for model1 should also be mixed into the corpus for model2.


In [ ]:
# Due to limited compute, these models were not trained inside the notebook.
# You can train them on a server and save the models to disk.
import multiprocessing
from random import shuffle

cores = multiprocessing.cpu_count()
model1 = Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores)
model2 = Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores)

small_train_docs = train_docs[:15000]
# train for small corpus
model1.build_vocab(small_train_docs)
for epoch in range(50):
    shuffle(small_train_docs)
    model1.train(small_train_docs, total_examples=len(small_train_docs), epochs=1)
model1.save("small_doc_15000_iter50.bin")

large_train_docs = train_docs + test_docs
# train for large corpus
model2.build_vocab(large_train_docs)
for epoch in range(50):
    shuffle(large_train_docs)
    model2.train(large_train_docs, total_examples=len(large_train_docs), epochs=1)
# save the model
model2.save("large_doc_50000_iter50.bin")

For the IMDB dataset, we train a classifier on the training data, which contains 25k documents with positive and negative labels, and then use this classifier to predict the test data, to see what accuracy the document vectors learned by the different methods achieve.


In [ ]:
import os
import numpy as np
from sklearn.linear_model import LogisticRegression

def test_classifier_error(train, train_label, test, test_label):
    classifier = LogisticRegression()
    classifier.fit(train, train_label)
    score = classifier.score(test, test_label)
    print "the classifier score :", score
    return score

In the first experiment, we use the vectors learned by the doc2vec method. To evaluate these document vectors, we split the 50k documents into two parts, one for training and the other for testing.


In [ ]:
#you can change the data folder
basedir = "/home/robotcator/doc2vec"

model2 = Doc2Vec.load(os.path.join(basedir, "large_doc_50000_iter50.bin"))
m2 = []
for i in range(len(large_corpus)):
    m2.append(model2.docvecs[large_corpus[i].tags])

train_array = np.zeros((25000, 100))
train_label = np.zeros((25000, 1))
test_array = np.zeros((25000, 100))
test_label = np.zeros((25000, 1))

for i in range(12500):
    train_array[i] = m2[i]
    train_label[i] = 1

    train_array[i + 12500] = m2[i + 12500]
    train_label[i + 12500] = 0

    test_array[i] = m2[i + 25000]
    test_label[i] = 1

    test_array[i + 12500] = m2[i + 37500]
    test_label[i + 12500] = 0

print "The vectors are learned by doc2vec method"
test_classifier_error(train_array, train_label, test_array, test_label)

In the second experiment, the document vectors are learned by the back-mapping method, which fits a linear mapping between model1 and model2. Just like the translation matrix for word translation: given the vector of one of the additional 35k documents in model2, we can infer its vector in model1's space.


In [ ]:
from gensim.models import translation_matrix
# you can change the data folder
basedir = "/home/robotcator/doc2vec"

model1 = Doc2Vec.load(os.path.join(basedir, "small_doc_15000_iter50.bin"))
model2 = Doc2Vec.load(os.path.join(basedir, "large_doc_50000_iter50.bin"))

l = model1.docvecs.count
l2 = model2.docvecs.count
m1 = np.array([model1.docvecs[large_corpus[i].tags].flatten() for i in range(l)])

# learn the mapping between the two models
model = translation_matrix.BackMappingTranslationMatrix(large_corpus[:15000], model1, model2)
model.train(large_corpus[:15000])

for i in range(l, l2):
    infered_vec = model.infer_vector(model2.docvecs[large_corpus[i].tags])
    m1 = np.vstack((m1, infered_vec.flatten()))

train_array = np.zeros((25000, 100))
train_label = np.zeros((25000, 1))
test_array = np.zeros((25000, 100))
test_label = np.zeros((25000, 1))

# among these documents, 25k have positive labels and 25k have negative labels
for i in range(12500):
    train_array[i] = m1[i]
    train_label[i] = 1

    train_array[i + 12500] = m1[i + 12500]
    train_label[i + 12500] = 0

    test_array[i] = m1[i + 25000]
    test_label[i] = 1

    test_array[i + 12500] = m1[i + 37500]
    test_label[i + 12500] = 0

print "The vectors are learned by back-mapping method"
test_classifier_error(train_array, train_label, test_array, test_label)

As we can see, the vectors learned by the back-mapping method perform reasonably well, but there is still room for improvement.

Visualization

We pick some documents and extract their vectors from both model1 and model2; we can see that they also share a similar geometric arrangement.


In [ ]:
from sklearn.decomposition import PCA

import plotly
from plotly.graph_objs import Scatter, Layout, Figure
plotly.offline.init_notebook_mode(connected=True)

m1_part = m1[14995: 15000]
m2_part = m2[14995: 15000]

m1_part = np.array(m1_part).reshape(len(m1_part), 100)
m2_part = np.array(m2_part).reshape(len(m2_part), 100)

pca = PCA(n_components=2)
reduced_vec1 = pca.fit_transform(m1_part)
reduced_vec2 = pca.fit_transform(m2_part)

In [ ]:
trace1 = Scatter(
    x = reduced_vec1[:, 0],
    y = reduced_vec1[:, 1],
    mode = 'markers+text',
    text = ['doc' + str(i) for i in range(len(reduced_vec1))],
    textposition = 'top'
)
trace2 = Scatter(
    x = reduced_vec2[:, 0],
    y = reduced_vec2[:, 1],
    mode = 'markers+text',
    text = ['doc' + str(i) for i in range(len(reduced_vec1))],
    textposition ='top'
)
layout = Layout(
    showlegend = False
)
data = [trace1, trace2]

fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='doc_vec_vis')

In [ ]:
m1_part = m1[14995: 15002]
m2_part = m2[14995: 15002]

m1_part = np.array(m1_part).reshape(len(m1_part), 100)
m2_part = np.array(m2_part).reshape(len(m2_part), 100)

pca = PCA(n_components=2)
reduced_vec1 = pca.fit_transform(m1_part)
reduced_vec2 = pca.fit_transform(m2_part)

trace1 = Scatter(
    x = reduced_vec1[:, 0],
    y = reduced_vec1[:, 1],
    mode = 'markers+text',
    text = ['sdoc' + str(i) for i in range(len(reduced_vec1))],
    textposition = 'top'
)
trace2 = Scatter(
    x = reduced_vec2[:, 0],
    y = reduced_vec2[:, 1],
    mode = 'markers+text',
    text = ['tdoc' + str(i) for i in range(len(reduced_vec1))],
    textposition ='top'
)
layout = Layout(
    showlegend = False
)
data = [trace1, trace2]

fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='doc_vec_vis')

You will see two kinds of colored points. One set is for model1: the document vectors sdoc0 to sdoc4 are learned by doc2vec, while sdoc5 and sdoc6 are learned by back-mapping. The other set is for model2: tdoc0 to tdoc6 are all learned by doc2vec. We can see that the points learned by the back-mapping method largely preserve their relative positions with respect to the points learned by doc2vec.

