Suppose we are given a set of word pairs and their associated vector representations $\{x_{i}, z_{i}\}_{i=1}^{n}$, where $x_{i} \in R^{d_{1}}$ is the distributed representation of word $i$ in the source language, and $z_{i} \in R^{d_{2}}$ is the vector representation of its translation. Our goal is to find a transformation matrix $W$ such that $Wx_{i}$ approximates $z_{i}$. In practice, $W$ can be learned by solving the following optimization problem:
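$$\min_{W} \sum_{i=1}^{n} \|Wx_{i} - z_{i}\|^{2}$$
This is the linear least-squares objective proposed by Mikolov et al. (2013) in the first reference below.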
Tomas Mikolov, Quoc V. Le, Ilya Sutskever. 2013. Exploiting Similarities among Languages for Machine Translation.
Georgiana Dinu, Angeliki Lazaridou, Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness problem.
In [1]:
import os
from gensim import utils
from gensim.models import translation_matrix
from gensim.models import KeyedVectors
For this tutorial, we'll be training our model using English -> Italian word pairs from the OPUS collection. This corpus contains 5000 word pairs; each pair consists of an English word and its corresponding Italian word.
dataset download:
In [2]:
train_file = "OPUS_en_it_europarl_train_5K.txt"
with utils.smart_open(train_file, "r") as f:
    word_pair = [tuple(utils.to_unicode(line).strip().split()) for line in f]
print word_pair[:10]
This tutorial uses 300-dimensional vectors of English words as the source and vectors of Italian words as the target. (The vectors were trained with the word2vec toolkit using CBOW; the context window was set to 5 words on either side of the target, the sub-sampling option was set to 1e-05, and the probability of a target word was estimated with negative sampling, drawing 10 samples from the noise distribution.)
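For reference, roughly equivalent settings using gensim's own Word2Vec class would look like the sketch below. This is only an illustrative assumption: the tutorial's vectors were produced by the original word2vec toolkit, and the tiny `sentences` list is a placeholder you would replace with a real tokenized corpus.
In [ ]:
from gensim.models import Word2Vec
# placeholder corpus; in practice this should be a large tokenized corpus
sentences = [["one", "two", "three"], ["four", "five", "six"]]
# sg=0 selects CBOW; the other parameters mirror the settings described above
toy_model = Word2Vec(sentences, sg=0, size=300, window=5, sample=1e-05,
                     hs=0, negative=10, min_count=1)
toy_model.wv.save_word2vec_format("toy_en_vectors.txt", binary=False)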
dataset download:
In [3]:
# Load the source language word vector
source_word_vec_file = "EN.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt"
source_word_vec = KeyedVectors.load_word2vec_format(source_word_vec_file, binary=False)
In [4]:
#Load the target language word vector
target_word_vec_file = "IT.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt"
target_word_vec = KeyedVectors.load_word2vec_format(target_word_vec_file, binary=False)
Training the translation matrix
In [5]:
transmat = translation_matrix.TranslationMatrix(word_pair, source_word_vec, target_word_vec)
transmat.train(word_pair)
print "the shape of translation matrix is: ", transmat.translation_matrix.shape
Prediction time: for any given new word, we can map it to the other language's space by computing $z = Wx$; we then find the word whose representation is closest to $z$ in the target language space, using cosine similarity as the distance metric.
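Before calling the built-in translate(), here is a minimal sketch of what that prediction step does by hand. It assumes the learned matrix stored in transmat.translation_matrix (printed above) is applied to row vectors, and uses similar_by_vector for the cosine-similarity lookup.
In [ ]:
import numpy as np
x = source_word_vec["one"]                   # source-language vector
z = np.dot(x, transmat.translation_matrix)   # map into the target space (row-vector convention assumed)
print(target_word_vec.similar_by_vector(z, topn=3))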
In [6]:
# each pair is (English, Italian); we can check whether the translated word is correct
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)
In [7]:
for k, v in translated_word.iteritems():
print "word ", k, " and translated word", v
In [8]:
words = [("apple", "mela"), ("orange", "arancione"), ("grape", "acino"), ("banana", "banana"), ("mango", "mango")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)
for k, v in translated_word.iteritems():
print "word ", k, " and translated word", v
In [9]:
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("fish", "cavallo"), ("birds", "uccelli")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)
for k, v in translated_word.iteritems():
print "word ", k, " and translated word", v
To test the creation time, we extracted more word pairs from a dictionary built from Europarl (en-it), obtaining about 20K word pairs and their corresponding word vectors. Alternatively, you can download the prepared word_dict.pkl.
In [10]:
import pickle
word_dict = "word_dict.pkl"
with utils.smart_open(word_dict, "r") as f:
    word_pair = pickle.load(f)
print "the length of word pair ", len(word_pair)
In [11]:
import time
test_case = 10
word_pair_length = len(word_pair)
step = word_pair_length / test_case
duration = []
sizeofword = []
for idx in xrange(0, test_case):
    sub_pair = word_pair[: (idx + 1) * step]
    startTime = time.time()
    transmat = translation_matrix.TranslationMatrix(sub_pair, source_word_vec, target_word_vec)
    transmat.train(sub_pair)
    endTime = time.time()
    sizeofword.append(len(sub_pair))
    duration.append(endTime - startTime)
In [12]:
import plotly
from plotly.graph_objs import Scatter, Layout
plotly.offline.init_notebook_mode(connected=True)
plotly.offline.iplot({
"data": [Scatter(x=sizeofword, y=duration)],
"layout": Layout(title="time for creation"),
}, filename="tm_creation_time.html")
You will see a two-dimensional plot whose horizontal axis is the size of the corpus and whose vertical axis is the time to train a translation matrix (in seconds). As the size of the corpus increases, the training time increases roughly linearly.
To get a better understanding of the principle behind the method, we visualize the word vectors using PCA. As noted by Mikolov et al., the vector representations of similar words in different languages are related by a linear transformation.
In [13]:
from sklearn.decomposition import PCA
import plotly
from plotly.graph_objs import Scatter, Layout, Figure
plotly.offline.init_notebook_mode(connected=True)
In [14]:
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
en_words, it_words = zip(*words)
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# matplotlib code removed; plotly is used for plotting instead
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# plt.show()
In [15]:
# you can also use the plotly lib to plot both sets of words in one figure
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:, 0],
y = new_it_words_vec[:, 1],
mode = 'markers+text',
text = it_words,
textposition = 'top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_number.html')
The figure shows that the word vectors for the English numbers one to five and the corresponding Italian words uno to cinque have similar geometric arrangements. So the relationship between the vector spaces representing these two languages can be captured by a linear mapping. If we know the translations of words such as one and four from English to Italian, we can learn a transformation matrix that helps us translate five or other numbers.
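As an illustrative sketch of that idea (not part of the original notebook), we could fit a matrix on only the first four number pairs and ask it to translate "five". With so few pairs the least-squares fit is heavily underdetermined, so the result may be noisy; in practice you would rely on the matrix trained on the full 5K pairs above. The call signature follows the usage shown earlier in this notebook.
In [ ]:
subset = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro")]
small_transmat = translation_matrix.TranslationMatrix(subset, source_word_vec, target_word_vec)
small_transmat.train(subset)
print(small_transmat.translate(["five"], 3))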
In [16]:
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
en_words, it_words = zip(*words)
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
# translate the English word five to Italian
translated_word = transmat.translate([en_words[4]], 3)
print "translation of five: ", translated_word
# the translated words of five
for item in translated_word[en_words[4]]:
    it_words_vec.append(target_word_vec[item])
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# matplotlib code removed; plotly is used for plotting instead
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# # annote for the translation of five, the red text annotation is the translation of five
# for idx, item in enumerate(translated_word[en_words[4]]):
# plt.annotate(item, xy=(new_it_words_vec[idx + 5][0], new_it_words_vec[idx + 5][1]),
# xytext=(new_it_words_vec[idx + 5][0] + 0.1, new_it_words_vec[idx + 5][1] + 0.1),
# color="red",
# arrowprops=dict(facecolor='red', shrink=0.1, width=1, headwidth=2),)
# plt.show()
In [19]:
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:, 0],
y = new_it_words_vec[:, 1],
mode = 'markers+text',
text = it_words,
textposition = 'top'
)
layout = Layout(
showlegend = False,
annotations = [dict(
x = new_it_words_vec[5][0],
y = new_it_words_vec[5][1],
text = translated_word[en_words[4]][0],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[6][0],
y = new_it_words_vec[6][1],
text = translated_word[en_words[4]][1],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[7][0],
y = new_it_words_vec[7][1],
text = translated_word[en_words[4]][2],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
)]
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_numbers.html')
You will probably see nodes in two different colors, one for the English words and the other for the Italian words. For the translation of the word five, we return the top 3 most similar words, [u'cinque', u'quattro', u'tre']. We can easily see that the translation is convincing.
Let's look at some animal words; the figure shows that most of these words also share similar geometric arrangements.
In [20]:
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("horse", "cavallo"), ("birds", "uccelli")]
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
en_words, it_words = zip(*words)
# matplotlib code removed; plotly is used for plotting instead
# pca = PCA(n_components=2)
# new_en_words_vec = pca.fit_transform(en_words_vec)
# new_it_words_vec = pca.fit_transform(it_words_vec)
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# plt.show()
In [21]:
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:, 0],
y = new_it_words_vec[:, 1],
mode = 'markers+text',
text = it_words,
textposition ='top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_animal.html')
In [22]:
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("horse", "cavallo"), ("birds", "uccelli")]
en_words, it_words = zip(*words)
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
# translate the English word birds to Italian
translated_word = transmat.translate([en_words[4]], 3)
print "translation of birds: ", translated_word
# the translated words of birds
for item in translated_word[en_words[4]]:
    it_words_vec.append(target_word_vec[item])
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# matplotlib code removed; plotly is used for plotting instead
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# # annote for the translation of five, the red text annotation is the translation of five
# for idx, item in enumerate(translated_word[en_words[4]]):
# plt.annotate(item, xy=(new_it_words_vec[idx + 5][0], new_it_words_vec[idx + 5][1]),
# xytext=(new_it_words_vec[idx + 5][0] + 0.1, new_it_words_vec[idx + 5][1] + 0.1),
# color="red",
# arrowprops=dict(facecolor='red', shrink=0.1, width=1, headwidth=2),)
# plt.show()
In [23]:
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:5, 0],
y = new_it_words_vec[:5, 1],
mode = 'markers+text',
text = it_words[:5],
textposition = 'top'
)
layout = Layout(
showlegend = False,
annotations = [dict(
x = new_it_words_vec[5][0],
y = new_it_words_vec[5][1],
text = translated_word[en_words[4]][0],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[6][0],
y = new_it_words_vec[6][1],
text = translated_word[en_words[4]][1],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[7][0],
y = new_it_words_vec[7][1],
text = translated_word[en_words[4]][2],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
)]
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_animal.html')
You will probably see nodes in two different colors, one for the English words and the other for the Italian words. For the translation of the word birds, we return the top 3 most similar words, [u'uccelli', u'garzette', u'iguane']. We can see that the animal translations are convincing as well, just like the numbers.
As discussed in this PR, the translation matrix can be used not only to translate words from a source language to a target language, but also to translate new document vectors back into an old model's space.
For example, suppose we have trained 15k documents using doc2vec (call this model1), and we now want to train 35k new documents using doc2vec (call this model2). We include those 15k documents as reference documents in the new training set, so model2 is trained on 50k documents. We then have 15k document vectors from model1 and 50k document vectors from model2, and both models have vectors for the shared 15k documents. We can use those shared vectors to learn a mapping between the two models and, with this mapping, back-map model2's vectors into model1's space. In this way we obtain vectors for the 35k new documents in model1's space.
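Conceptually, the back-mapping step is just another linear least-squares fit between the shared document vectors; the gensim class used below handles the bookkeeping, but a rough toy sketch with random placeholder data (purely illustrative, not the actual models) looks like this:
In [ ]:
import numpy as np
# toy stand-ins for 100 shared documents' vectors in model2 (source) and model1 (target)
shared_model2_vecs = np.random.rand(100, 10)
shared_model1_vecs = np.random.rand(100, 10)
# fit W so that shared_model2_vecs.dot(W) approximates shared_model1_vecs
W, _, _, _ = np.linalg.lstsq(shared_model2_vecs, shared_model1_vecs, rcond=None)
# back-map a new model2 vector into model1's space
new_doc_vec_model2 = np.random.rand(10)
new_doc_vec_model1 = np.dot(new_doc_vec_model2, W)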
In [2]:
import gensim
from gensim.models.doc2vec import TaggedDocument
from gensim.models import Doc2Vec
from collections import namedtuple
from gensim import utils
def read_sentimentDocs():
    SentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')
    alldocs = []  # will hold all docs in original order
    with utils.smart_open('aclImdb/alldata-id.txt', encoding='utf-8') as alldata:
        for line_no, line in enumerate(alldata):
            tokens = gensim.utils.to_unicode(line).split()
            words = tokens[1:]
            tags = [line_no]  # `tags = [tokens[0]]` would also work at extra memory cost
            split = ['train', 'test', 'extra', 'extra'][line_no // 25000]  # 25k train, 25k test, 25k extra
            sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no // 12500]  # [12.5K pos, 12.5K neg]*2 then unknown
            alldocs.append(SentimentDocument(words, tags, split, sentiment))
    train_docs = [doc for doc in alldocs if doc.split == 'train']
    test_docs = [doc for doc in alldocs if doc.split == 'test']
    doc_list = alldocs[:]  # for reshuffling per pass
    print('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))
    return train_docs, test_docs, doc_list
train_docs, test_docs, doc_list = read_sentimentDocs()
small_corpus = train_docs[:15000]
large_corpus = train_docs + test_docs
print len(train_docs), len(test_docs), len(doc_list), len(small_corpus), len(large_corpus)
Here we train two Doc2vec models; you can choose the parameters yourself. We train model1 on 15k documents and model2 on 50k documents, but you should mix some of the 15k documents used for model1 into model2's training data, as discussed above.
In [ ]:
# Due to limited computing resources, this cell was not run in the notebook.
# You can train the models on a server and save them to disk.
import multiprocessing
from random import shuffle
cores = multiprocessing.cpu_count()
model1 = Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores)
model2 = Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores)
small_train_docs = train_docs[:15000]
# train for small corpus
model1.build_vocab(small_train_docs)
for epoch in xrange(50):
    shuffle(small_train_docs)
    model1.train(small_train_docs, total_examples=len(small_train_docs), epochs=1)
model1.save("small_doc_15000_iter50.bin")
large_train_docs = train_docs + test_docs
# train for large corpus
model2.build_vocab(large_train_docs)
for epoch in xrange(50):
    shuffle(large_train_docs)
    model2.train(large_train_docs, total_examples=len(large_train_docs), epochs=1)
# save the model
model2.save("large_doc_50000_iter50.bin")
For the IMDB dataset, we train a classifier on the 25k training documents, which carry positive and negative labels, and then use this classifier to predict the test data, to see what accuracy the document vectors learned by the different methods can achieve.
In [3]:
import os
import numpy as np
from sklearn.linear_model import LogisticRegression
def test_classifier_error(train, train_label, test, test_label):
    classifier = LogisticRegression()
    classifier.fit(train, train_label)
    score = classifier.score(test, test_label)
    print "the classifier score :", score
    return score
For experiment one, we use the vectors learned by the Doc2vec method. To evaluate those document vectors, we split the 50k documents into two parts, one for training and the other for testing.
In [4]:
#you can change the data folder
basedir = "/home/robotcator/doc2vec"
model2 = Doc2Vec.load(os.path.join(basedir, "large_doc_50000_iter50.bin"))
m2 = []
for i in range(len(large_corpus)):
    m2.append(model2.docvecs[large_corpus[i].tags])
train_array = np.zeros((25000, 100))
train_label = np.zeros((25000, 1))
test_array = np.zeros((25000, 100))
test_label = np.zeros((25000, 1))
for i in range(12500):
    train_array[i] = m2[i]
    train_label[i] = 1
    train_array[i + 12500] = m2[i + 12500]
    train_label[i + 12500] = 0
    test_array[i] = m2[i + 25000]
    test_label[i] = 1
    test_array[i + 12500] = m2[i + 37500]
    test_label[i + 12500] = 0
print "The vectors are learned by doc2vec method"
test_classifier_error(train_array, train_label, test_array, test_label)
Out[4]:
For experiment two, the document vectors are learned by the back-mapping method, which learns a linear mapping between model1 and model2. Just as the translation matrix is used for word translation, if we have the vector of one of the additional 35k documents in model2, we can infer its vector in model1.
In [5]:
from gensim.models import translation_matrix
# you can change the data folder
basedir = "/home/robotcator/doc2vec"
model1 = Doc2Vec.load(os.path.join(basedir, "small_doc_15000_iter50.bin"))
model2 = Doc2Vec.load(os.path.join(basedir, "large_doc_50000_iter50.bin"))
l = model1.docvecs.count
l2 = model2.docvecs.count
m1 = np.array([model1.docvecs[large_corpus[i].tags].flatten() for i in range(l)])
# learn the mapping between the two models
model = translation_matrix.BackMappingTranslationMatrix(large_corpus[:15000], model1, model2)
model.train(large_corpus[:15000])
for i in range(l, l2):
    infered_vec = model.infer_vector(model2.docvecs[large_corpus[i].tags])
    m1 = np.vstack((m1, infered_vec.flatten()))
train_array = np.zeros((25000, 100))
train_label = np.zeros((25000, 1))
test_array = np.zeros((25000, 100))
test_label = np.zeros((25000, 1))
# of the 50k documents, 25k carry a positive label and 25k a negative label
for i in range(12500):
    train_array[i] = m1[i]
    train_label[i] = 1
    train_array[i + 12500] = m1[i + 12500]
    train_label[i + 12500] = 0
    test_array[i] = m1[i + 25000]
    test_label[i] = 1
    test_array[i + 12500] = m1[i + 37500]
    test_label[i + 12500] = 0
print "The vectors are learned by back-mapping method"
test_classifier_error(train_array, train_label, test_array, test_label)
Out[5]:
As we can see, the vectors learned by the back-mapping method perform reasonably well, but there is still room for improvement.
In [6]:
from sklearn.decomposition import PCA
import plotly
from plotly.graph_objs import Scatter, Layout, Figure
plotly.offline.init_notebook_mode(connected=True)
m1_part = m1[14995: 15000]
m2_part = m2[14995: 15000]
m1_part = np.array(m1_part).reshape(len(m1_part), 100)
m2_part = np.array(m2_part).reshape(len(m2_part), 100)
pca = PCA(n_components=2)
reduced_vec1 = pca.fit_transform(m1_part)
reduced_vec2 = pca.fit_transform(m2_part)
In [7]:
trace1 = Scatter(
x = reduced_vec1[:, 0],
y = reduced_vec1[:, 1],
mode = 'markers+text',
text = ['doc' + str(i) for i in range(len(reduced_vec1))],
textposition = 'top'
)
trace2 = Scatter(
x = reduced_vec2[:, 0],
y = reduced_vec2[:, 1],
mode = 'markers+text',
text = ['doc' + str(i) for i in range(len(reduced_vec1))],
textposition ='top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='doc_vec_vis')
In [12]:
m1_part = m1[14995: 15002]
m2_part = m2[14995: 15002]
m1_part = np.array(m1_part).reshape(len(m1_part), 100)
m2_part = np.array(m2_part).reshape(len(m2_part), 100)
pca = PCA(n_components=2)
reduced_vec1 = pca.fit_transform(m1_part)
reduced_vec2 = pca.fit_transform(m2_part)
trace1 = Scatter(
x = reduced_vec1[:, 0],
y = reduced_vec1[:, 1],
mode = 'markers+text',
text = ['sdoc' + str(i) for i in range(len(reduced_vec1))],
textposition = 'top'
)
trace2 = Scatter(
x = reduced_vec2[:, 0],
y = reduced_vec2[:, 1],
mode = 'markers+text',
text = ['tdoc' + str(i) for i in range(len(reduced_vec1))],
textposition ='top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='doc_vec_vis')
You will probably see points in two different colors, one set for model1 and one for model2. For model1, the document vectors sdoc0 to sdoc4 were learned by Doc2vec, while sdoc5 and sdoc6 were learned by back-mapping; for model2, tdoc0 to tdoc6 were all learned by Doc2vec. We can see that some of the points learned by the back-mapping method still keep their relative positions with respect to the points learned by Doc2vec.