In [40]:
import matplotlib.pyplot as plt
%matplotlib inline
In [41]:
from gensim.models.word2vec import Word2Vec
In this section, you decide which model, trained on which corpus, you want to use for your analysis. By default, it is the Revue des Deux Mondes corpus over the years 1820-1900, split into 20-year slices. We also tested 10-year slices, which are clearly much less effective due to the small size of the French corpora. After loading, set the path to your directory of models, then select your time span and step in the "year in range" expression.
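For instance, if you had trained 10-year models and stored them in a directory of your own, the loading cell below could be adapted as follows. This is only a sketch: the directory name is a placeholder, and the filenames are assumed to follow the same year-based naming as in the actual cell.

from collections import OrderedDict
# Hypothetical variant: 10-year slices loaded from a user-supplied directory.
# MODEL_DIR is a placeholder; models are assumed to be named <start_year>.bin.
MODEL_DIR = '/path/to/your/models'
models_10y = OrderedDict([
    (year, Word2Vec.load('{}/{}.bin'.format(MODEL_DIR, year)))
    for year in range(1820, 1900, 10)
])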
In [42]:
from collections import OrderedDict
models = OrderedDict([
    (year, Word2Vec.load('/home/odysseus/Téléchargements/hist-vec-master/r2m/LemCorpR2M_models/{}.bin'.format(year)))
    for year in range(1820, 1900, 20)
])
In [43]:
def cosine_series(anchor, query):
    # Cosine similarity between `anchor` and `query` in each time slice;
    # 0 when `query` is missing from that slice's vocabulary.
    series = OrderedDict()
    for year, model in models.items():
        series[year] = (
            model.similarity(anchor, query)
            if query in model else 0
        )
    return series
In [44]:
import numpy as np
import statsmodels.api as sm

def lin_reg(series):
    # Ordinary least squares of similarity against year:
    # fit.params[1] is the slope, fit.pvalues[1] its p-value.
    x = np.array(list(series.keys()))
    y = np.array(list(series.values()))
    x = sm.add_constant(x)
    return sm.OLS(y, x).fit()
In [45]:
def plot_cosine_series(anchor, query, w=8, h=4):
    series = cosine_series(anchor, query)
    fit = lin_reg(series)
    x1 = list(series.keys())[0]
    x2 = list(series.keys())[-1]
    y1 = fit.predict()[0]
    y2 = fit.predict()[-1]
    print(query)
    plt.figure(figsize=(w, h))
    plt.ylim(0, 1)
    plt.title(query)
    plt.xlabel('Year')
    plt.ylabel('Similarity')
    plt.plot(list(series.keys()), list(series.values()))
    plt.plot([x1, x2], [y1, y2], color='gray', linewidth=0.5)
    plt.show()
The cell below, which calls the functions defined above, shows how the similarity between two words evolves through time, using cosine similarity. Here, we show how a list of selected concepts evolves relative to "littérature". You can manually change both the anchor word and the list below.
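For a single pair of words, a minimal call looks like this ("littérature" and "science" are just an example pair; any two words present in the models' vocabularies will do):

# Plot the similarity between one anchor word and one query word across the slices.
plot_cosine_series('littérature', 'science')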
In [46]:
testList = ('littérature', 'poésie', 'science', 'savoir', 'histoire', 'philosophie', 'lettre', 'critique',
            'roman', 'théâtre', 'drame', 'esprit', 'langue', 'diplomatie', 'politique', 'morale', 'société',
            'pouvoir', 'théologie', 'droit', 'loi', 'méthode', 'génie', 'romantisme', 'réalisme', 'symbolisme',
            'naturalisme')
for idx, val in enumerate(testList):
    if idx > 0:  # skip index 0, 'littérature' itself
        plot_cosine_series('littérature', val)
The next two cells retrieve the 200 terms most similar to a specific term, here "littérature", from the trained models.
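As a reminder of what gensim returns here: most_similar yields (word, score) pairs sorted by decreasing cosine similarity, which is why only the first element of each pair is kept below. A quick check on one of the loaded models might look like this (the year 1820 is simply one of the slices loaded above):

# Inspect the five nearest neighbours of 'littérature' in the 1820 slice:
# each entry is a (word, cosine_similarity) tuple.
for word, score in models[1820].most_similar('littérature', topn=5):
    print(word, round(score, 3))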
In [47]:
def union_neighbor_vocab(anchor, topn=200):
    # Union of the `topn` nearest neighbours of `anchor` across all time slices.
    vocab = set()
    for year, model in models.items():
        similar = model.most_similar(anchor, topn=topn)
        vocab.update([s[0] for s in similar])
    return vocab
At this point, we do the same thing as above and compute, for each token among the 200 nearest terms to the main entry, the evolution of its proximity to that entry and the significance of this trend. Significance is assessed with the p-value: below a certain threshold (0.05), there is a strong likelihood that the trend is real and significant.
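Concretely, for a single candidate term the slope and its p-value can be read directly off the regression fit, as in this minimal sketch ("poésie" is just an example neighbour):

# Slope of the similarity trend and its p-value for one (anchor, neighbour) pair.
fit = lin_reg(cosine_series('littérature', 'poésie'))
slope, p = fit.params[1], fit.pvalues[1]
print(slope, p, 'significant' if p < 0.05 else 'not significant')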
In [48]:
entries = {}
for word in testList:
    data = []
    for token in union_neighbor_vocab(word):
        series = cosine_series(word, token)
        fit = lin_reg(series)
        if fit.pvalues[1] < 0.05:  # keep only statistically significant trends
            data.append((token, fit.params[1], fit.pvalues[1]))
    entries[word] = data
In this part, we want to show which terms increasingly emerge alongside the main entry, that is to say each word of the given test list. The "slope" measures how quickly the similarity grows over time, and the "p" value how reliable that trend is. Here, the main significant emergence alongside "littérature" is "humanisme". All terms appear to be significant, except "fédéralisme", "welschinger", "maniere", "hennet", "réapparition", "deffence", "bourgin", "colonie", "naturalisme", "réalisme", "sillery", "gréco", "compétence", "symbolisme", "catholique", "japonais", "manuel", "romand", "topographie", "organisme", "prédominance". That is to say, those terms may be among the nearest neighbours, but statistically their trend is not significant enough to be sure, while the others are more certain.
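Without the pandas display used further below, the emerging terms for one entry can also be listed directly from the entries dictionary, sorted by decreasing slope (a minimal sketch):

# Ten terms whose similarity to 'littérature' grows fastest over time.
for token, slope, p in sorted(entries['littérature'], key=lambda t: t[1], reverse=True)[:10]:
    print(token, round(slope, 4), round(p, 4))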
In the following cell, we show how the ten vectors with the steepest upward trend evolve through time relative to the words in the test list: "humanisme", for example, seems to be very rare before 1860 and then becomes more and more similar to "littérature". These are the terms that were not similar at the beginning but tend to become more and more related to "littérature". Keep in mind the p-value associated with each vector.
In [49]:
import pandas as pd
from IPython.display import Markdown, display

pd.set_option('display.max_rows', 1000)
for word in testList:
    display(Markdown("### <i><b>" + word + "</b></i>"))
    df1 = pd.DataFrame(entries[word], columns=('token', 'slope', 'p'))
    print(df1.sort_values('slope', ascending=False).head(10))
    print('\n\n')
    for i, row in df1.sort_values('slope', ascending=False).head(10).iterrows():
        plot_cosine_series(word, row['token'], 8, 4)
The same process applies here: we want to see which terms tend to dissociate themselves from "littérature" (the default entry, which you can change to any word present in the trained models). Again, check the p-values: "transplantation", "choeur" and "philé" are not considered significant, while "chaldéen" is, as are "destination", "morceau", etc. That some of these terms are less significant is logical: the rarer a term, the more erratic its similarity series tends to be.
In [50]:
for word in testList:
    display(Markdown("### <i><b>" + word + "</b></i>"))
    df2 = pd.DataFrame(entries[word], columns=('token', 'slope', 'p'))
    print(df2.sort_values('slope', ascending=True).head(10))
    print('\n\n')
    for i, row in df2.sort_values('slope', ascending=True).head(10).iterrows():
        plot_cosine_series(word, row['token'], 8, 4)
In this part, we show which significant terms remain, throughout time, among the nearest neighbours of the main entry. That is to say, these vectors follow the same evolution through time as the main entry and stay very close to the "littérature" vector in every slice. At this stage, we still only keep significant terms (the filter above: "if fit.pvalues[1] < 0.05").
In [51]:
def intersect_neighbor_vocab(anchor, topn=2000):
    # Terms present among the `topn` nearest neighbours of `anchor` in every time slice.
    vocabs = []
    for year, model in models.items():
        similar = model.most_similar(anchor, topn=topn)
        vocabs.append(set([s[0] for s in similar]))
    return set.intersection(*vocabs)
In [52]:
entries = {}
for word in testList:
    data = []
    for token in intersect_neighbor_vocab(word):
        series = cosine_series(word, token)
        fit = lin_reg(series)
        if fit.pvalues[1] < 0.05:
            data.append((token, fit.params[1], fit.pvalues[1]))
    entries[word] = data
In [53]:
import pandas as pd
In [54]:
from IPython.display import Markdown, display

for word in testList:
    display(Markdown("### <i><b>" + word + "</b></i>"))
    df3 = pd.DataFrame(entries[word], columns=('token', 'slope', 'p'))
    print(df3.sort_values('slope', ascending=False).head(10))
    print('\n\n')
    for i, row in df3.sort_values('slope', ascending=False).head(10).iterrows():
        plot_cosine_series(word, row['token'], 8, 4)