Unsupervised models in natural language processing (NLP) have a long history but have recently become very popular. Word2vec, GloVe, LSI and LDA provide powerful computational tools to deal with natural language and make exploring and modelling large document collections feasible.
Often evaluating the model output requires an existing understanding of what should come out. For topic models the output should reflect our understanding of the relatedness of topical categories, for instance sports, travel or machine learning. Distributional models of language such as word2vec and GloVe should capture some, or ideally all, of the semantics of how language is used.
This is a lot to ask! Not necessarily because it isn't learnable, after all we've learned it, but because we are not necessarily able to represent the desired output as an evaluation function and a data set that can be optimised. As an example, topic models are often evaluated with respect to the semantic coherence of the topics, based on a set of top words from each topic. It is not clear whether a set of words such as {cat, dog, horse, pet} fully captures the semantics of animalness or petsiness. Nevertheless these methods are useful for determining whether the distributed word representations capture some of the information conveyed by words and whether a topic model is understandable to a human.
This notebook explores a number of these issues in context and aims to provide an overview of the research that has been done in the past 10 or so years, mostly focusing on topic models.
The notebook is split into three parts.
Random collection of other stuff: while preparing the talk and the notebook I experimented with a lot of different software packages and corpora. These are dumped as a somewhat unorganised collection of "other things" at the end of the notebook.
We would like to be able to say whether a model is objectively good or bad, and to compare different models to each other. This requires us to have an objective measure for the quality of the model, but many of the tasks mentioned above require subjective evaluation.
In practical applications one needs to evaluate whether "the correct thing" has been learned; often this means applying implicit knowledge and "eye-balling". Documents that talk about football should be in the same category, and cat is more similar to dog than to pen. Ideally this information should be captured in a single metric that can be maximised. It is not clear how to formulate such a metric, however. Over the years there have been numerous attempts, from various different angles, at formulating semantic coherence; none captures the desired outcome fully, and there are numerous issues to be aware of when applying those metrics.
Some of the issues relate to the metrics themselves, while others relate to external factors, like what kind of held-out data to use. Natural language is messy, ambiguous and full of interpretation; that's where a lot of its expressive richness comes from. Sometimes trying to cleanse the ambiguity also reduces language to an unnatural form.
Topic models aim to capture the way in which words co-occur in the context of a document and divide the source corpus into some number of (soft) clusters. There are a large number of variations on the topic model: initial work was done by Deerwester in developing Latent Semantic Analysis (LSA/LSI), and the canonical example nowadays is Latent Dirichlet Allocation (LDA). The unit upon which topic models work is a sparse document-term matrix, depicted below.
Each row is a document, each column is a term and the entries in each cell usually represent the frequency of each term in each document.
In [127]:
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
documents = ['The cat sat on the mat', 'A red cat sat on the mat', 'No cat sat on the mat']
vectoriser = CountVectorizer().fit(documents)
X = vectoriser.transform(documents)
pd.DataFrame(X.A, columns=sorted(vectoriser.vocabulary_.keys(), key=lambda k: vectoriser.vocabulary_[k]))
Out[127]:
In the case of LDA a Bayesian model is fitted using the data from this matrix. Each topic in the model becomes a probability distribution over terms (the columns). Conceptually this is saying that semantic concepts can be represented as probabilities over a set of words. This makes sense, as the topic of discussion acts as a limiting factor on the vocabulary one is likely to use, hear or read in the context of that discussion.
Words relating to political campaigning are much less likely to be observed in documents that discuss ice hockey. Notice, however, that unlikely does not mean impossible: it is not that it can never happen, it is simply statistically less likely that caucus or polling will appear in a document that otherwise discusses Teemu Selänne's retirement. A topic therefore is a probability distribution over the entire vocabulary, indicating how likely each word is to occur within that topic.
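To make the "distribution over terms" idea concrete, here is a small illustrative sketch (my own addition, with an arbitrary choice of two topics) that fits scikit-learn's LDA implementation on the toy document-term matrix above and normalises each topic into P(term | topic):

```python
# A minimal sketch: fit a tiny LDA on the toy document-term matrix X from above
# and normalise each topic into a probability distribution over the vocabulary.
from sklearn.decomposition import LatentDirichletAllocation

toy_lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
# components_ holds unnormalised topic-term weights; dividing each row by its
# sum gives P(term | topic)
topic_term = toy_lda.components_ / toy_lda.components_.sum(axis=1, keepdims=True)
pd.DataFrame(topic_term,
             columns=sorted(vectoriser.vocabulary_, key=vectoriser.vocabulary_.get))
```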
The documents the model is built over can be as short as a single sentence (a tweet) or as long as a chapter in a book. Typically very short documents tend to be more difficult to build coherent models over than slightly longer documents.
In [4]:
import pandas as pd
df_fake = pd.read_csv('/usr/local/scratch/data/kaggle/fake.csv')
df_fake[['title', 'text', 'language']].head()
Out[4]:
In [5]:
import numpy as np
df_fake = df_fake.loc[(pd.notnull(df_fake.text)) & (df_fake.language == 'english')]
df_fake.shape
Out[5]:
There is a total of 12,357 non-empty English language documents, which should be enough to build a model. Let's parse the documents using spacy, getting rid of some non-content words, and feed the result into gensim. I'll use `gensim.corpora.MmCorpus` to serialise the text onto disk; this both saves memory and allows random access to the corpus, which will become useful later for creating different splits of the data.
In [4]:
import spacy
import gensim
from gensim.models import LdaModel
from gensim.corpora import Dictionary, MmCorpus
spc = spacy.load('en')
KEEP_POS = set([90, 98, 82, 84, 94]) # NOUN, VERB, ADJ, ADV, PROPN
pipe = spc.pipe(df_fake.text, parse=False, entity=False, n_threads=8)
processed = [[token.lemma_ for token in document if token.pos in KEEP_POS] for document in pipe]
vocabulary = Dictionary(processed)
vocabulary.filter_extremes(no_below=3, no_above=0.5)
In [133]:
MmCorpus.serialize('./fake_news.mm', (vocabulary.doc2bow(doc) for doc in processed))
In [8]:
vocabulary.save('./fake_news.vocab')
In [ ]:
del processed
In [5]:
corpus_fake = MmCorpus('./fake_news.mm')
In [137]:
lda_fake = LdaModel(corpus=corpus_fake, id2word=vocabulary, num_topics=35, chunksize=1500, iterations=200, alpha='auto')
Inspecting the top six words from each topic we can certainly identify some structure. There are topics about the Flint, Michigan water crisis (Topic 11), the Dakota Access Pipeline protests (Topic 9) and the US elections.
In [7]:
pd.DataFrame([[word for rank, (word, prob) in enumerate(words)]
for topic_id, words in lda_fake.show_topics(formatted=False, num_words=6, num_topics=35)])
Out[7]:
In [9]:
lda_fake.save('./fake_news_35.lda')
The visualisation in the Termite paper looks very promising, but I've been unable to run the code. The original project has been split into two separate projects: a data server and a visualisation client. Unfortunately the data server uses an unknown data format in SQLite databases, the host server where the data sets ought to be is no longer operational, and the project hasn't been maintained since 2014. The project also relies on web2py, which at the moment only supports Python 2, and there doesn't seem to be any interest in porting it to Python 3.
Anyhow, it does seem to be possible to run the project under a Python 2 environment, with a few modifications:
- `read_gensim.py`: add a `--sentence-splitter` command line argument
- `bin/apps/SplitSentences.py`: add an extra parameter for the sentence splitter jar location
- `gensim` API breaking changes:
  - `bin/readers/GensimReader.py` line 47: `ldamodel.show_topics`
  - `bin/readers/GensimReader.py` line 51: the topic/term distribution does not need `enumerate` anymore
  - `bin/readers/GensimReader.py` line 52: swap `term` and `value` around - they are the wrong way around
- `termite` makes a lot of assumptions about paths, so one needs to be quite careful about what the root directory is when running the commands
In [22]:
import sys
sys.path.append('/home/matti/termite-data-server/bin/')
from modellers import GensimLDA
In [29]:
import re
df_fake['text_oneline'] = df_fake.text.apply(lambda s: re.sub(r'\s+', ' ', str(s)))
In [30]:
df_fake[['uuid', 'text_oneline']].to_csv('./fakenews.termite.tsv', sep='\t', header=False, index=False)
In [82]:
py27 = '/home/matti/miniconda3/envs/py27/bin/python'
termite_server_root = '/home/matti/termite-data-server/'
First we need to import the corpus into termite's own special SQLite format.
In [76]:
!mkdir termite;\
cp ./fakenews.termite.tsv ./termite;
cd termite; $py27 /home/matti/termite-data-server/bin/import_corpus.py ./db ./fakenews.termite.tsv
Then we need to export that SQLite DB back into a text corpus; there are some magic file names and path structures involved here, so you can't just use the original file.
In [79]:
!cd termite; mkdir corpus; $py27 /home/matti/termite-data-server/bin/export_corpus.py ./db ./corpus/corpus.txt
Then train the LDA model; it should be possible to skip this and just use any model trained with gensim.
In [81]:
%capture
!cd termite; $py27 /home/matti/termite-data-server/bin/train_gensim.py --overwrite ./corpus ./models/
Finally, read the trained gensim LDA model into termite, creating all the necessary data structures for the visualisations to work. This computes, among other things, term collocations ($N^2$), so it's going to take a while to run, especially for large vocabularies.
If you set all the paths consistently during the previous steps, this should just work. If not, it's likely there will be some FileNotFound errors.
In [88]:
%capture
!cd termite; cp -r $termite_server_root/tools ./; $py27 /home/matti/termite-data-server/bin/read_gensim.py --overwrite\
--sentence-split /home/matti/termite-data-server/utils/corenlp/SentenceSplitter.jar\
gensim_termite ./models/ ./corpus ./db
To start the server and see the visualisations:
In [91]:
!$py27 $termite_server_root/web2py/web2py.py
Some of the work from Termite has been integrated into pyLDAvis, which is being maintained and has good interoperability with gensim. Below is an interactive visualisation of the fake news model trained earlier. Just to see how informative the visualisation is overall, I'll also train another model on the same dataset but increase the number of topics quite a lot.
For a good description of what you see in the visualisation you can look at the presentation from the creator himself.
In [6]:
lda_fake = LdaModel.load('./fake_news_35.lda')
In [15]:
from gensim.models import LdaModel
import pyLDAvis as ldavis
import pyLDAvis.gensim
ldavis.enable_notebook()
prepared_data = ldavis.gensim.prepare(lda_fake, corpus_fake, vocabulary)
with open('./fake_news_35.lda-LDAVIS.json', 'w') as fh:
fh.write(prepared_data.to_json())
prepared_data
Out[15]:
In [6]:
lda_fake_100 = LdaModel(corpus=corpus_fake, id2word=vocabulary, num_topics=100, alpha='auto')
In [8]:
lda_fake_100.save('./fake_news_100.lda')
In [10]:
prepared_data = ldavis.gensim.prepare(lda_fake_100, corpus_fake, vocabulary)
with open('./fake_news_100.lda-LDAVIS.json', 'w') as fh:
fh.write(prepared_data.to_json())
In [14]:
prepared_data
Out[14]:
Comparing the two visualisations one can make some comforting observations. Towards the bottom of both visualisations there is a cluster of topics relating to the 2016 U.S. presidential election. The 100 topic model has split the documents up in slightly more specific terms, but otherwise both models have captured those semantics and, more importantly, both visualisations display those topics consistently in a cluster.
Similarly, the cluster in the top right hand corner of the 100 topic model's visualisation is semantically coherent and similar to the cluster in the bottom left hand corner of the 35 topic model's visualisation. Again both models have captured the Syrian civil war and related issues, and consistently placed those topics close together in the topic panel.
The main problem I find with LDAvis is that the spatial dimensions in the left hand panel are somewhat meaningless.
The area of each circle shows the prevalence of a topic, but visually determining the relative sizes of circles is difficult, so while you do get an understanding of which topics are the most important, you can't really determine how much more important those topics are compared to the others.
The second problem is the distance between the topics. While the positioning of the topics to some extent preserves semantic similarity, allowing some related topics to form clusters, it is a little difficult to determine exactly how similar the topics are. To be fair, this is not something that can be blamed on LDAvis, as measuring the semantic similarity of topics and then collapsing the multidimensional similarity vectors into 2 dimensions is not an easy task. Nevertheless, one shouldn't read too much into the topic distances. There are different algorithms for computing the topic locations, essentially different ways of doing the multidimensional scaling.
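For what it's worth, pyLDAvis lets you swap the layout algorithm through its `mds` argument ('pcoa', 'mmds' or 'tsne'); a small sketch below, assuming the model, corpus and vocabulary from above are still loaded, and picking 'tsne' purely as an example.

```python
# Same visualisation as above, but laying the topics out with t-SNE instead of
# the default principal coordinate analysis; 'mmds' is another available option.
prepared_tsne = ldavis.gensim.prepare(lda_fake, corpus_fake, vocabulary, mds='tsne')
prepared_tsne
```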
Perplexity is often used as an example of an intrinsic evaluation measure. It comes from the language modelling community and aims to capture how surprised a model is by new data it has not seen before. This is commonly measured as the normalised log-likelihood of a held out test set:

$$
\begin{align}
\mathcal{L}(D') &= \frac{\sum_{d \in D'} \log_2 p(w_d; \Theta)}{\text{count of tokens}} \\
\text{perplexity}(D') &= 2^{-\mathcal{L}(D')}
\end{align}
$$

Focussing on the log-likelihood part, this metric measures how probable some new unseen data is given the model that was learned earlier. That is to say, how well does the model represent or reproduce the statistics of the held-out data.
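As a rough sketch of how this can be computed in practice with gensim (the 90/10 split and the fresh 35-topic model here are my own illustrative assumptions), `LdaModel.log_perplexity` returns the per-word likelihood bound of a held-out chunk:

```python
# Hold out the last 10% of documents, train on the rest and compute the
# normalised log-likelihood bound of the held-out part.
split = int(0.9 * len(corpus_fake))
training, held_out = corpus_fake[:split], corpus_fake[split:]

lda_train = LdaModel(corpus=training, id2word=vocabulary, num_topics=35)
log_likelihood = lda_train.log_perplexity(held_out)  # per-word log2 likelihood bound
perplexity = 2 ** (-log_likelihood)
```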
Thinking back to what we would like the topic model to do, this makes no sense at all. Let's put aside any specific algorithm for inferring a topic model and focus on what it is that we'd like the model to capture. More often than not the desire is for the model to capture concepts that exist in a particular dataset. Well, what is a concept and how can it be represented given the pieces we have?
Let me offer a way of thinking about this that would not pass muster in a bachelor's class in philosophy. Luckily we're not in a philosophy class at the moment.
Take the following two documents that talk about ice hockey. I've highlighted terms that I think are related to the subject matter; you may disagree with my judgement. Notice that among the terms that I've highlighted as being part of the topic of Ice Hockey are words such as "Penguin", "opposing" and "shots". None of these, on the face of it, would appear to "belong" to Ice Hockey, but seeing them in context makes it clear that "Penguin" refers to the ice hockey team, "shots" refers to disk shaped pieces of vulcanised rubber being launched at the goal at various speeds, and "opposing" refers to the opposing team, although it might more commonly be thought to belong to politics or the debate club.
... began his professional career in 1989–90 with Jokerit of the SM-liiga and played 21 seasons in the National Hockey League (NHL) for the Winnipeg Jets ...
Rinne stopped 27 of 28 shots from the Penguins in Game 6 at home Sunday, but that lone goal allowed was enough for the opposition to break out the Stanley Cup trophy for the second straight season.
Given the terms that I've determined to be a partial description of Ice Hockey (the concept), one could conceivably measure the coherence of that concept by counting how many times those terms occur with each other - co-occur that is - in some sufficiently large reference corpus.
One of course encounters a problem should the reference corpus never refer to ice hockey. A poorly selected reference corpus could, for instance, be patent applications from the 1800s; it would be unlikely to find those word pairs in that text.
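As a rough sketch of that idea (the handful of hockey-ish terms below are my own picks, I'm assuming the lemmatised `processed` documents from the preprocessing step are still in memory, and the fake news corpus stands in for a proper reference corpus):

```python
# Count in how many documents each pair of concept terms appears together.
from itertools import combinations

concept = ['hockey', 'goal', 'team', 'season', 'player']
doc_sets = [set(doc) for doc in processed]
for w1, w2 in combinations(concept, 2):
    together = sum(1 for d in doc_sets if w1 in d and w2 in d)
    print(f'{w1} & {w2}: co-occur in {together} documents')
```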
This is precisely what several research papers have aimed to do: take the top words from the topics in a topic model and measure the support for those words forming a coherent concept/topic by looking at the co-occurrences of those terms in a reference corpus. The research up to now was finally wrapped up into a single paper where the authors develop a coherence pipeline, which allows plugging all the different methods into a single framework. This coherence pipeline is partially implemented in `gensim`; below are a few examples of how to use it.
In [6]:
import spacy
import gensim
from gensim.models import LdaModel
from gensim.corpora import Dictionary, MmCorpus
spc = spacy.load('en')
KEEP_POS = set([90, 98, 82, 84, 94]) # NOUN, VERB, ADJ, ADV, PROPN
pipe = spc.pipe(df_fake.text, parse=False, entity=False, n_threads=8)
processed = [[token.lemma_ for token in document if token.pos in KEEP_POS] for document in pipe]
vocabulary = Dictionary(processed)
vocabulary.filter_extremes(no_below=3, no_above=0.5)
In [10]:
corpus = MmCorpus('./models/fake_news.mm')
In [14]:
lda_fake_35 = LdaModel.load('./models/fake_news_35.lda')
lda_fake_100 = LdaModel.load('./models/fake_news_100.lda')
In [11]:
from gensim.models import CoherenceModel
cm = CoherenceModel(model=lda_fake_35, corpus=corpus,
dictionary=vocabulary, coherence='c_v',
texts=[[w for w in d if w in vocabulary.token2id] for d in processed])
cm.get_coherence()
Out[11]:
In [19]:
cm = CoherenceModel(model=lda_fake_100, corpus=corpus,
dictionary=vocabulary, coherence='c_v',
texts=[[w for w in d if w in vocabulary.token2id] for d in processed])
cm.get_coherence()
Out[19]:
In [12]:
cm = CoherenceModel(model=lda_fake_35, corpus=corpus,
dictionary=vocabulary, coherence='c_uci',
texts=[[w for w in d if w in vocabulary.token2id] for d in processed])
cm.get_coherence()
Out[12]:
In [15]:
cm = CoherenceModel(model=lda_fake_100, corpus=corpus,
dictionary=vocabulary, coherence='c_uci',
texts=[[w for w in d if w in vocabulary.token2id] for d in processed])
cm.get_coherence()
Out[15]:
In [20]:
cm = CoherenceModel(model=lda_fake_35, corpus=corpus,
dictionary=vocabulary, coherence='u_mass',
texts=[[w for w in d if w in vocabulary.token2id] for d in processed])
cm.get_coherence()
Out[20]:
In [21]:
cm = CoherenceModel(model=lda_fake_100, corpus=corpus,
dictionary=vocabulary, coherence='u_mass',
texts=[[w for w in d if w in vocabulary.token2id] for d in processed])
cm.get_coherence()
Out[21]:
Röder et al., Exploring the Space of Topic Coherence Methods, Web Search and Data Mining 2015
Sievert et al., LDAvis: A method for visualizing and interpreting topics, ACL 2014 Workshop on Interactive Language Learning, Visualization, and Interfaces
Chuang et al., Termite: Visualization Techniques for Assessing Textual Topic Models, AVI 2012
The model used in this notebook is built on the Kaggle Fake News dataset available here.
I am going to start with a slightly silly example that nonetheless nicely illustrates a few important points about evaluating unsupervised models:

- define what you want out of the model
- apply subjective judgement
I did some analysis on the accepted talks to PyData Berlin 2017 to find out what kind of talks were accepted this year. I plotted the results in a wordcloud (github.com/amueller/word_cloud), but was disappointed that the first approach didn't really reveal the thing I was hoping to analyse. The plot just showed general patterns of English language use.
Filtering out high frequency words helped a little bit, but the wordcloud still wasn't that informative; it is hardly a surprise that data is a central theme at a PyData conference.
So I made some more adjustments to the model and got something that looks more reasonable.
I am not trying to claim that the last model is a good one, or even a valid one, but it does correspond to my previously held beliefs about the contents of the conference. It is not surprising that this is the case, since I arrived at the model by iterating through a number of models that I found unsatisfactory. The problem is that I never actually defined what satisfactory means; there was never an explicitly stated goal towards which I was driving.
This is extremely important to keep in mind, as evaluation metrics for unsupervised models often have built-in assumptions about what a good model looks like. Those assumptions may or may not be true for your use case. Some metrics aim to satisfy internal constraints, while others measure performance against an external task.
So let's start with what we would like to model about text in an unsupervised manner.
I will focus on evaluating topic models and models of distributional semantics.
open access research papers
data visualisation is not my core research area
In [104]:
import numpy as np
import scipy
from matplotlib import pyplot as plt
fig, ax = plt.subplots(figsize=(1000/72, 750/72), dpi=72)
topics = ['Sports', 'Machine Learning', 'Celebrity', 'Fashion', 'Current Affairs', 'Tennis', 'Medicine', 'Technology', 'Security']
centers = np.random.randint(low=0, high=20, size=(len(topics), 2))  # random centres for the mock topic clusters
for topic_name, center in zip(topics, centers):
topic = np.random.normal(loc=center, scale=1.0, size=(10, 2))
dots = ax.scatter(topic[:, 0], topic[:, 1], alpha=0.4)
bbox_props = dict(boxstyle="circle, pad=0.3", fc=dots.get_facecolor().ravel(), ec="none", alpha=0.1, lw=1)
t = ax.text(*topic.mean(axis=0), topic_name, ha="center", va="center", rotation=0,
size=16, bbox=bbox_props)
plt.axis("off");
plt.show()
# plt.savefig('../assets/unsupervised-models/ideal-topics.png')
This is what could be called a coherent, interpretable model.
The problem here is that the "model" above is entirely made up, and the division is somewhat nonsensical.
Topic models can be evaluated in a number of ways, including:
Perplexity is a metric for goodness of fit; it measures the log likelihood of held-out data.

$$ 2^{-\sum_x \tilde p(x)\, \log_2 p(x)} $$

The aim is to capture how well the current estimated probability of words predicts the probability of words in a held-out dataset. This measure is used internally by topic models to measure the progress of learning the topics. It is not suitable for human evaluation, as a model with low perplexity does not necessarily correspond to a model that is interpretable or informative (Reading Tea Leaves: How Humans Interpret Topic Models).
There is a review of internal evaluation measures in Wallach et al., Evaluation methods for topic models, ICML 2009 - these measures borrow from language modelling research.
model / topic ID | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
1 | get | i'll | light | at | he | come | got | go | was | blues | will | oh | dance | over | she
1 | gonna | never | night | down | his | me, | they | we're | now | are | let | do | (i | been | her
1 | yeah, | see | shine | old | him | better | up | hey | never | as | heart | want | back | oh, | she's
1 | yeah | you're | sun | out | man | if | ain't | - | have | day | our | baby | ah | long | girl
1 | wanna | way | tonight | run | he's | do | out | let's | were | good | are | can't | bring | gone | woman
2 | oh | want | get | out | she | down | light | am | wanna | blues | if | are | hey, | he | was
2 | baby | do | ya | gloom | her | look | our | thro' | up | old | you, | they | ah | his | now
2 | gonna | if | ain't | off | she's | down, | will | lord | get | new | would | where | ha | him | out
2 | oh, | can't | got | black | girl | at | as | run | la | - | could | home | ah, | man | one
2 | yeah | i'll | na | them | got | stop | rain | jesus | let's | hey | me, | people | my, | he's | at
The topic coherence model combines a number of papers into one framework that allows evaluating the coherence of topics inferred by a topic model. In the context of this work, coherence is defined as the mutual support of sets of facts, where the facts are represented by the top N words from a topic. The pipeline is roughly:

- create tuples from the top N words in a topic, e.g. {(game), (ball)}, {(team), (ball)} or {(game, ball)}, {(team, ball)}
- measure the probability of those tuples from a reference corpus
- calculate a confirmation measure per tuple
- aggregate over all the tuples, for example by taking the mean
where PMI is

$$ PMI(w_i, w_j) = \log \frac{P(w_i, w_j) + \epsilon}{P(w_i)P(w_j)} $$
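A minimal sketch of that confirmation measure, aggregated by taking the mean over all top-word pairs of a single topic; document frequencies from the `processed` fake news documents stand in for the reference corpus probabilities, and the choice of topic and number of top words is arbitrary:

```python
# Estimate P(w) and P(w_i, w_j) as document frequencies, compute PMI per word
# pair and aggregate by taking the mean over the topic's top words.
import numpy as np
from itertools import combinations

def topic_pmi(top_words, docs, eps=1e-12):
    doc_sets = [set(d) for d in docs]
    n = len(doc_sets)
    def p(*words):
        return sum(1 for d in doc_sets if all(w in d for w in words)) / n
    pairs = combinations(top_words, 2)
    return np.mean([np.log((p(w1, w2) + eps) / (p(w1) * p(w2) + eps)) for w1, w2 in pairs])

top_words = [word for word, _ in lda_fake_35.show_topic(0, topn=6)]
topic_pmi(top_words, processed)
```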
As pointed out in Reading Tea Leaves: How Humans Interpret Topic Models [emphasis mine]:

> We emphasize that not measuring the internal representation of topic models is at odds with their presentation and development. Most topic modeling papers display qualitative assessments of the inferred topics or simply assert that topics are semantically meaningful ...
As we can see above, it is not immediately clear how the topics are semantically meaningful, even though the fit to the training data is good.
model / topic ID | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
1 | get | i'll | light | at | he | come | got | go | was | blues | will | oh | dance | over | she
1 | gonna | never | night | down | his | me, | they | we're | now | are | let | do | (i | been | her
1 | yeah, | see | shine | old | him | better | up | hey | never | as | heart | want | back | oh, | she's
1 | yeah | you're | sun | out | man | if | ain't | - | have | day | our | baby | ah | long | girl
1 | wanna | way | tonight | run | he's | do | out | let's | were | good | are | can't | bring | gone | woman
2 | oh | want | get | out | she | down | light | am | wanna | blues | if | are | hey, | he | was
2 | baby | do | ya | gloom | her | look | our | thro' | up | old | you, | they | ah | his | now
2 | gonna | if | ain't | off | she's | down, | will | lord | get | new | would | where | ha | him | out
2 | oh, | can't | got | black | girl | at | as | run | la | - | could | home | ah, | man | one
2 | yeah | i'll | na | them | got | stop | rain | jesus | let's | hey | me, | people | my, | he's | at
Chang et al., Reading Tea Leaves: How Humans Interpret Topic Models
Find the intruding word in sets of top words picked from a topic in a topic model, plus an intruder that has low probability for the current topic but high probability for some other topic. The more the intruder words picked by human judges vary, the less coherent the model is.
{dog, cat, horse, apple, pig, cow}
Douven et al., Measuring Coherence, https://www.researchgate.net/publication/220607660_Measuring_coherence
Chang et al., Reading Tea Leaves: How Humans Interpret Topic Models
The more the intruder words picked by human judges vary, the less coherent the model is.
I need word sets that have several equally plausible interpretations
{dog, cat, horse, apple, pig, cow}
{dog, carrot, horse, apple, pig, corn}
{cat, tuna, yarn, horse, stable, hay}
{cat, airport, yarn, horse, security, hay}
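Below is a sketch of how such intruder sets could be generated automatically from the model trained earlier, assuming `lda_fake_35` and `vocabulary` are still loaded (the topic indices are arbitrary); a human judge would then be asked to spot the odd one out.

```python
# Take the top words of one topic and add a word that is probable in another
# topic but not among the first topic's likely words, then shuffle.
import random

def intruder_item(lda, dictionary, topic_id, other_topic_id, n_top=5):
    top_ids = [wid for wid, _ in lda.get_topic_terms(topic_id, topn=n_top)]
    likely = {wid for wid, _ in lda.get_topic_terms(topic_id, topn=100)}
    intruder_id = next(wid for wid, _ in lda.get_topic_terms(other_topic_id, topn=50)
                       if wid not in likely)
    words = [dictionary[wid] for wid in top_ids + [intruder_id]]
    random.shuffle(words)
    return words, dictionary[intruder_id]

intruder_item(lda_fake_35, vocabulary, topic_id=9, other_topic_id=11)
```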
Supervised models are trained on labelled data and optimised to maximise an external metric such as log loss or accuracy. Unsupervised models, on the other hand, at their simplest do frequency counting of terms in context, possibly aiming to fit a predefined parameterised distribution to be consistent with the statistics of some unlabelled data set.
More recently, maximising the similarity of words that appear in similar contexts has been put into a neural network context. Evaluating the trained model often starts by "eye-balling" the results, i.e. checking that your own expectations of similarity are fulfilled by the model.
Documents that talk about football should be in the same category, and "cat" is more similar to "dog" than to "pen". Tools such as pyLDAvis and gensim provide many different ways to get an overview of the learned model or a single metric that can be maximised: topic coherence, perplexity, ontological similarity, term co-occurrence, word analogy. Using these methods without a good understanding of what the metric represents can give misleading results. The unsupervised models are also often used as part of larger processing pipelines; it is not clear whether these intrinsic evaluation measures are appropriate in such cases, and perhaps the models should instead be evaluated against an external metric like accuracy for the entire pipeline.
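As a hedged sketch of what such an extrinsic evaluation could look like: use the document-topic proportions as features for a downstream classifier and score the whole pipeline by accuracy. The `labels` array here is hypothetical (the fake news data above has no obvious target) and the classifier choice is arbitrary.

```python
# Turn the LDA document-topic distributions into a dense feature matrix and
# cross-validate a simple classifier against some downstream label.
from gensim.matutils import corpus2dense
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

doc_topics = corpus2dense(lda_fake[corpus_fake], num_terms=lda_fake.num_topics).T
# `labels`: hypothetical array of one label per document for your downstream task
scores = cross_val_score(LogisticRegression(max_iter=1000), doc_topics, labels, cv=5)
scores.mean()
```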
In this talk I will give an intuition of what the evaluation metrics are trying to achieve, give some recommendations for when to use them and what kind of pitfalls one should be aware of when using LDA or word embeddings, and discuss the inherent difficulty in measuring or even defining semantic similarity concisely.
Is "cat" more similar to "tiger" than to "dog"? Ideally this information should be captured in a single metric that can be maximised.
Models like word2vec and GloVe are common ways of creating dense vector representations of word meaning. These allow the similarity of words to be measured directly from the vectors.
Ideally we would be able to say whether a model is intrinsically -- or objectively -- good or bad. Measuring the quality of a topic model, or some other distributional/distributed model, is difficult to do intrinsically, mainly because an objective view of the goodness of the model is elusive. The similarity of pairs of words, or the assignment of documents to topics, is contextual; cat is close to dog if the context is bicycle, but what if the context is kitten, mouse or ball?
This may seem like a silly example, but this is how distributional composition was evaluated not that long ago. Evaluating topic models is easier than evaluating the similarity of certain word pairs, as the topic model itself provides some context. Typically the evaluation of the model is done using a list of top words from the topics.
In [6]:
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
def randrange(n, vmin, vmax):
'''
Helper function to make an array of random numbers having shape (n, )
with each number distributed Uniform(vmin, vmax).
'''
return (vmax - vmin)*np.random.rand(n) + vmin
fig = plt.figure(figsize=(25, 8))
ax = fig.add_subplot(121, projection='3d')
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
n, b = 2, 8
# For each set of style and range settings, plot n random points in the box
# defined by x in [23, 32], y in [0, 100], z in [zlow, zhigh].
for c, m, zlow, zhigh in [('r', 'o', -50, -25), ('b', '^', -b, -5)]:
xs = randrange(n, 23, 32)
ys = randrange(n, 0, 100)
zs = randrange(n, zlow, zhigh)
ax.scatter(xs, ys, zs, c=c, marker=m)
ax2.scatter(xs, ys, c=c, marker=m)
ax3.scatter(xs, np.zeros(ys.shape), c=c, marker=m)
In [7]:
plt.show()
In [13]:
x
Out[13]:
In [46]:
np.arange(binom.ppf(0.01, n, p),
binom.ppf(0.99, n, p))
Out[46]:
In [67]:
from scipy.stats import binom
import numpy as np
fig, ax = plt.subplots(1, 1)
for n, p in [(20, 0.5), (30, 0.5), (40, 0.5)]:
x = np.arange(binom.ppf(0.01, n, p),
binom.ppf(0.99, n, p), step=1)
ax.plot(np.arange(0, 30), binom.pmf(np.arange(0, 30), n, p), label=f'(n={n}, p={p})')
plt.legend()
plt.show()
intrinsic evaluation is nearly impossible
river delta, river estuary (suisto, estuaari?) - why doesn't Finnish have an equivalent for estuary?
king - man + woman == queen
cider - alcohol == apple juice
apple + drink == cider
cider - apple + (hops + barley) == beer
Is "good" closer to "apple" than it is to "bad"?
Is "good" closer to "maybe" than it is to "and"?
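These analogy questions are usually answered with vector arithmetic on pretrained embeddings; a sketch with gensim below, where the path to the pretrained word2vec file is a placeholder assumption - substitute whatever vectors you actually have.

```python
# Load pretrained vectors and ask the classic analogy question: king - man + woman ~ queen.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format('./GoogleNews-vectors-negative300.bin',
                                            binary=True)  # placeholder path
vectors.most_similar(positive=['king', 'woman'], negative=['man'], topn=3)
```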
In [1]:
from spacy import en
In [3]:
spc = en.English()
In [11]:
len(spc.vocab)
Out[11]:
In [10]:
lex_good = spc.vocab['good']
lex_bad = spc.vocab['bad']
lex_good.vector - lex_bad.vector
Out[10]:
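The questions above can be checked directly against the spacy vectors loaded in the previous cells; a small sketch using a hand-rolled cosine similarity helper (not part of spacy's API).

```python
import numpy as np

def cosine(u, v):
    # plain cosine similarity between two word vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

good, bad, apple = spc.vocab['good'], spc.vocab['bad'], spc.vocab['apple']
# is "good" closer to "apple" than it is to "bad"?
cosine(good.vector, apple.vector), cosine(good.vector, bad.vector)
```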
https://www.youtube.com/watch?v=uLgn3geod9Q (How a dictionary writer defines English)
"when we revise a dictionary, you go through it A-Z and you take all of the instances for the word that you're looking at. You're mathing up the word and its contextual use ... "
antidisestablishmentarianism
Demonstration of the Topic Coherence model in gensim.
Topic Coherence