Sentiment analysis, sometimes called "opinion mining", is defined by Wikipedia as the use of natural language processing, text analysis, and computational linguistics to systematically identify, extract, and study affective states and subjective information.
Up to now we've used the occurrence of specific words and word patterns to perform text classifications. In this section we'll take machine learning even further, and try to extract intended meanings from complex phrases. Some simple examples include phrases whose sentiment is unambiguous, like "I love this movie" or "This film was terrible."

However, things get harder with phrases that involve negation, sarcasm, or mixed signals, like "I do not dislike horror movies" or "The movie was not bad at all."
The way this is done is through complex machine learning algorithms like word2vec. The idea is to create numerical arrays, or word embeddings, for every word in a large corpus. Each word is assigned its own vector in such a way that words that frequently appear together in the same context are given vectors that are close together. The result is a model that may not know that a "lion" is an animal, but does know that "lion" is closer in context to "cat" than "dandelion".
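To make this concrete, here is a minimal sketch of training a toy word2vec model with the gensim library (gensim is an assumption here - it isn't used elsewhere in this section - and the tiny corpus below is purely illustrative; gensim 4.x names the parameter vector_size, where older releases used size):

from gensim.models import Word2Vec

# A toy corpus: each document is a list of tokens.
# Real training uses millions of sentences.
corpus = [
    ['the', 'lion', 'is', 'a', 'big', 'cat'],
    ['the', 'cat', 'is', 'a', 'popular', 'pet'],
    ['a', 'dandelion', 'is', 'a', 'yellow', 'flower'],
]

# Train a small word2vec model: each word gets a 50-dimensional vector
model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1)

print(model.wv['lion'].shape)              # (50,) - the raw embedding
print(model.wv.similarity('lion', 'cat'))  # cosine similarity between two words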
It is important to note that building useful models takes a long time - hours or days to train on a large corpus - and that for our purposes it is best to import an existing model rather than take the time to train our own.
Up to now we've been using spaCy's smallest English language model, en_core_web_sm (35MB), which provides vocabulary, syntax, and entities, but not vectors. To take advantage of built-in word vectors we'll need a larger library. We have a few options:
en_core_web_md (116MB) - Vectors: 685k keys, 20k unique vectors (300 dimensions)

or

en_core_web_lg (812MB) - Vectors: 685k keys, 685k unique vectors (300 dimensions)

If you plan to rely heavily on word vectors, consider using spaCy's largest vector library, containing over one million unique vectors:

en_vectors_web_lg (631MB) - Vectors: 1.1m keys, 1.1m unique vectors (300 dimensions)
For our purposes en_core_web_md should suffice.
activate spacyenv                           # if using a virtual environment
python -m spacy download en_core_web_md
python -m spacy download en_core_web_lg     # optional library
python -m spacy download en_vectors_web_lg  # optional library

If successful, you should see a message like:
**Linking successful**
C:\Anaconda3\envs\spacyenv\lib\site-packages\en_core_web_md -->
C:\Anaconda3\envs\spacyenv\lib\site-packages\spacy\data\en_core_web_md
You can now load the model via spacy.load('en_core_web_md')
Of course, there is one more option: train our own vectors from a large corpus of documents. Unfortunately, this would take a prohibitively large amount of time and processing power.
Word vectors - also called word embeddings - are mathematical descriptions of individual words such that words that appear frequently together in the language will have similar values. In this way we can mathematically derive context. As mentioned above, the word vector for "lion" will be closer in value to "cat" than to "dandelion".
So what does a word vector look like? Since spaCy employs 300 dimensions, word vectors are stored as 300-item arrays.
Note that we would see the same set of values with en_core_web_md and en_core_web_lg, as both were trained using the word2vec family of algorithms.
In [3]:
# Import spaCy and load the language library
import spacy
nlp = spacy.load('en_core_web_lg') # make sure to use a larger model!
In [4]:
nlp(u'lion').vector
Out[4]:
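Since the raw output is long, a quicker way to confirm the dimensionality is to inspect the array's shape (the vector is a standard numpy array):

nlp(u'lion').vector.shape    # returns (300,)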
What's interesting is that Doc and Span objects themselves have vectors, derived from the averages of individual token vectors.
This makes it possible to compare similarities between whole documents.
In [5]:
doc = nlp(u'The quick brown fox jumped over the lazy dogs.')
doc.vector
Out[5]:
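A quick way to verify this averaging behavior (a sketch, assuming doc.vector is the arithmetic mean of the token vectors, which holds for these models):

import numpy as np

# The Doc vector should equal the mean of its token vectors:
token_mean = np.mean([token.vector for token in doc], axis=0)
print(np.allclose(doc.vector, token_mean))   # expect True

# Span objects work the same way:
span = doc[1:4]            # 'quick brown fox'
print(span.vector.shape)   # (300,)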
In [6]:
# Create a three-token Doc object:
tokens = nlp(u'lion cat pet')
# Iterate through token combinations:
for token1 in tokens:
    for token2 in tokens:
        print(token1.text, token2.text, token1.similarity(token2))
In [7]:
# For brevity, assign each token a name
a, b, c = tokens
# Display as a Markdown table (this only works in Jupyter!)
from IPython.display import Markdown, display
display(Markdown(f'<table><tr><th></th><th>{a.text}</th><th>{b.text}</th><th>{c.text}</th></tr>\
<tr><td>**{a.text}**</td><td>{a.similarity(a):{.4}}</td><td>{b.similarity(a):{.4}}</td><td>{c.similarity(a):{.4}}</td></tr>\
<tr><td>**{b.text}**</td><td>{a.similarity(b):{.4}}</td><td>{b.similarity(b):{.4}}</td><td>{c.similarity(b):{.4}}</td></tr>\
<tr><td>**{c.text}**</td><td>{a.similarity(c):{.4}}</td><td>{b.similarity(c):{.4}}</td><td>{c.similarity(c):{.4}}</td></tr>'))
As expected, we see the strongest similarity between "cat" and "pet", the weakest between "lion" and "pet", and some similarity between "lion" and "cat". A word will have a perfect (1.0) similarity with itself.
If you're curious, the similarity between "lion" and "dandelion" is very small:
In [8]:
nlp(u'lion').similarity(nlp(u'dandelion'))
Out[8]:
In [9]:
# Create a three-token Doc object:
tokens = nlp(u'like love hate')
# Iterate through token combinations:
for token1 in tokens:
    for token2 in tokens:
        print(token1.text, token2.text, token1.similarity(token2))
Note that "like" and "love" score as highly similar, but so may "love" and "hate" - opposites often receive similar vectors because they tend to appear in the same contexts.
It's sometimes helpful to aggregate all 300 dimensions into a single Euclidean (L2) norm, computed as the square root of the sum of the squared components. This is accessible through the .vector_norm token attribute. Other helpful attributes include .has_vector and .is_oov, or out of vocabulary.
For example, our 685k vector library may not have the word "nargle". To test this:
In [10]:
tokens = nlp(u'dog cat nargle')
for token in tokens:
print(token.text, token.has_vector, token.vector_norm, token.is_oov)
Indeed we see that "nargle" does not have a vector, so the vector_norm value is zero, and it identifies as out of vocabulary.

Believe it or not, we can also calculate new vectors by adding and subtracting existing ones. A famous example is the analogy "king" - "man" + "woman" ≈ "queen". Below we compute this new vector, then search the vocabulary for the words closest to it, scoring closeness with cosine similarity (the cosine of the angle between two vectors, which is 1.0 for identical directions and near 0 for unrelated ones):
In [11]:
from scipy import spatial

# Cosine similarity = 1 - cosine distance between two vectors
cosine_similarity = lambda x, y: 1 - spatial.distance.cosine(x, y)

king = nlp.vocab['king'].vector
man = nlp.vocab['man'].vector
woman = nlp.vocab['woman'].vector

# Now we find the closest vector in the vocabulary to the result of "king" - "man" + "woman"
new_vector = king - man + woman
computed_similarities = []

for word in nlp.vocab:
    # Ignore words without vectors and mixed-case words:
    if word.has_vector:
        if word.is_lower:
            if word.is_alpha:
                similarity = cosine_similarity(new_vector, word.vector)
                computed_similarities.append((word, similarity))

computed_similarities = sorted(computed_similarities, key=lambda item: -item[1])

print([w[0].text for w in computed_similarities[:10]])
So in this case, "king" was still closer than "queen" to our calculated vector, although "queen" did show up!
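As an aside, newer versions of spaCy expose a built-in nearest-neighbor search that avoids the manual loop above; a sketch, assuming the Vectors.most_similar method is available in your installed version (spaCy v2.1+):

import numpy as np

# Query the vector table directly for the 10 nearest neighbors of our computed vector
keys, best_rows, scores = nlp.vocab.vectors.most_similar(np.asarray([new_vector]), n=10)

# keys holds hash values; look each one up in the StringStore to recover the text
print([nlp.vocab.strings[int(k)] for k in keys[0]])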