In this notebook, I will demonstrate several text analytics techniques that can be used to analyze text corpora and extract interesting insights from them. Namely, throughout this notebook, I will use The Sherlock Holmes Books Collection to show how to (a) calculate various textual statistics; (b) create the social network among entities that appear in the books; (c) use a topic model to discover abstract topics in the text; and (d) utilize Word2Vec to find connections among various sections of the text, and make some predictions.
To perform the text analytics presented in this notebook, we will use the NLTK Python package, GraphLab Create's SFrame and SGraph objects, GraphLab's text analytics toolkit, and the Word2Vec deep-learning-inspired model implemented in the Gensim Python package.
The notebook is divided into the following sections:
Each section can be executed independently, so feel free to skip ahead; just remember to import all the required packages and define all the needed functions.
Required Python Packages:
Let's do some text analytics!
Before we begin, make sure you have installed all the required Python packages. (The instructions below use pip; you can use easy_install, too.) Also, consider using virtualenv for a cleaner installation experience instead of sudo. I also recommend running the code via IPython Notebook.
$ sudo pip install --upgrade gensim
$ sudo pip install --upgrade nltk
$ sudo pip install --upgrade graphlab-create
$ sudo pip install --upgrade pyldavis
You will need a product key for GraphLab Create, and you will also need to make the Stanford Named Entity Recognizer work with Python's NLTK.
After installing NLTK, download the punkt model from an interactive shell by importing nltk and running nltk.download(). In the resulting interactive window, navigate to the Models tab, select punkt, and download it to your system.
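Alternatively, the punkt model can be downloaded non-interactively; a one-liner equivalent of the steps above is:

import nltk
nltk.download('punkt')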
To prepare the Stanford Named Entity Recognizer to work on your system, make sure you read the following links: [1 - How to](http://textminingonline.com/how-to-use-stanford-named-entity-recognizer-ner-in-python-nltk-and-other-programming-languages), [2 - API](http://www.nltk.org/api/nltk.tag.html#module-nltk.tag.stanford), [3 - Stanford Parser FAQ](http://nlp.stanford.edu/software/parser-faq.shtml), [4 - download](http://nlp.stanford.edu/software/CRF-NER.html). 5 - In the extracted directory, it seems to help to run the stanford-ner.jar file on some computer configurations.
Take note of what directory you downloaded and extracted the files to as you will need the location to pass the classifier and Stanford NER as arguments later in this notebook.
Also, in case you haven't already, make sure you are running the latest [Java JDK](http://www.oracle.com/technetwork/java/javase/downloads/index.html).
For a quick test, you can open an IPython shell and try the following:
from nltk.tag import StanfordNERTagger
st = StanfordNERTagger('[path_to_your_downloaded_package_classifiers_directory]/english.all.3class.distsim.crf.ser.gz',
                       '[path_to_your_downloaded_package_root_directory]/stanford-ner.jar')
st.tag('When we turned him over, the Boots recognized him at once as being the same gentleman who had engaged the room under the name of Joseph Stangerson.'.split())
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." -The Adventure of the Copper Beeches
Throughout this notebook, we will be analyzing the Sherlock Holmes story collection. First, we will download the stories in ASCII format from the sherlock-holm.es website. The sherlock-holm.es website contains over sixty downloadable stories; we will use the following code to download them and insert them into an SFrame object.
Important Note: in some countries, such as the U.S., a few of Sherlock Holmes's books & stories are still under copyright restrictions. For more information, please consult the following website, and read the guidelines that appear at the end of the sherlock-holm.es ASCII download page.
In [ ]:
import re
import urllib2
import graphlab as gl
BASE_DIR = "/Users/pablo/fromHomeMac/sherlock/data" # NOTE: Update BASE_DIR to your own directory path
books_url = "http://sherlock-holm.es/ascii/"
re_books_links = re.compile("\"piwik_download\"\s+href=\"(?P<link>.*?)\">(?P<title>.*?)</a>", re.MULTILINE)
html = urllib2.urlopen(books_url).read()
books_list = [m.groupdict() for m in re_books_links.finditer(html)]
print books_list
We got the books' titles and links; now let's download the books' texts.
In [ ]:
# Filter books due to copyright issues. In this code, we filtered the "The Complete Canon", "The Case-Book of Sherlock Holmes",
# and "The Canon — U.S. edition" books (for more information please read the note above).
filtered_books = set(["The Complete Canon", "The Case-Book of Sherlock Holmes", "The Canon — U.S. edition" ])
books_list = filter(lambda d: d['title'] not in filtered_books, books_list )
#Download the books' texts (to avoid overloading the website we download the texts sequentially rather than in parallel)
for d in books_list:
d['text'] = urllib2.urlopen("http://sherlock-holm.es" + d['link']).read().strip()
Let's load the dict list into an SFrame object.
In [ ]:
sf = gl.SFrame(books_list).unpack("X1", column_name_prefix="")
sf.save("%s/books.sframe" % BASE_DIR)
sf.head(3)
Out[ ]:
In this section, I will demonstrate how straightforward it is to utilize GraphLab Create's SFrame object to calculate & visualize various statistics.
In the previous section, we created an SFrame object which consists of 64 texts. Let us first load the SFrame object.
In [ ]:
import graphlab as gl
import re
BASE_DIR = "/Users/pablo/fromHomeMac/sherlock/data" # NOTE: Update BASE_DIR to your own directory path
gl.canvas.set_target('ipynb')
sf = gl.load_sframe("%s/books.sframe" % BASE_DIR)
Using Python, it is very easy to calculate the number of characters in a text; we just need to use the built-in len function. Let's calculate the number of characters in each downloaded text using the len function and SArray's apply function (notice that each column in an SFrame object is an SArray object).
In [ ]:
sf['chars_num'] = sf['text'].apply(lambda t: len(t))
sf.head(3)
Out[ ]:
Let's use the show function to visualize the distribution of text lengths across our downloaded texts.
In [ ]:
sf['chars_num'].show()
We can see that the mean number of characters in the downloaded stories is 95,020.42, and the maximal number of characters in a story is 662,242. Let's also calculate the number of words in each text. Calculating the number of words in a text is a little trickier, and there are several methods to perform this task, such as the following:
In [ ]:
text = """I think that you know me well enough, Watson, to understand that I am by no means a nervous man. At the same time,
it is stupidity rather than courage to refuse to recognize danger when it is close upon you."""
#using the split function
print text.split()
In [ ]:
#Using NLTK
#Note: Remember to download NLTK's punkt package by running nltk.download() from the interactive Python shell
import nltk
print nltk.word_tokenize(text)
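The next paragraph also refers to a regular-expression-based approach; here is a minimal sketch of one (my own addition, using the same "(\w+)" pattern that later cells in this notebook rely on):

In [ ]:
#Using a regular expression (a sketch; the same pattern is reused later in this notebook)
import re
re_words_split = re.compile("(\w+)")
print re_words_split.findall(text)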
You can see that both the split function and the regular expression work pretty well. However, it is important to notice that the regular expression can mistakenly split words, such as "S.H.", into multiple words, while the split function doesn't remove punctuation. Therefore, if we want to be precise, we can use NLTK's tokenize package and remove punctuation from the results. Nevertheless, for our case, it is good enough to use the regular expression method to count words.
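For completeness, here is one possible way (an illustrative sketch, not code from the original notebook) to combine NLTK tokenization with punctuation removal, as mentioned above:

In [ ]:
#keep only NLTK tokens that contain at least one alphanumeric character
print [w for w in nltk.word_tokenize(text) if any(c.isalnum() for c in w)]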
In [ ]:
sf['words_num'] = sf['text'].apply(lambda t: len(re_words_split.findall(t)))
We can also use NLTK to count the number of sentences in each story.
In [ ]:
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
def txt2sentences(txt, remove_none_english_chars=True):
    """
    Split the English text into sentences using NLTK
    :param txt: input text.
    :param remove_none_english_chars: if True then remove non-English chars from the text
    :return: generator that yields the sentences of the original input text, one at a time.
    :rtype: generator of str
    """
    # decode to utf8 to avoid encoding problems - if someone has a better idea of how to solve the encoding
    # problem, I would love to learn about it.
    txt = txt.decode("utf8")
    # split the text into sentences using the NLTK package
    for s in tokenizer.tokenize(txt):
        if remove_none_english_chars:
            # remove non-English chars
            s = re.sub("[^a-zA-Z]", " ", s)
        yield s
sf['sentences_num'] = sf['text'].apply(lambda t: len(list(txt2sentences(t))))
In [ ]:
sf[['chars_num','words_num','sentences_num']].show()
Until now, I have calculated very basic text statistics. Let's try to do something more complicated, like counting the number of times the words 'Sherlock', 'Watson', and 'Elementary' appear in each story. We will do it using GraphLab's text_analytics.count_words toolkit.
Note: To count the frequency with which a word appears in a text, one can also consider using the collections.Counter class.
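As an illustrative sketch of that alternative (my own addition, counting words in the first story directly with collections.Counter instead of GraphLab):

In [ ]:
#a sketch: count word frequencies in the first story with collections.Counter
import re
from collections import Counter
re_words_split = re.compile("(\w+)")
word_counts = Counter(w.lower() for w in re_words_split.findall(sf[0]['text']))
print word_counts['sherlock'], word_counts['watson'], word_counts['elementary']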
In [ ]:
sf['words_count'] = gl.text_analytics.count_words(sf['text'], to_lower=True)
sf['sherlock_count'] = sf['words_count'].apply(lambda d: d.get('sherlock',0))
sf['watson_count'] = sf['words_count'].apply(lambda d: d.get('watson',0))
sf['elementary_count'] = sf['words_count'].apply(lambda d: d.get('elementary',0))
sf[['sherlock_count', 'watson_count', 'elementary_count']].show()
It is nice to see that the mean number of times the word 'Sherlock' appears in the stories is 9.609, while the mean for the word 'Watson' is only 1.734. Moreover, there are stories, such as The Adventure of the Lion's Mane, in which the word 'Sherlock' doesn't appear even once.
Let's try to use simple linear regression to predict the number of times the word 'Sherlock' appears in a text based on the number of times the word 'Watson' appears in the text.
In [ ]:
linear_reg = gl.linear_regression.create(sf, target='sherlock_count', features=['watson_count'])
linear_reg.show()
According to the simple linear regression, we obtain an equation of the form sherlock_count ≈ β0 + β1 · watson_count, where the fitted coefficients can be read from the model summary above.
There are a lot of other really interesting insights that one can discover using a similar methodology. I leave it to the reader to discover these insights on their own. Let's move to the next section and create some nice graphs using various entity extraction tools.
"Listen, what I said before John, I meant it. I don’t have friends; I’ve just got one." -Sherlock, The Hounds of Baskerville, 2012
One of my main fields of interest is social networks. I love to study and visualize graphs of various types of networks. One of the nice studies that I read not long ago shows that it is possible to create the social network of book characters. For example, Miranda et al. built and analyzed a social network utilizing the Odyssey of Homer.
To manually create a precise social network of Sherlock Holmes characters, we can read the stories, and whenever two characters have a conversation or appear in the same scene, add nodes with the two characters' names to the network (if they are not in the network already) and create a link between them. If we want to create a weighted social network, we can also add a weight to each link equal to the number of times the two characters talked to each other.
When processing a large text corpus, manually constructing a social network this way can be very time-consuming. Therefore, we would like to perform this process automatically. One way to construct the social network is by using various NLP algorithms that analyze the text and "understand" the relationships between two entities. However, I am not familiar with open source tools that can analyze a text corpus and infer the connections between two entities with high precision.
In this section, I will demonstrate some very simple techniques that can be utilized to study the social connections among characters in Sherlock Holmes stories. These techniques won't create the most precise social network. However, the created network is sufficient to observe some interesting insights about the relationships among the stories' characters.
Using these techniques, we will split the downloaded Sherlock Holmes stories into sentences, and, using a predefined list of book character names, we will create a social network by adding a link between every two characters that appear in the same sentence. Let's start constructing the social network by splitting the stories into sentences.
In [ ]:
import graphlab as gl
import re, nltk
BASE_DIR = "/Users/pablo/fromHomeMac/sherlock/data" # NOTE: Update BASE_DIR to your own directory path
gl.canvas.set_target('ipynb')
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
def txt2sentences(txt, remove_none_english_chars=True):
    """
    Split the English text into sentences using NLTK
    :param txt: input text.
    :param remove_none_english_chars: if True then remove non-English chars from the text
    :return: generator that yields the sentences of the original input text, one at a time.
    :rtype: generator of str
    """
    txt = txt.decode("utf8")
    # split the text into sentences using the NLTK package
    for s in tokenizer.tokenize(txt):
        if remove_none_english_chars:
            # remove non-English chars
            s = re.sub("[^a-zA-Z]", " ", s)
        yield s
sf = gl.load_sframe("%s/books.sframe" % BASE_DIR)
sf['sentences'] = sf['text'].apply(lambda t: list(txt2sentences(t)))
In [ ]:
sf_sentences = sf.flat_map(['title', 'text'], lambda t: [[t['title'],s.strip()] for s in txt2sentences(t['text'])])
sf_sentences = sf_sentences.rename({'text': 'sentence'})
re_words_split = re.compile("(\w+)")
#split each sentence into words
sf_sentences['words'] = sf_sentences['sentence'].apply(lambda s:re_words_split.findall(s))
sf_sentences.save("%s/sentences.sframe" % BASE_DIR)
sf_sentences.head(3)
Out[ ]:
We created an SFrame named sf_sentences in which each row contains a single sentence. Now let's find out which two or more characters from the following link appear in the same sentences. Notice that we only use the characters' unique names so we don't mix up characters with similar names. For example, the name Holmes can refer to both Sherlock Holmes and Mycroft Holmes.
In [ ]:
main_characters_set = set(["Irene","Mycroft","Lestrade","Sherlock","Moran","Moriarty","Watson" ])
sf_sentences['characters'] = sf_sentences['words'].apply(lambda w: list(set(w) & main_characters_set))
Now the 'characters' column contains the names of the main characters that appear together in the same sentence. Let's use this information to create the characters' social network by constructing an SGraph object.
In [ ]:
import itertools
from collections import Counter
from graphlab import SGraph, Vertex, Edge
def get_characters_graph(sf, min_edge_strength=1):
    """
    Constructs a social network from an input SFrame. In the social network the vertices are the characters,
    and the edges are only between characters that appear in the same sentence at least min_edge_strength times
    :param sf: input SFrame object that contains a 'characters' column
    :param min_edge_strength: minimal connection strength between two characters.
    :return: SGraph object constructed from the input SFrame. The graph only contains edges with
        at least the input minimal strength between the characters.
    :rtype: gl.SGraph
    """
    # filter out sentences with fewer than two characters
    sf['characters_num'] = sf['characters'].apply(lambda l: len(l))
    sf = sf[sf['characters_num'] > 1]
    characters_links = []
    for l in sf['characters']:
        # if two or more characters appear in the same sentence, create all link combinations between
        # the characters (order doesn't matter)
        characters_links += itertools.combinations(l, 2)
    # calculate the connection strength between each pair of characters
    c = Counter(characters_links)
    g = SGraph()
    edges_list = []
    for l, s in c.iteritems():
        if s < min_edge_strength:
            # filter out connections that appear fewer than min_edge_strength times
            continue
        edges_list.append(Edge(l[0], l[1], attr={'strength': s}))
    g = g.add_edges(edges_list)
    return g
g = get_characters_graph(sf_sentences)
g.show(vlabel="__id", elabel="strength", node_size=200)
According to Sherlock's social network, we can notice that Sherlock has two main social circles. The first is a circle of friends that includes Mycroft and Lestrade. The second is a circle of enemies that includes Moriarty and Moran. Additionally, we can notice that Watson is strongly connected both to Sherlock and to Sherlock's nemesis Moriarty.
Let's repeat the experiment, only this time we also add minor characters from the following link.
In [ ]:
minor_characters_set = set(["Irene","Mycroft","Lestrade","Sherlock","Moran","Moriarty","Watson","Baynes","Billy","Bradstreet","Gregson"
,"Hopkins","Hudson","Shinwell","Athelney","Mary","Langdale","Toby","Wiggins"])
sf_sentences['characters'] = sf_sentences['words'].apply(lambda w: list(set(w) & minor_characters_set))
sf_sentences['characters_num'] = sf_sentences['characters'].apply(lambda l: len(l))
sf_sentences = sf_sentences[sf_sentences['characters_num'] > 1]
g = get_characters_graph(sf_sentences)
g.show(vlabel="__id", elabel="strength", node_size=200)
We got a more complex social network with the additional minor characters. I believe this social network graph can be improved by increasing the scope of the character search from a single sentence to multiple sentences, or by using the characters' additional names and nicknames. I leave it to the reader to try to improve the graph on their own; one possible direction is sketched below.
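As one such direction, here is a rough sketch of my own (with a hypothetical txt2window_characters helper and an arbitrary window size of 3, not code from the original analysis) that widens the co-occurrence scope from a single sentence to a sliding window of consecutive sentences:

In [ ]:
#a sketch: detect characters that co-occur within a sliding window of consecutive sentences
def txt2window_characters(txt, characters_set, window=3):
    sentences = [re_words_split.findall(s) for s in txt2sentences(txt)]
    for i in range(max(len(sentences) - window + 1, 1)):
        words = set(w for s in sentences[i:i + window] for w in s)
        yield list(words & characters_set)

sf_windows = sf.flat_map(['title', 'characters'],
                         lambda t: [[t['title'], c] for c in txt2window_characters(t['text'], minor_characters_set)])
g_window = get_characters_graph(sf_windows, min_edge_strength=3)
g_window.show(vlabel="__id", elabel="strength", node_size=200)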
One of the disadvantages of the above method is that you need a predefined list of names to create the social network. However, in many cases this list is unavailable. Therefore, we need another method to find entities in the text. One common method to achieve this is Named Entity Recognition (NER). By using NER algorithms, we can classify elements in the text into predefined categories, such as the names of persons, organizations, and locations. There are many tools that can perform NER, such as OpenNLP, the Stanford Named Entity Recognizer, and the Rosette Entity Extractor. In this notebook, we will use the Stanford Named Entity Recognizer via NLTK. We will use NER algorithms to automatically construct an entity list of the most common characters in the books.
Please note that making NLTK run the Stanford Named Entity Recognizer can be non-trivial. For more details on how to make NLTK work with the Stanford Named Entity Recognizer, please read the information provided in the following links: 1, 2, & 3.
NOTE: running the next code section can take several minutes.
In [ ]:
from nltk.tag import StanfordNERTagger
sf_books = gl.load_sframe("%s/books.sframe" % BASE_DIR)
#IMPORTANT: The directory that includes the Stanford Named Entity Recognizer files needs to be updated according
# to the local installation directory
#STANFORD_DIR = BASE_DIR + "/stanford-ner-2015-06-16/"
#need to insert as parameters the stanford-ner.jar and the type of classifier we want to use
st = StanfordNERTagger('/Users/pablo/fromHomeMac/sherlock/stanford-ner-2015-04-20/classifiers/english.all.3class.distsim.crf.ser.gz',
'/Users/pablo/Downloads/stanford-ner-2015-04-20/stanford-ner.jar')
st.java_options = "-Xmx4096m"
sf_books['sentences'] = sf_books['text'].apply(lambda t: list(txt2sentences(t)))
sf_books['words'] = sf_books['sentences'].apply(lambda l: [re_words_split.findall(s) for s in l])
sf_books['NER'] = sf_books['words'].apply(lambda w: st.tag_sents(w))
sf_books['person'] = sf_books['NER'].apply(lambda n: [e[0] for s in n for e in s if e[1] == 'PERSON'])
person_list = []
for p in sf_books['person']:
person_list += p
print len(set(person_list))
In [ ]:
from collections import Counter
c = Counter(person_list)
# We are removing some mistakenly classified words, overly common names, etc., to make the constructed social network
# more readable.
characters_set = set(i[0] for i in c.most_common(200)) - set(['the', 'You', 'Mrs', 'He', 'Dr', 'me','did', 'Mr',
'Now', 'My', 'Miss', 'of', 'Sir', 'Here', 'All', 'Our', 'sir',
'man', 'father', 'What', 'There', 'When', 'no', 'Lord', 'you', 'St',
'John', 'James', 'Holmes', 'Arthur', 'Conan', 'Doyle', 'Lady'])
sf_sentences = gl.load_sframe("%s/sentences.sframe" % BASE_DIR)
sf_sentences['characters'] = sf_sentences['words'].apply(lambda w: list(set(w) & characters_set))
sf_sentences['characters_num'] = sf_sentences['characters'].apply(lambda l: len(l))
sf_sentences = sf_sentences[sf_sentences['characters_num'] > 1]
g = get_characters_graph(sf_sentences, min_edge_strength=3)
print g.summary()
g.show(vlabel="__id", elabel="strength", node_size=200)
In [ ]:
# adding a function to clean the graph, as in some cases the Stanford NER tags 'I' as a person.
def clean_graph(g, remove_entities_set):
vertices = g.vertices[g.vertices["__id"].apply(lambda v: v not in remove_entities_set)]
edges = g.edges[g.edges.apply(lambda e: e["__src_id"] not in remove_entities_set and e["__dst_id"] not in remove_entities_set)]
return gl.SGraph(vertices, edges)
In [ ]:
#cleaning the graph and displaying it again
g = clean_graph(g, {"I"})
g.show(vlabel="__id", elabel="strength", node_size=200)
The NER algorithm did a pretty good job, and most of the names of the identified entities look logical (at least to me). Additionally, we can understand the links between the various book characters. We can also notice that in many of the graph's components that have only two vertices, the connection is between a character's first and last names. Let's use GraphLab's graph_analytics toolkit and focus on the social network's largest component.
In [ ]:
def get_graph_largest_component(g):
    """
    Returns a graph with the largest component of the input graph
    :param g: input graph (SGraph object)
    :return: a graph of the largest component in the input object
    :rtype: gl.SGraph
    """
    cc = gl.connected_components.create(g)
    # add to each vertex its component id
    g.vertices['component_id'] = cc['graph'].vertices['component_id']
    # find the component id of the largest component
    largest_component_id = cc['component_size'].sort('Count', ascending=False)[0]['component_id']
    largest_component_vertices = g.vertices.filter_by(largest_component_id, 'component_id')['__id']
    h = g.get_neighborhood(largest_component_vertices, 1)
    return h
h = get_graph_largest_component(g)
h.show(vlabel="__id", elabel="strength", node_size=300)
In [ ]:
h = clean_graph(h, {"I"})
h.show(vlabel="__id", elabel="strength", node_size=300)
According to the above graph, which can be created almost automatically, we can easily identify the main characters in the stories. Additionally, we can observe that the strongest connection is between Sherlock and Watson. Moreover, we can see various connections among the main and minor characters of the books. However, from only looking at the graph, it is non-trivial to understand the various communities and their relationships.
Using similar methods, we can learn more about each character by finding connections between persons and locations, or between persons and organizations. I leave it to the reader to find additional insights about the various characters on their own; a possible starting point is sketched below.
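For instance, here is a rough sketch of my own (not part of the original notebook) that counts how often a person and a location are tagged by the Stanford NER in the same sentence, reusing the NER output computed above:

In [ ]:
#a sketch: count how often a PERSON and a LOCATION are tagged in the same sentence
import itertools
from collections import Counter

def person_location_pairs(tagged_sentences):
    for s in tagged_sentences:
        persons = set(e[0] for e in s if e[1] == 'PERSON')
        locations = set(e[0] for e in s if e[1] == 'LOCATION')
        for pair in itertools.product(persons, locations):
            yield pair

person_location_links = []
for tagged in sf_books['NER']:
    person_location_links += list(person_location_pairs(tagged))
print Counter(person_location_links).most_common(10)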
"I have known him for some time," said I, "but I never knew him do anything yet without a very good reason, and with that our conversation drifted off on to other topics. -Memoirs of Sherlock Holmes
According to the Wikipedia article, a topic model is a type of statistical model for discovering the abstract "topics" that occur in a collection of documents. I personally find topic models an interesting tool for exploring a large text corpus. In this section, we are going to demonstrate how it is possible to utilize GraphLab's topic model toolkit together with the pyLDAvis package to uncover topics in a set of documents. Namely, we will use GraphLab's topic model toolkit to analyze paragraphs in the Sherlock Holmes stories.
We will start by separating each story into paragraphs.
In [ ]:
import graphlab as gl
import re
sf = gl.load_sframe("%s/books.sframe" % BASE_DIR)
sf_paragraphs = sf.flat_map(['title', 'text'], lambda t: [[t['title'],p.strip()] for p in t['text'].split("\n\n")])
sf_paragraphs = sf_paragraphs.rename({'text': 'paragraph'})
Let's calculate the number of words in each paragraph, and filter out the paragraphs that have fewer than 25 words.
In [ ]:
re_words_split = re.compile("(\w+)")
sf_paragraphs['paragraph_words_number'] = sf_paragraphs['paragraph'].apply(lambda p: len(re_words_split.findall(p)) )
sf_paragraphs = sf_paragraphs[sf_paragraphs['paragraph_words_number'] >=25]
Using the stories' paragraphs as documents, we can utilize GraphLab's topic model toolkit to discover topics that appear in these paragraphs. We will create a topic model with 10 topics to learn.
Note: the topic model results may be different in each run.
In [ ]:
docs = gl.text_analytics.count_ngrams(sf_paragraphs['paragraph'], n=1)
stopwords = gl.text_analytics.stopwords()
# adding some additional stopwords to make the topic model more clear
stopwords |= set(['man', 'mr', 'sir', 'make', 'made', 'll', 'door', 'long', 'day', 'small'])
docs = docs.dict_trim_by_keys(stopwords, exclude=True)
docs = docs.dropna()
topic_model = gl.topic_model.create(docs, num_topics=10)
Let's view the most common words in each topic.
In [ ]:
topic_model.get_topics().print_rows(100)
topic_model.save("%s/topic_model" % BASE_DIR)
Reading the above table, we can understand some of the topics. However, it is still hard to get a good overall overview. Therefore, we will use the excellent pyLDAvis package, developed by Ben Mabey, to better visualize the various topics in the books.
In [ ]:
import pyLDAvis
import pyLDAvis.graphlab
pyLDAvis.enable_notebook()
pyLDAvis.graphlab.prepare(topic_model, docs)
Out[ ]:
From the above visualization, we can observe that the algorithm returned pretty interesting results. For example, one identified topic is related to Watson, locations (room, street, house, etc.), and time (days, hours, etc.), while another topic is related to Holmes, men, and murder. For me these are pretty interesting results. I recommend that readers investigate the results on their own. Moreover, I think that running the topic model algorithm on other text corpora can help to better understand this algorithm's advantages.
"I have notes of several similar cases, though none, as I remarked before, which were quite as prompt. My whole examination served to turn my conjecture into a certainty. Circumstantial evidence is occasionally very convincing, as when you find a trout in the milk, to quote Thoreau's example." -The Adventure of the Noble Bachelor
These days, no NLP-related post can be complete without including the words "deep learning." Therefore, in this section I will demonstrate how to use the Word2Vec deep-learning-inspired algorithm to search for paragraphs that have similar text or writing style.
First, let's build a Word2Vec model using Sherlock's stories. We will construct the Word2Vec model using the Gensim package and a method similar to the one presented in the Word2vec Tutorial and in my previous post.
In [ ]:
import graphlab as gl
import urllib2
import gensim
import nltk
import re
txt = urllib2.urlopen("https://sherlock-holm.es/stories/plain-text/cnus.txt").read()
re_words_split = re.compile("(\w+)")
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
def txt2words(s):
s = re.sub("[^a-zA-Z]", " ", s).lower()
return re_words_split.findall(s)
class MySentences(object):
    def __init__(self, txt):
        self._txt = txt.decode("utf8")

    def __iter__(self):
        """
        Split the English text into sentences and then into words using NLTK
        :return: generator that yields the words of a single sentence at a time from the original input text.
        :rtype: generator of list of str
        """
        # split the text into sentences using the NLTK package
        for s in tokenizer.tokenize(self._txt):
            yield txt2words(s)
sentences = MySentences(txt)
model = gensim.models.Word2Vec(sentences, size=100, window=5, min_count=3, workers=4)
We now have a trained Word2Vec model; let's see if it gives reasonable results:
In [ ]:
print model.most_similar("watson")
print model.most_similar("holmes")
The most similar word to Watson is Mortimer, and the most similar word to Holmes is Lestrade. These results sound logical enough. Next, let us calculate the average word vector of each paragraph.
In [ ]:
import graphlab as gl
import re
import numpy as np
BASE_DIR = r"/Users/pablo/fromHomeMac/sherlock/data" # NOTE: Update BASE_DIR to your own directory path
sf = gl.load_sframe("%s/books.sframe" % BASE_DIR)
sf_paragraphs = sf.flat_map(['title', 'text'], lambda t: [[t['title'],p.strip()] for p in t['text'].split("\n\n")])
sf_paragraphs = sf_paragraphs.rename({'text': 'paragraph'})
sf_paragraphs['paragraph_words_number'] = sf_paragraphs['paragraph'].apply(lambda p: len(re_words_split.findall(p)) )
sf_paragraphs = sf_paragraphs[sf_paragraphs['paragraph_words_number'] >=25]
def txt2avg_vector(txt, w2v_model):
words = [w for w in txt2words(txt.lower()) if w in w2v_model]
v = np.mean([w2v_model[w] for w in words],axis=0)
return v
sf_paragraphs['mean_vector'] = sf_paragraphs['paragraph'].apply(lambda p: txt2avg_vector(p, model))
Now we have the mean vector value of each paragraph. Let's utilize GraphLab Create's nearest neighbors toolkit to identify paragraphs that have similar text or writing style. We will achieve that by calculating the nearest neighbors of each paragraph's mean vector.
In [ ]:
#constructing the nearest neighbors model
nn_model = gl.nearest_neighbors.create(sf_paragraphs, features=['mean_vector'])
#calculating the two nearest neighbors of each paragraph among all the paragraphs
r = nn_model.query(sf_paragraphs, k=2)
r.head(10)
Out[ ]:
Of course, the nearest neighbor of each paragraph is the paragraph itself. Therefore, let us filter out paragraph pairs with a distance of zero. Additionally, let's look only at paragraph pairs that have a small distance from each other (distance < 0.08).
In [ ]:
#filter out paragraphs that are exactly the same
r = r[r['distance'] != 0]
#keep only paragraph pairs with distance < 0.08
r = r[r['distance'] < 0.08]
r
Out[ ]:
Now, let's use join to match each query_label and reference_label value with its actual paragraph.
In [ ]:
sf_paragraphs = sf_paragraphs.add_row_number('query_label')
sf_paragraphs = sf_paragraphs.add_row_number('reference_label')
sf_similar = r.join(sf_paragraphs, on="query_label").join(sf_paragraphs, on="reference_label")
In [ ]:
sf_similar[['paragraph','title', 'title.1', 'paragraph.1', 'distance']]
Out[ ]:
Let's look at some of the similar paragraphs.
In [ ]:
print sf_similar[1]['paragraph']
print "-"*100
print sf_similar[1]['paragraph.1']
Although the text in the matched paragraphs is completely different, they still share several similar motifs. In the first paragraph, a dog is leading its master, while in the second paragraph, the boots take the dog's place. In both paragraphs, the author uses somewhat similar imagery: "dreadful sight to see that huge black creature" and "saw something that made me feel sickish." I personally find these results quite interesting.
"Thank you," said Holmes, "I only wished to ask you how you would go from here to the Strand." -The Red-Headed League
In this notebook, we presented a short and practical tutorial for NLP, which covered several common NLP topics, such as NER, topic models, and Word2Vec. If you want to continue to explore this dataset yourself, there is a lot more that can be done. You can rerun this code using different texts (Harry Potter, Lord of the Rings, etc.). In addition, you can try to modify the above code to create social networks between persons and locations, or to use GloVe instead of Word2Vec. Furthermore, you can also try to run other graph theory algorithms, such as community detection algorithms, on the constructed social networks to uncover additional interesting insights. We hope that the methods and code presented in this notebook can assist you in solving other text analysis tasks.
Further reading material: