Examples of text analysis


In [1]:
import nltk
# If tokenizing or tagging below raises a LookupError, run the one-time
# downloads first: nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')

In [2]:
abstract = """
It's morning, you settle in, check your dashboards and it looks like there is an increase of load coming through on some of your web server logs. What happened? You're about to deploy code that will hopefully fix some issues; how will you know that things worked well? The design team is thinking about changing some of the site icons; do your users like seeing big icons or small icons on your site? These are all scenarios that are all too common and the one thing that helps you answer these is your data.

Pushing data is typically easy. If you're tracking tracking events on a website, you'll probably want to know a lot about click tracking, URL referrals, and user sessions. If you're curious about the number of downloads your users go through per day, you'll probably have some data that you can aggregate a sum. Your data can be small or large or anything in between, but making it available is the most important piece that you'll need to have.

Pulling data can be a bit more complex. Do you have a small amount of data that you're just pulling from a relational database? Or are you processing data through Hadoop or Spark? Data is what you want; how you pull it is dependent on your architecture needs.

Presenting data is a simple task, but are you presenting the correct story? Whether you are presenting your web traffic or your user behavior data, you'll want to present your data that tells the story you want to tell in the best way.

Push data, pull data, present data; these are your main tasks in your typical cycle of product development and analysis. We built out a fairly quick data pipeline using Airflow, a workflow framework made by Airbnb. We push a lot of data so we can make good data-driven business decisions. Pulling data and presenting them have gone hand-in-hand for us. We have utilized Google's BigQuery in order for us to have a fast, columnar data store in order for us to build out dashboards to visualize our data. This will shed light into what a typical push-pull-present cycle looks like and will be exemplified with real-world examples."""

First we tokenize the text, then we tag the parts of speech


In [3]:
tokens = nltk.word_tokenize(abstract)

In [4]:
tokens[:10]


Out[4]:
['It', "'s", 'morning', ',', 'you', 'settle', 'in', ',', 'check', 'your']

In [5]:
tagged = nltk.pos_tag(tokens)

In [6]:
tagged[:10]


Out[6]:
[('It', 'PRP'),
 ("'s", 'VBZ'),
 ('morning', 'NN'),
 (',', ','),
 ('you', 'PRP'),
 ('settle', 'VBP'),
 ('in', 'IN'),
 (',', ','),
 ('check', 'VB'),
 ('your', 'PRP$')]

Let's get a frequency distribution of the part-of-speech tags


In [7]:
tag_fd = nltk.FreqDist(tag for (word, tag) in tagged)
tag_fd.most_common()


Out[7]:
[('NN', 51),
 ('IN', 44),
 ('NNS', 35),
 ('DT', 31),
 ('PRP', 31),
 ('JJ', 26),
 ('VB', 24),
 ('.', 21),
 ('VBP', 19),
 ('VBG', 17),
 ('PRP$', 14),
 ('CC', 14),
 ('VBZ', 13),
 (',', 13),
 ('MD', 12),
 ('NNP', 9),
 ('TO', 8),
 ('RB', 7),
 ('VBN', 5),
 ('WDT', 4),
 (':', 4),
 ('WP', 3),
 ('VBD', 2),
 ('RP', 2),
 ('WRB', 2),
 ('POS', 1),
 ('RBS', 1),
 ('RBR', 1),
 ('CD', 1),
 ('EX', 1),
 ('JJS', 1),
 ('JJR', 1)]
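The raw counts are easier to compare as shares of the total. A minimal sketch using plain `collections.Counter` (of which NLTK's `FreqDist` is a subclass, so the same arithmetic applies to `tag_fd`), seeded with the top three counts from the output above:

```python
from collections import Counter

# The counts below are the top three tags from the distribution above;
# in the notebook, tag_fd plays the role of this Counter.
tag_counts = Counter({'NN': 51, 'IN': 44, 'NNS': 35})
total = sum(tag_counts.values())  # 130 for this three-tag sample
for tag, count in tag_counts.most_common():
    print('{}: {:.1%}'.format(tag, count / total))
```

On the full distribution you could use `tag_fd.N()` instead of summing by hand.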

Let's look at the full Penn Treebank part-of-speech tagset for reference


In [8]:
nltk.help.upenn_tagset()


$: dollar
    $ -$ --$ A$ C$ HK$ M$ NZ$ S$ U.S.$ US$
'': closing quotation mark
    ' ''
(: opening parenthesis
    ( [ {
): closing parenthesis
    ) ] }
,: comma
    ,
--: dash
    --
.: sentence terminator
    . ! ?
:: colon or ellipsis
    : ; ...
CC: conjunction, coordinating
    & 'n and both but either et for less minus neither nor or plus so
    therefore times v. versus vs. whether yet
CD: numeral, cardinal
    mid-1890 nine-thirty forty-two one-tenth ten million 0.5 one forty-
    seven 1987 twenty '79 zero two 78-degrees eighty-four IX '60s .025
    fifteen 271,124 dozen quintillion DM2,000 ...
DT: determiner
    all an another any both del each either every half la many much nary
    neither no some such that the them these this those
EX: existential there
    there
FW: foreign word
    gemeinschaft hund ich jeux habeas Haementeria Herr K'ang-si vous
    lutihaw alai je jour objets salutaris fille quibusdam pas trop Monte
    terram fiche oui corporis ...
IN: preposition or conjunction, subordinating
    astride among uppon whether out inside pro despite on by throughout
    below within for towards near behind atop around if like until below
    next into if beside ...
JJ: adjective or numeral, ordinal
    third ill-mannered pre-war regrettable oiled calamitous first separable
    ectoplasmic battery-powered participatory fourth still-to-be-named
    multilingual multi-disciplinary ...
JJR: adjective, comparative
    bleaker braver breezier briefer brighter brisker broader bumper busier
    calmer cheaper choosier cleaner clearer closer colder commoner costlier
    cozier creamier crunchier cuter ...
JJS: adjective, superlative
    calmest cheapest choicest classiest cleanest clearest closest commonest
    corniest costliest crassest creepiest crudest cutest darkest deadliest
    dearest deepest densest dinkiest ...
LS: list item marker
    A A. B B. C C. D E F First G H I J K One SP-44001 SP-44002 SP-44005
    SP-44007 Second Third Three Two * a b c d first five four one six three
    two
MD: modal auxiliary
    can cannot could couldn't dare may might must need ought shall should
    shouldn't will would
NN: noun, common, singular or mass
    common-carrier cabbage knuckle-duster Casino afghan shed thermostat
    investment slide humour falloff slick wind hyena override subhumanity
    machinist ...
NNP: noun, proper, singular
    Motown Venneboerger Czestochwa Ranzer Conchita Trumplane Christos
    Oceanside Escobar Kreisler Sawyer Cougar Yvette Ervin ODI Darryl CTCA
    Shannon A.K.C. Meltex Liverpool ...
NNPS: noun, proper, plural
    Americans Americas Amharas Amityvilles Amusements Anarcho-Syndicalists
    Andalusians Andes Andruses Angels Animals Anthony Antilles Antiques
    Apache Apaches Apocrypha ...
NNS: noun, common, plural
    undergraduates scotches bric-a-brac products bodyguards facets coasts
    divestitures storehouses designs clubs fragrances averages
    subjectivists apprehensions muses factory-jobs ...
PDT: pre-determiner
    all both half many quite such sure this
POS: genitive marker
    ' 's
PRP: pronoun, personal
    hers herself him himself hisself it itself me myself one oneself ours
    ourselves ownself self she thee theirs them themselves they thou thy us
PRP$: pronoun, possessive
    her his mine my our ours their thy your
RB: adverb
    occasionally unabatingly maddeningly adventurously professedly
    stirringly prominently technologically magisterially predominately
    swiftly fiscally pitilessly ...
RBR: adverb, comparative
    further gloomier grander graver greater grimmer harder harsher
    healthier heavier higher however larger later leaner lengthier less-
    perfectly lesser lonelier longer louder lower more ...
RBS: adverb, superlative
    best biggest bluntest earliest farthest first furthest hardest
    heartiest highest largest least less most nearest second tightest worst
RP: particle
    aboard about across along apart around aside at away back before behind
    by crop down ever fast for forth from go high i.e. in into just later
    low more off on open out over per pie raising start teeth that through
    under unto up up-pp upon whole with you
SYM: symbol
    % & ' '' ''. ) ). * + ,. < = > @ A[fj] U.S U.S.S.R * ** ***
TO: "to" as preposition or infinitive marker
    to
UH: interjection
    Goodbye Goody Gosh Wow Jeepers Jee-sus Hubba Hey Kee-reist Oops amen
    huh howdy uh dammit whammo shucks heck anyways whodunnit honey golly
    man baby diddle hush sonuvabitch ...
VB: verb, base form
    ask assemble assess assign assume atone attention avoid bake balkanize
    bank begin behold believe bend benefit bevel beware bless boil bomb
    boost brace break bring broil brush build ...
VBD: verb, past tense
    dipped pleaded swiped regummed soaked tidied convened halted registered
    cushioned exacted snubbed strode aimed adopted belied figgered
    speculated wore appreciated contemplated ...
VBG: verb, present participle or gerund
    telegraphing stirring focusing angering judging stalling lactating
    hankerin' alleging veering capping approaching traveling besieging
    encrypting interrupting erasing wincing ...
VBN: verb, past participle
    multihulled dilapidated aerosolized chaired languished panelized used
    experimented flourished imitated reunifed factored condensed sheared
    unsettled primed dubbed desired ...
VBP: verb, present tense, not 3rd person singular
    predominate wrap resort sue twist spill cure lengthen brush terminate
    appear tend stray glisten obtain comprise detest tease attract
    emphasize mold postpone sever return wag ...
VBZ: verb, present tense, 3rd person singular
    bases reconstructs marks mixes displeases seals carps weaves snatches
    slumps stretches authorizes smolders pictures emerges stockpiles
    seduces fizzes uses bolsters slaps speaks pleads ...
WDT: WH-determiner
    that what whatever which whichever
WP: WH-pronoun
    that what whatever whatsoever which who whom whosoever
WP$: WH-pronoun, possessive
    whose
WRB: Wh-adverb
    how however whence whenever where whereby whereever wherein whereof why
``: opening quotation mark
    ` ``

Now, let's display some graphs just to visualize this frequency distribution


In [9]:
%matplotlib inline
import matplotlib.pyplot as plt

In [10]:
most_common_pos = tag_fd.most_common()
plt.figure(figsize=(15, 10))
plt.bar(range(len(most_common_pos)),
        [count for (pos, count) in most_common_pos],
        tick_label=['' for (pos, count) in most_common_pos])
plt.show()



In [11]:
most_common_pos = tag_fd.most_common()
plt.figure(figsize=(15, 10))
plt.bar(range(len(most_common_pos)),
        [count for (pos, count) in most_common_pos],
        tick_label=[pos for (pos, count) in most_common_pos])
plt.xticks(rotation=45)
plt.show()



In [12]:
most_common_pos = tag_fd.most_common()
plt.figure(figsize=(10, 10))
plt.pie([count for (pos, count) in most_common_pos], shadow=True)
plt.show()



In [13]:
most_common_pos = tag_fd.most_common()
plt.figure(figsize=(10, 10))
plt.pie([count for (pos, count) in most_common_pos],
        labels=[pos for (pos, count) in most_common_pos],
        autopct='%.2f%%',
        shadow=True)
plt.show()


Now, let's look at the most common words for the top part-of-speech tags: NN, IN, NNS, and the verbs


In [14]:
from collections import Counter

NN_tags = Counter([word.lower() for (word, pos) in tagged if pos=='NN'])
NN_tags.most_common()


Out[14]:
[('web', 2),
 ('story', 2),
 ('lot', 2),
 ('site', 2),
 ('data', 2),
 ('cycle', 2),
 ('order', 2),
 ('load', 1),
 ('development', 1),
 ('code', 1),
 ('number', 1),
 ('hand-in-hand', 1),
 ('design', 1),
 ('morning', 1),
 ('click', 1),
 ('sum', 1),
 ('fast', 1),
 ('increase', 1),
 ('columnar', 1),
 ('store', 1),
 ('website', 1),
 ('product', 1),
 ('way', 1),
 ('business', 1),
 ('workflow', 1),
 ('framework', 1),
 ('task', 1),
 ('traffic', 1),
 ('bit', 1),
 ('day', 1),
 ('pull', 1),
 ('pipeline', 1),
 ('tracking', 1),
 ('anything', 1),
 ('database', 1),
 ('light', 1),
 ('analysis', 1),
 ('server', 1),
 ('thing', 1),
 ('amount', 1),
 ('architecture', 1),
 ('behavior', 1),
 ('team', 1),
 ('piece', 1)]
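The separate per-tag `Counter` passes in these cells can also be done in a single pass over the tagged tokens, grouping word counts under each tag. A minimal self-contained sketch (the small `tagged_sample` list here stands in for the notebook's `tagged`):

```python
from collections import Counter, defaultdict

# Sample (word, tag) pairs in the same shape as `tagged` above
tagged_sample = [('data', 'NNS'), ('data', 'NNS'), ('web', 'NN'),
                 ('store', 'NN'), ('icons', 'NNS')]

# Group lowercased word counts under each tag in one pass
by_tag = defaultdict(Counter)
for word, tag in tagged_sample:
    by_tag[tag][word.lower()] += 1

print(by_tag['NNS'].most_common())  # [('data', 2), ('icons', 1)]
```

With the real `tagged` list, `by_tag['NN']`, `by_tag['IN']`, and so on would reproduce the per-tag Counters built cell by cell here.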

In [15]:
IN_tags = Counter([word.lower() for (word, pos) in tagged if pos=='IN'])
IN_tags.most_common()


Out[15]:
[('of', 7),
 ('in', 6),
 ('on', 4),
 ('about', 4),
 ('that', 4),
 ('like', 3),
 ('for', 3),
 ('through', 3),
 ('if', 2),
 ('from', 1),
 ('whether', 1),
 ('into', 1),
 ('between', 1),
 ('per', 1),
 ('so', 1),
 ('with', 1),
 ('by', 1)]

In [16]:
NNS_tags = Counter([word.lower() for (word, pos) in tagged if pos=='NNS'])
NNS_tags.most_common()


Out[16]:
[('data', 16),
 ('icons', 3),
 ('users', 2),
 ('dashboards', 2),
 ('needs', 1),
 ('referrals', 1),
 ('things', 1),
 ('downloads', 1),
 ('sessions', 1),
 ('decisions', 1),
 ('scenarios', 1),
 ('tasks', 1),
 ('examples', 1),
 ('events', 1),
 ('issues', 1),
 ('logs', 1)]

In [17]:
verb_tags = Counter([word.lower() for (word, pos) in tagged if pos in {'VB', 'VBG'}])
verb_tags.most_common()


Out[17]:
[('have', 4),
 ('presenting', 4),
 ('be', 3),
 ('pulling', 3),
 ('want', 2),
 ('know', 2),
 ('tracking', 2),
 ('seeing', 1),
 ('shed', 1),
 ('using', 1),
 ('need', 1),
 ('check', 1),
 ('visualize', 1),
 ('thinking', 1),
 ('fix', 1),
 ('build', 1),
 ('hopefully', 1),
 ('tell', 1),
 ('do', 1),
 ('processing', 1),
 ('deploy', 1),
 ('pushing', 1),
 ('coming', 1),
 ('aggregate', 1),
 ('changing', 1),
 ('present', 1),
 ('making', 1),
 ('make', 1)]

Finally, we'll take the Cartesian product of the five most common words from the verb (VB/VBG) and NNS tags


In [21]:
from itertools import product

['{} {}'.format(verb[0], noun[0]) for (verb, noun) in product(verb_tags.most_common()[:5], NNS_tags.most_common()[:5])]


Out[21]:
['have data',
 'have icons',
 'have users',
 'have dashboards',
 'have needs',
 'presenting data',
 'presenting icons',
 'presenting users',
 'presenting dashboards',
 'presenting needs',
 'be data',
 'be icons',
 'be users',
 'be dashboards',
 'be needs',
 'pulling data',
 'pulling icons',
 'pulling users',
 'pulling dashboards',
 'pulling needs',
 'want data',
 'want icons',
 'want users',
 'want dashboards',
 'want needs']
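As a sanity check, `itertools.product` over two 5-element lists yields 5 × 5 = 25 phrases. A minimal sketch using the actual top-5 verbs and plural nouns from the outputs above:

```python
from itertools import product

# Top-5 lists taken from the verb_tags and NNS_tags outputs above
verbs = ['have', 'presenting', 'be', 'pulling', 'want']
nouns = ['data', 'icons', 'users', 'dashboards', 'needs']

pairs = ['{} {}'.format(v, n) for v, n in product(verbs, nouns)]
print(len(pairs))  # 25
print(pairs[0])    # 'have data'
```

Most of the 25 combinations are noise ("be icons"), but a few ("have data", "pulling data", "presenting data") recover the abstract's core phrases.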