In [2]:
# Import all of the things you need to import!
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
import re
from nltk.stem.porter import PorterStemmer
from sklearn.cluster import KMeans

pd.options.display.max_columns = 30
%matplotlib inline

Homework 14 (or so): TF-IDF text analysis and clustering

Hooray, we kind of figured out how text analysis works! Some of it is still magic, but at least the TF and IDF parts make a little sense. Kind of. Somewhat.

No, just kidding, we're professionals now.

Investigating the Congressional Record

The Congressional Record is more or less what happened in Congress every single day. Speeches and all that. A good large source of text data, maybe?

Let's pretend it's totally secret but we just got it leaked to us in a data dump, and we need to check it out. It was leaked from this page here.


In [3]:
# If you'd like to download it through the command line...
!curl -O http://www.cs.cornell.edu/home/llee/data/convote/convote_v1.1.tar.gz


  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 9607k  100 9607k    0     0  7175k      0  0:00:01  0:00:01 --:--:-- 7180k

In [4]:
# And then extract it through the command line...
!tar -zxf convote_v1.1.tar.gz

You can explore the files if you'd like, but we're going to get the ones from convote_v1.1/data_stage_one/development_set/. It's a bunch of text files.


In [5]:
# glob finds files matching a certain filename pattern
import glob

# Give me all the text files
paths = glob.glob('convote_v1.1/data_stage_one/development_set/*')
paths[:5]


Out[5]:
['convote_v1.1/data_stage_one/development_set/052_400011_0327014_DON.txt',
 'convote_v1.1/data_stage_one/development_set/052_400011_0327025_DON.txt',
 'convote_v1.1/data_stage_one/development_set/052_400011_0327044_DON.txt',
 'convote_v1.1/data_stage_one/development_set/052_400011_0327046_DON.txt',
 'convote_v1.1/data_stage_one/development_set/052_400011_1479036_DON.txt']

In [6]:
len(paths)


Out[6]:
702

So great, we have 702 of them. Now let's import them.


In [7]:
speeches = []
for path in paths:
    with open(path) as speech_file:
        speech = {
            'pathname': path,
            'filename': path.split('/')[-1],
            'content': speech_file.read()
        }
    speeches.append(speech)
speeches_df = pd.DataFrame(speeches)
speeches_df.head()


Out[7]:
content filename pathname
0 mr. chairman , i thank the gentlewoman for yie... 052_400011_0327014_DON.txt convote_v1.1/data_stage_one/development_set/05...
1 mr. chairman , i want to thank my good friend ... 052_400011_0327025_DON.txt convote_v1.1/data_stage_one/development_set/05...
2 mr. chairman , i rise to make two fundamental ... 052_400011_0327044_DON.txt convote_v1.1/data_stage_one/development_set/05...
3 mr. chairman , reclaiming my time , let me mak... 052_400011_0327046_DON.txt convote_v1.1/data_stage_one/development_set/05...
4 mr. chairman , i thank my distinguished collea... 052_400011_1479036_DON.txt convote_v1.1/data_stage_one/development_set/05...

In class we had the texts variable. For the homework you can just do speeches_df['content'] to get the same sort of list of texts.

Take a look at the contents of the first 5 speeches


In [8]:
for item in speeches_df['content'][:5]:
    print(item[:140], "\n")


mr. chairman , i thank the gentlewoman for yielding me this time . 
my good colleague from california raised the exact and critical point .  

mr. chairman , i want to thank my good friend from california ( mr. rohrabacher ) xz4003430 . 
i will always remember that day , as we all w 

mr. chairman , i rise to make two fundamental points before we proceed to vote on this . 
the two points are these : this resolution does no 

mr. chairman , reclaiming my time , let me make two final points : one , the majority party must understand this : if you are at a republica 

mr. chairman , i thank my distinguished colleague , and i appreciate his leadership on this issue . 
the gentleman from california ( mr. roh 

Doing our analysis

Use the sklearn package and a plain boring CountVectorizer to get a list of all of the tokens used in the speeches. If it won't list them all, that's ok! Make a dataframe with those terms as columns.

Be sure to filter out English-language stopwords (stop_words='english')


In [9]:
count_vectorizer = CountVectorizer(stop_words='english')
X = count_vectorizer.fit_transform(speeches_df['content'])
X


Out[9]:
<702x9106 sparse matrix of type '<class 'numpy.int64'>'
	with 56106 stored elements in Compressed Sparse Row format>

In [10]:
X_df = pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names())

In [11]:
X_df.head(10)


Out[11]:
000 00007 018 050 092 10 100 106 107 108 108th 109th 10th 11 110 ... yields york yorkers young younger youngsters youth yuan zero zeroing zeros zigler zirkin zoe zoellick
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

10 rows × 9106 columns

Okay, it's far too big to even look at. Let's try to get a list of features from a new CountVectorizer that only takes the top 100 words.


In [12]:
count_vectorizer = CountVectorizer(stop_words='english', max_features=100)
X = count_vectorizer.fit_transform(speeches_df['content'])
X


Out[12]:
<702x100 sparse matrix of type '<class 'numpy.int64'>'
	with 11088 stored elements in Compressed Sparse Row format>

In [13]:
pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names()).head()


Out[13]:
000 11 act allow amendment america american amp association balance based believe bipartisan chairman children ... teachers thank think time today trade united urge vote want way work year years yield
0 0 1 3 0 0 0 3 0 0 0 0 1 0 3 0 ... 0 1 3 3 2 0 1 0 0 1 1 0 0 0 1
1 0 0 1 1 1 0 0 0 0 1 0 0 0 2 0 ... 0 1 0 2 2 0 0 0 1 1 3 0 1 0 0
2 0 0 0 0 0 0 1 0 0 0 0 0 0 2 0 ... 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1
3 0 0 0 0 0 1 0 0 0 1 0 0 0 2 0 ... 0 0 0 2 0 0 1 0 1 1 1 0 0 0 0
4 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 ... 0 1 0 1 0 0 0 0 2 0 0 0 0 0 2

5 rows × 100 columns

Now let's push all of that into a dataframe with nicely named columns.


In [14]:
X_df = pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names())

Everyone seems to start their speeches with "mr. chairman" - how many speeches are there in total, how many don't mention "chairman", and how many mention neither "mr" nor "chairman"?


In [15]:
no_chairman = X_df[X_df['chairman'] == 0]['chairman'].count()
no_chairman_no_mr = X_df[(X_df['chairman'] == 0) & (X_df['mr'] == 0)]['chairman'].count()
print("In a total of", len(X_df), "speeches,", no_chairman, "don't mention “chairman” and", no_chairman_no_mr, "mention neither “mr” nor “chairman”.")


In a total of 702 speeches, 250 don't mention “chairman” and 76 mention neither “mr” nor “chairman”.

What is the index of the speech that is the most thankful, a.k.a. includes the word 'thank' the most times?


In [16]:
print("The index of this speech is", X_df['thank'].idxmax())


The index of this speech is 577
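If you want to sanity-check that, you could peek at the opening of that speech (a quick sketch reusing the idxmax from above):

# Peek at the start of the most 'thank'-heavy speech
print(speeches_df['content'][X_df['thank'].idxmax()][:300])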

If I'm searching for China and trade, what are the top 3 speeches to read according to the CountVectorizer?


In [17]:
china_trade = X_df.sort_values(by=['china', 'trade'], ascending=[False, False])[['china', 'trade']].head(3)
print("These three speeches have the indexes ", *list(china_trade.index))
china_trade


These three speeches have the indexes  379 399 345
Out[17]:
china trade
379 29 63
399 27 9
345 16 11

Now what if I'm using a TfidfVectorizer?


In [18]:
def simple_tokenizer(str_input):
    # Replace everything except letters, digits and hyphens with a space,
    # then lowercase and split on whitespace
    words = re.sub(r"[^A-Za-z0-9\-]", " ", str_input).lower().split()
    return words

# use_idf=False with norm='l1' gives plain term frequencies:
# each count is divided by the total number of words in the speech
tfidf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=simple_tokenizer, use_idf=False, norm='l1')
X = tfidf_vectorizer.fit_transform(speeches_df['content'])
TF_pd = pd.DataFrame(X.toarray(), columns=tfidf_vectorizer.get_feature_names())

china_trade = TF_pd.sort_values(by=['china', 'trade'], ascending=[False, False])[['china', 'trade']].head(3)
print("The three top speeches have the indexes ", *list(china_trade.index))
china_trade


The three top speeches have the indexes  345 340 315
Out[18]:
china trade
345 0.078818 0.054187
340 0.057377 0.008197
315 0.055000 0.035000

The ranking changed because norm='l1' divides each speech's counts by its total word count, so long speeches no longer win just by being long. What's the content of the speeches? Here's a way to get them:


In [19]:
# index 0 is the first speech, which was the first one imported.
paths[0]


Out[19]:
'convote_v1.1/data_stage_one/development_set/052_400011_0327014_DON.txt'

In [31]:
# Curly braces { } let you put Python variables into shell commands,
# so you can pass the path straight to cat
!cat {paths[0]}


mr. chairman , i thank the gentlewoman for yielding me this time . 
my good colleague from california raised the exact and critical point . 
the question is , what happens during those 45 days ? 
we will need to support elections . 
there is not a single member of this house who has not supported some form of general election , a special election , to replace the members at some point . 
but during that 45 days , what happens ? 
the chair of the constitution subcommittee says this is what happens : martial law . 
we do not know who would fill the vacancy of the presidency , but we do know that the succession act most likely suggests it would be an unelected person . 
the sponsors of the bill before us today insist , and i think rightfully so , on the importance of elections . 
but to then say that during a 45-day period we would have none of the checks and balances so fundamental to our constitution , none of the separation of powers , and that the presidency would be filled by an unelected member of the cabinet who not a single member of this country , not a single citizen , voted to fill that position , and that that person would have no checks and balances from congress for a period of 45 days i find extraordinary . 
i find it inconsistent . 
i find it illogical , and , frankly , i find it dangerous . 
the gentleman from wisconsin refused earlier to yield time , but i was going to ask him , if virginia has those elections in a shorter time period , they should be commended for that . 
so now we have a situation in the congress where the virginia delegation has sent their members here , but many other states do not have members here . 
do they at that point elect a speaker of the house in the absence of other members ? 
and then three more states elect their representatives , temporary replacements , or full replacements at that point . 
they come in . 
do they elect a new speaker ? 
and if that happens , who becomes the president under the succession act ? 
this bill does not address that question . 
this bill responds to real threats with fantasies . 
it responds with the fantasy , first of all , that a lot of people will still survive ; but we have no guarantee of that . 
it responds with the fantasy that those who do survive will do the right thing . 
we are here having this debate , we have debates every day , because people differ on what the right thing is to do . 
i have been in very traumatic situations with people in severe car wrecks and mountain climbing accidents . 
my experience has not been that crisis imbues universal sagacity and fairness . 
it has not been that . 
people respond in extraordinary ways , and we must preserve an institution that has the deliberative body and the checks and balances to meet those challenges . 
many of our states are going increasingly to mail-in ballots . 
we in this body were effectively disabled by an anthrax attack not long after september 11 . 
i would ask my dear friends , will you conduct this election in 45 days if there is anthrax in the mail and still preserve the franchise of the american people ? 
how will you do that ? 
you have no answer to that question . 
i find it extraordinary , frankly , that while saying you do not want to amend the constitution , we began this very congress by amending the constitution through the rule , by undermining the principle that a quorum is 50 percent of the body and instead saying it is however many people survive . 
and if that rule applies , who will designate it , who will implement it ? 
the speaker , or the speaker 's designee ? 
again , not an elected person , as you say is so critical and i believe is critical , but a temporary appointee , frankly , who not a single other member of this body knows who they are . 
so we not only have an unelected person , we have an unknown person who will convene this body , and who , by the way , could conceivably convene it for their own election to then become the president of the united states under the succession act . 
you have refused steadfastly to debate this real issue broadly . 
you had a mock debate in the committee on the judiciary in which the distinguished chairman presented my bill without allowing me the courtesy or dignity to defend it myself . 
and on that , you proudly say you defend democracy . 
sir , i think you dissemble in that regard . 
here is the fundamental question for us , my friends , and it is this : the american people are watching television and an announcement comes on and says the congress has been destroyed in a nuclear attack , the president and vice president are killed and the supreme court is dead and thousands of our citizens in this town are . 
what happens next ? 
under your bill , 45 days of chaos . 
apparently , according to the committee on the judiciary subcommittee on the constitution chairman , 45 days of marshal law , rule of this country by an unelected president with no checks and balances . 
or an alternative , an alternative which says quite simply that the people have entrusted the representatives they send here to make profound decisions , war , taxation , a host of other things , and those representatives would have the power under the bill of the gentleman from california ( mr. rohrabacher ) xz4003430 bill or mine to designate temporary successors , temporary , only until we can have a real election . 
the american people , in one scenario , are told we do not know who is going to run the country , we have no representatives ; where in another you will have temporary representatives carrying your interests to this great body while we deliberate and have real elections . 
that is the choice . 
you are making the wrong choice today if you think you have solved this problem . 
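(A pure-Python alternative to shelling out, if you prefer:)

# Read and print the file without leaving Python
with open(paths[0]) as f:
    print(f.read())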

Now search for something else! Another two terms that might show up - elections and chaos? Whatever you think might be interesting.


In [35]:
# List vocabulary words that don't start with a digit
words_list = [word for word in TF_pd.columns if not word[0].isdigit()]
print(*words_list[5:100], sep='|') # to get some ideas


aaron|aba|abandon|abandoned|abandoning|abcs|abet|abhorrent|abide|abides|abiding|abilities|ability|able|ably|abolish|abraham|abridgement|abroad|abrogation|absence|absent|absentee|absolutely|absolve|absorb|absurd|abundance|abundant|abuse|abused|abuses|abusing|abusive|abysmal|academic|academically|academics|academy|accede|accelerated|accept|acceptable|acceptance|accepted|accepting|accepts|access|accessible|accessing|accession|accessories|accident|accidents|acclaimed|accommodate|accommodated|accommodating|accompanies|accompanying|accomplish|accomplished|accomplishes|accomplishment|accordance|according|accordingly|account|accountability|accountable|accountant|accounting|accounts|accumulated|accumulation|accurate|accurately|accusations|accused|accustom|achieve|achieved|achievement|achievements|achieving|acknowledge|acknowledged|acknowledges|aclu|acquainted|acquire|acquired|acquisition|acquisitions|acre

In [22]:
chaos = TF_pd.sort_values(by=['awfully', 'bacterial'], ascending=[False, False])[['awfully', 'bacterial']].head(3)
print("The three top speeches have the indexes ", *list(chaos.index))
chaos


The three top speeches have the indexes  392 204 0
Out[22]:
awfully bacterial
392 0.004132 0.000000
204 0.000000 0.009174
0 0.000000 0.000000

In [23]:
gun_bomb = TF_pd.sort_values(by=['gun', 'bomb'], ascending=[False, False])[['gun', 'bomb']].head(3)
print("The three top speeches have the indexes ", *list(gun_bomb.index))
gun_bomb


The three top speeches have the indexes  644 661 133
Out[23]:
gun bomb
644 0.001876 0.000
661 0.000553 0.000
133 0.000000 0.004

Enough of this garbage, let's cluster

Using a simple counting vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.

Using a term frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.

Using a term frequency inverse document frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.

CountVectorizer(): converts a collection of text documents to a matrix of raw token counts.

TfidfVectorizer(use_idf=False): term frequencies only - token counts normalized by document length, with no inverse-document-frequency reweighting.

TfidfVectorizer(use_idf=True) (the default): term frequencies with inverse-document-frequency reweighting, which downweights words that appear in many documents.
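To see the difference on something tiny, here's a minimal sketch on a made-up two-document corpus (using the imports from the top of the notebook): raw counts treat every occurrence equally, plain term frequency divides by document length, and TF-IDF additionally downweights words like "the" that show up in every document.

# A made-up two-document corpus to compare the three vectorizers
docs = ["the cat sat on the mat", "the cat ate the canary"]

for name, vec in [('raw counts', CountVectorizer()),
                  ('term frequency', TfidfVectorizer(use_idf=False, norm='l1')),
                  ('tf-idf', TfidfVectorizer(use_idf=True))]:
    M = vec.fit_transform(docs)
    print(name)
    # get_feature_names() is get_feature_names_out() in newer scikit-learn
    print(pd.DataFrame(M.toarray(), columns=vec.get_feature_names()))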


In [24]:
countingVectorizer = CountVectorizer(tokenizer=simple_tokenizer, stop_words='english')
TF_Vectorizer = TfidfVectorizer(use_idf=False, tokenizer=simple_tokenizer, stop_words='english')
TF_IDF_Vectorizer = TfidfVectorizer(use_idf=True, tokenizer=simple_tokenizer, stop_words='english')
Vectorizer_list = [countingVectorizer, TF_Vectorizer, TF_IDF_Vectorizer]
Vectorizer_names = ['simple counting vectorizer', 'term frequency vectorizer', 'term frequency IDF vectorizer']

In [25]:
for count, (name, vectorizer) in enumerate(zip(Vectorizer_names, Vectorizer_list), start=1):
    print("\n[" + str(count) + "]", name)

    X = vectorizer.fit_transform(speeches_df['content'])
    number_of_clusters = 8
    km = KMeans(n_clusters=number_of_clusters)
    km.fit(X)

    # Sort each cluster center's term weights from largest to smallest
    order_centroids = km.cluster_centers_.argsort()[:, ::-1]
    terms = vectorizer.get_feature_names()
    for i in range(number_of_clusters):
        top_ten_words = [terms[ind] for ind in order_centroids[i, :10]]
        print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))


[1] simple counting vectorizer
Cluster 0: mr s time house trade people amendment chairman states china
Cluster 1: head start religious rights civil program discrimination protections amendment programs
Cluster 2: nbsp amp gt p lt trade -- s united states
Cluster 3: mr chairman gentleman time amendment yield speaker s committee head
Cluster 4: rule 11 rules federal h r 420 sanctions judicial litigation
Cluster 5: association national restaurant contractors chamber amp electrical commerce chapter american
Cluster 6: start head children program amendment mr programs school s chairman
Cluster 7: church s financial embezzlement -- churches says checks 000 funds

[2] term frequency vectorizer
Cluster 0: start head children program amendment mr chairman programs s religious
Cluster 1: mr chairman amendment gentleman time s house people committee congress
Cluster 2: chairman mr time balance yield amendment reserve vote gentleman demand
Cluster 3: china trade s speaker mr legislation american vote u time
Cluster 4: speaker mr time yield gentleman balance committee vote reserve demand
Cluster 5: yield gentleman texas wisconsin illinois gentlewoman ohio michigan california north
Cluster 6: frivolous lawsuits rule state legislation mr court s federal 11
Cluster 7: mr yield gentleman chairman minutes 2 speaker 1 member minute

[3] term frequency IDF vectorizer
Cluster 0: election elections house days time people special states amendment mr
Cluster 1: yield gentleman mr chairman texas illinois wisconsin speaker michigan ohio
Cluster 2: demand recorded vote mr speaker chairman yeas nays pending quorum
Cluster 3: china trade s currency speaker cafta chinese u wto jobs
Cluster 4: mr yield minutes gentleman chairman 2 speaker 1 gentlewoman minute
Cluster 5: mr amendment chairman time speaker gentleman s offer committee ask
Cluster 6: start head children program amendment religious programs school discrimination services
Cluster 7: balance time chairman mr yield reserve speaker continue reclaiming consume

Which one do you think works the best?

The TF-IDF vectorizer works best here: its clusters map most cleanly onto recognizable topics (elections, China trade, Head Start), while the other two are dominated by procedural filler like "mr", "chairman" and "yield".
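That's an eyeball judgment, though. If you wanted a number to back it up, one option (an extra step, not part of the original homework) is scikit-learn's silhouette score, which measures how well-separated the clusters are:

# A rough quantitative check: silhouette score (higher = better-separated clusters)
from sklearn.metrics import silhouette_score

for name, vectorizer in zip(Vectorizer_names, Vectorizer_list):
    X = vectorizer.fit_transform(speeches_df['content'])
    labels = KMeans(n_clusters=8).fit_predict(X)
    print(name, silhouette_score(X, labels))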

Harry Potter time

I have a scraped collection of Harry Potter fanfiction at https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip.

I want you to read them in, vectorize them and cluster them. Use this process to find out the two types of Harry Potter fanfiction. What is your hypothesis?

Download it with curl -LO (the -L makes curl follow GitHub's redirect):


In [26]:
!curl -LO https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip


  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   149  100   149    0     0    119      0  0:00:01  0:00:01 --:--:--   119
100 9226k  100 9226k    0     0  1809k      0  0:00:05  0:00:05 --:--:-- 3014k

In [27]:
# Extract the zip (commented out after the first run)
#!unzip hp.zip
paths_potter = glob.glob('hp/*')
paths_potter[:3]


Out[27]:
['hp/10001898.txt', 'hp/10004131.txt', 'hp/10004927.txt']

In [28]:
potter_texts = []
for path in paths_potter:
    with open(path) as text_file:
        text = {
            'pathname': path,
            'filename': path.split('/')[-1],
            'content': text_file.read()
        }
    potter_texts.append(text)
potter_df = pd.DataFrame(potter_texts)
potter_df.head(2)


Out[28]:
content filename pathname
0 Prologue: The MissionDisclaimer: All character... 10001898.txt hp/10001898.txt
1 BlackDisclaimer: I do not own Harry PotterAuth... 10004131.txt hp/10004131.txt

In [29]:
#1
vectorizer = TfidfVectorizer(use_idf=True, tokenizer=simple_tokenizer, stop_words='english')
X = vectorizer.fit_transform(potter_df['content'])

#2
number_of_clusters = 2
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)

#3
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(number_of_clusters):
    top_five_words = [terms[ind] for ind in order_centroids[i, :5]]
    print("Cluster {}: {}".format(i, ' '.join(top_five_words)))

#4
results = pd.DataFrame()
results['text'] = potter_df['content']
results['category'] = km.labels_
results.head(10)


Cluster 0: t s lily james sirius
Cluster 1: harry hermione t s draco
Out[29]:
text category
0 Prologue: The MissionDisclaimer: All character... 0
1 BlackDisclaimer: I do not own Harry PotterAuth... 0
2 Chapter 1"I'm pregnant.""""Mum please say some... 1
3 Author's Note: Hey, just so you know, this is ... 0
4 Disclaimer: I do not own Harry Potter and frie... 0
5 Disclaimer: I don't own any character in the H... 0
6 DISCLAIMER: I don't own Harry Potter and its c... 1
7 Katherine Rose-TylerChapter One: the Introduct... 0
8 I am no longer that shy little boy anymore.I w... 1
9 Happy New year! *throws confetti*I've really b... 0

The two types of Harry Potter fanfiction

  • stories about Harry's parents' generation (lily, james, sirius)
  • stories about Harry's own generation (harry, hermione, draco)
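One quick way to probe that hypothesis is to check how big each cluster is and skim a few stories from each (a sketch using the results dataframe built above):

# Cluster sizes
print(results['category'].value_counts())

# Skim the opening of three stories per cluster
for label in range(number_of_clusters):
    print("--- cluster", label, "---")
    for text in results[results['category'] == label]['text'].head(3):
        print(text[:100])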