In [2]:
# Import all of the things you need to import!
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
import re
from nltk.stem.porter import PorterStemmer

pd.options.display.max_columns = 30
%matplotlib inline

Homework 14 (or so): TF-IDF text analysis and clustering

Hooray, we kind of figured out how text analysis works! Some of it is still magic, but at least the TF and IDF parts make a little sense. Kind of. Somewhat.

No, just kidding, we're professionals now.

Investigating the Congressional Record

The Congressional Record is more or less a record of what happened in Congress every single day. Speeches and all that. A good, large source of text data, maybe?

Let's pretend it's totally secret but we just got it leaked to us in a data dump, and we need to check it out. It was leaked from this page here.


In [3]:
# If you'd like to download it through the command line...
!curl -O http://www.cs.cornell.edu/home/llee/data/convote/convote_v1.1.tar.gz


  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 9607k  100 9607k    0     0  2273k      0  0:00:04  0:00:04 --:--:-- 2274k

In [4]:
# And then extract it through the command line...
!tar -zxf convote_v1.1.tar.gz

You can explore the files if you'd like, but we're going to get the ones from convote_v1.1/data_stage_one/development_set/. It's a bunch of text files.


In [5]:
# glob finds files matching a certain filename pattern
import glob

# Give me all the text files
paths = glob.glob('convote_v1.1/data_stage_one/development_set/*')
paths[:5]


Out[5]:
['convote_v1.1/data_stage_one/development_set/052_400011_0327014_DON.txt',
 'convote_v1.1/data_stage_one/development_set/052_400011_0327025_DON.txt',
 'convote_v1.1/data_stage_one/development_set/052_400011_0327044_DON.txt',
 'convote_v1.1/data_stage_one/development_set/052_400011_0327046_DON.txt',
 'convote_v1.1/data_stage_one/development_set/052_400011_1479036_DON.txt']

In [6]:
len(paths)


Out[6]:
702

So great, we have 702 of them. Now let's import them.


In [7]:
speeches = []
for path in paths:
    with open(path) as speech_file:
        speech = {
            'pathname': path,
            'filename': path.split('/')[-1],
            'content': speech_file.read()
        }
    speeches.append(speech)
speeches_df = pd.DataFrame(speeches)
speeches_df.head()


Out[7]:
content filename pathname
0 mr. chairman , i thank the gentlewoman for yie... 052_400011_0327014_DON.txt convote_v1.1/data_stage_one/development_set/05...
1 mr. chairman , i want to thank my good friend ... 052_400011_0327025_DON.txt convote_v1.1/data_stage_one/development_set/05...
2 mr. chairman , i rise to make two fundamental ... 052_400011_0327044_DON.txt convote_v1.1/data_stage_one/development_set/05...
3 mr. chairman , reclaiming my time , let me mak... 052_400011_0327046_DON.txt convote_v1.1/data_stage_one/development_set/05...
4 mr. chairman , i thank my distinguished collea... 052_400011_1479036_DON.txt convote_v1.1/data_stage_one/development_set/05...

In class we had the texts variable. For the homework you can just use speeches_df['content'] to get the same sort of list of text.

Take a look at the contents of the first 5 speeches


In [13]:
first_5 = speeches_df['content'].head(5)
first_5


Out[13]:
0    mr. chairman , i thank the gentlewoman for yie...
1    mr. chairman , i want to thank my good friend ...
2    mr. chairman , i rise to make two fundamental ...
3    mr. chairman , reclaiming my time , let me mak...
4    mr. chairman , i thank my distinguished collea...
Name: content, dtype: object

Doing our analysis

Use the sklearn package and a plain boring CountVectorizer to get a list of all of the tokens used in the speeches. If it won't list them all, that's ok! Make a dataframe with those terms as columns.

Be sure to use the English-language stop words list, so common filler words get dropped.


In [14]:
countvectorizer = CountVectorizer(stop_words='english')

In [16]:
tokens = countvectorizer.fit_transform(speeches_df['content'])

In [17]:
countvectorizer.get_feature_names()


Out[17]:
['000',
 '00007',
 '018',
 '050',
 '092',
 '10',
 '100',
 '106',
 '107',
 '108',
 '108th',
 '109th',
 '10th',
 '11',
 '110',
 '114',
 '117',
 '118',
 '11th',
 '12',
 '120',
 '121',
 '122',
 '123',
 '125',
 '128',
 '12898',
 '13',
 '13279',
 '1332',
 '1335',
 '1344',
 '135',
 '138',
 '14',
 '140',
 '143',
 '144',
 '145',
 '149',
 '1498',
 '14th',
 '15',
 '150',
 '1520',
 '153',
 '155',
 '159',
 '16',
 '160',
 '162',
 '163',
 '165',
 '1671',
 '1675',
 '17',
 '170',
 '1700',
 '174',
 '178',
 '1787',
 '17th',
 '18',
 '180',
 '1800',
 '1800s',
 '181',
 '1812',
 '1855',
 '186',
 '1868',
 '18th',
 '19',
 '190',
 '1907',
 '1922',
 '1927',
 '1930',
 '1940s',
 '1950s',
 '196',
 '1960',
 '1960s',
 '1964',
 '1965',
 '1967',
 '1970s',
 '1971',
 '1972',
 '1973',
 '1974',
 '1976',
 '1979',
 '198',
 '1980s',
 '1981',
 '1982',
 '1983',
 '1984',
 '1985',
 '1986',
 '1987',
 '1988',
 '1989',
 '1990',
 '1990s',
 '1991',
 '1992',
 '1993',
 '1994',
 '1995',
 '1996',
 '1997',
 '1998',
 '1999',
 '19th',
 '1st',
 '20',
 '200',
 '2000',
 '2001',
 '2002',
 '2003',
 '2004',
 '2005',
 '2006',
 '2007',
 '2008',
 '2011',
 '2016',
 '202',
 '2072',
 '20th',
 '21',
 '2123',
 '2132',
 '214',
 '216',
 '21st',
 '22',
 '220',
 '2210',
 '2217',
 '222',
 '223',
 '225',
 '226',
 '229',
 '23',
 '231',
 '2324',
 '234',
 '2361',
 '23rd',
 '24',
 '240',
 '241',
 '2411',
 '242',
 '2451',
 '248',
 '25',
 '250',
 '2586',
 '26',
 '261',
 '263',
 '2646',
 '26th',
 '27',
 '270',
 '273',
 '275',
 '278',
 '279',
 '28',
 '283',
 '2844',
 '286',
 '287',
 '2882',
 '2884',
 '2888',
 '29',
 '2904',
 '2926',
 '293',
 '2934',
 '2944',
 '297',
 '2975',
 '2985',
 '2d',
 '2nd',
 '30',
 '300',
 '3000',
 '3004',
 '3005',
 '3006',
 '301',
 '302',
 '303',
 '304',
 '305',
 '306',
 '3061',
 '309',
 '3090',
 '30s',
 '31',
 '310',
 '311',
 '3130',
 '3160',
 '3162',
 '317',
 '32',
 '3238',
 '327',
 '3283',
 '329',
 '33',
 '3306',
 '332',
 '336',
 '34',
 '340',
 '345',
 '35',
 '350',
 '352',
 '353',
 '36',
 '365',
 '37',
 '37th',
 '38',
 '383',
 '387',
 '388',
 '39',
 '397',
 '40',
 '400',
 '40th',
 '41',
 '413',
 '42',
 '420',
 '421',
 '427',
 '43',
 '435',
 '439',
 '44',
 '440',
 '442',
 '45',
 '450',
 '454',
 '455',
 '457',
 '4571',
 '461',
 '465',
 '469',
 '47',
 '479',
 '48',
 '482',
 '483',
 '487',
 '488',
 '49',
 '492',
 '4th',
 '50',
 '500',
 '501',
 '502',
 '5064',
 '508',
 '51',
 '5135',
 '52',
 '521',
 '525',
 '526',
 '53',
 '5304',
 '5305',
 '5306',
 '533',
 '53857',
 '539',
 '54',
 '543',
 '544',
 '55',
 '554',
 '562',
 '564',
 '57',
 '574',
 '58',
 '587',
 '589',
 '59',
 '5th',
 '60',
 '600',
 '604',
 '605',
 '6070',
 '609',
 '612',
 '62',
 '63',
 '6370',
 '639',
 '64',
 '641',
 '65',
 '650',
 '653',
 '66',
 '67',
 '670',
 '672',
 '675',
 '68',
 '69',
 '692',
 '698',
 '70',
 '700',
 '701',
 '702',
 '719',
 '72',
 '724',
 '74',
 '743',
 '75',
 '750',
 '751',
 '754',
 '778',
 '79',
 '80',
 '800',
 '82',
 '822',
 '83',
 '830',
 '831',
 '84',
 '8400',
 '841',
 '845',
 '8494',
 '85',
 '850',
 '865',
 '868',
 '87',
 '870',
 '90',
 '900',
 '91',
 '912',
 '924',
 '92nd',
 '93',
 '94',
 '9500',
 '96',
 '97',
 '970',
 '975',
 '97th',
 '98',
 '9849',
 '99',
 '994',
 '9th',
 '__',
 'aaron',
 'aba',
 'abandon',
 'abandoned',
 'abandoning',
 'abcs',
 'abet',
 'abhorrent',
 'abide',
 'abides',
 'abiding',
 'abilities',
 'ability',
 'able',
 'ably',
 'abolish',
 'abraham',
 'abridgement',
 'abroad',
 'abrogation',
 'absence',
 'absent',
 'absentee',
 'absolutely',
 'absolve',
 'absorb',
 'absurd',
 'abundance',
 'abundant',
 'abuse',
 'abused',
 'abuses',
 'abusing',
 'abusive',
 'abysmal',
 'academic',
 'academically',
 'academics',
 'academy',
 'accede',
 'accelerated',
 'accept',
 'acceptable',
 'acceptance',
 'accepted',
 'accepting',
 'accepts',
 'access',
 'accessible',
 'accessing',
 'accession',
 'accessioning',
 'accessories',
 'accident',
 'accidents',
 'acclaimed',
 'accommodate',
 'accommodated',
 'accommodating',
 'accompanies',
 'accompanying',
 'accomplish',
 'accomplished',
 'accomplishes',
 'accomplishment',
 'accordance',
 'according',
 'accordingly',
 'account',
 'accountability',
 'accountable',
 'accountant',
 'accounting',
 'accounts',
 'accumulated',
 'accumulation',
 'accurate',
 'accurately',
 'accusations',
 'accused',
 'accustom',
 'achieve',
 'achieved',
 'achievement',
 'achievements',
 'achieving',
 'acknowledge',
 'acknowledged',
 'acknowledges',
 'aclu',
 'acquainted',
 'acquire',
 'acquired',
 'acquisition',
 'acquisitions',
 'acre',
 'acres',
 'acronym',
 'act',
 'acted',
 'acting',
 'action',
 'actionable',
 'actions',
 'activate',
 'active',
 'actively',
 'activities',
 'activity',
 'actor',
 'actors',
 'acts',
 'actual',
 'actually',
 'ada',
 'adamantly',
 'adams',
 'adc',
 'add',
 'added',
 'addiction',
 'adding',
 'addition',
 'additional',
 'additionally',
 'additions',
 'address',
 'addressed',
 'addresses',
 'addressing',
 'adds',
 'adequate',
 'adequately',
 'adhere',
 'adherents',
 'adhering',
 'adjacent',
 'adjourn',
 'adjournment',
 'adjudicated',
 'adjust',
 'adjusted',
 'adjustment',
 'adjustments',
 'administer',
 'administered',
 'administering',
 'administration',
 'administrations',
 'administrative',
 'administrator',
 'administrators',
 'admirable',
 'admire',
 'admission',
 'admit',
 'admitted',
 'admittedly',
 'admitting',
 'adolescence',
 'adopt',
 'adopted',
 'adopting',
 'adoption',
 'adoptions',
 'ads',
 'adult',
 'adults',
 'advance',
 'advanced',
 'advancement',
 'advancements',
 'advances',
 'advancing',
 'advantage',
 'advantaged',
 'advantages',
 'adventure',
 'adversary',
 'adverse',
 'adversely',
 'advertised',
 'advice',
 'advise',
 'advised',
 'advisor',
 'advisories',
 'advisory',
 'advocacy',
 'advocate',
 'advocated',
 'advocates',
 'aesthetic',
 'affairs',
 'affect',
 'affected',
 'affecting',
 'affects',
 'affiliated',
 'affiliation',
 'affirm',
 'affirmative',
 'affirmatively',
 'affirmed',
 'affirms',
 'affluent',
 'afford',
 'affordable',
 'afforded',
 'affording',
 'affront',
 'afghanistan',
 'afl',
 'aforementioned',
 'afraid',
 'africa',
 'african',
 'afscme',
 'aftermarket',
 'aftermath',
 'afternoon',
 'age',
 'aged',
 'agencies',
 'agency',
 'agenda',
 'agendas',
 'agents',
 'ages',
 'aggressively',
 'aggrieved',
 'ago',
 'agony',
 'agree',
 'agreed',
 'agreeing',
 'agreement',
 'agreements',
 'agrees',
 'agricultural',
 'agriculture',
 'aha',
 'ahead',
 'ahs',
 'aid',
 'aide',
 'aided',
 'aiding',
 'aim',
 'aimed',
 'aims',
 'air',
 'airing',
 'airline',
 'airplanes',
 'aisle',
 'ak',
 'akin',
 'akron',
 'al',
 'alabama',
 'alan',
 'alarm',
 'alarming',
 'alaska',
 'alaskan',
 'albany',
 'alcee',
 'aldebron',
 'alerted',
 'alexander',
 'alexandria',
 'alfred',
 'alice',
 'aliens',
 'align',
 'aligned',
 'aligns',
 'alike',
 'alive',
 'allegations',
 'allege',
 'alleged',
 'allegedly',
 'allegiance',
 'alleging',
 'alleviate',
 'alliance',
 'allied',
 'allocate',
 'allocation',
 'allocations',
 'allotment',
 'allotted',
 'allow',
 'allowable',
 'allowed',
 'allowing',
 'allows',
 'alluded',
 'almonds',
 'alphabet',
 'altamonte',
 'alter',
 'altered',
 'alternate',
 'alternates',
 'alternative',
 'alternatives',
 'alto',
 'amaze',
 'amazing',
 'ambassador',
 'ambulances',
 'ameliorate',
 'amend',
 'amendable',
 'amended',
 'amending',
 'amendment',
 'amendments',
 'america',
 'american',
 'americans',
 'amos',
 'amounting',
 'amounts',
 'amp',
 'ample',
 'amt',
 'analysis',
 'analyst',
 'analyze',
 'anathema',
 'anderson',
 'andrea',
 'andrews',
 'anecdotes',
 'angela',
 'angeles',
 'angry',
 'angst',
 'anguish',
 'animal',
 'animals',
 'animated',
 'ann',
 'anna',
 'annie',
 'annihilation',
 'anniston',
 'announce',
 'announced',
 'announcement',
 'annual',
 'annually',
 'anonymous',
 'ansje',
 'answer',
 'answered',
 'answers',
 'antagonizing',
 'antelope',
 'anthony',
 'anthrax',
 'anti',
 'anticipate',
 'anticipated',
 'anticipates',
 'antidumping',
 'antietam',
 'antiforum',
 'antimiscegenation',
 'antipathy',
 'antiquated',
 'antonio',
 'anxiety',
 'anxious',
 'anybody',
 'anymore',
 'anyplace',
 'anytime',
 'aoc',
 'apa',
 'apart',
 'apathy',
 'apologize',
 'apostle',
 'apparel',
 'apparent',
 'apparently',
 'appeal',
 'appealed',
 'appeals',
 'appear',
 'appeared',
 'appears',
 'appendices',
 'applaud',
 'appliance',
 'applicability',
 'applicable',
 'applicants',
 'application',
 'applied',
 'applies',
 'apply',
 'applying',
 'appoint',
 'appointed',
 'appointee',
 'appointing',
 'appointment',
 'appointments',
 'appreciably',
 'appreciate',
 'appreciated',
 'appreciates',
 'appreciation',
 'appreciative',
 'approach',
 'approached',
 'approaches',
 'appropriate',
 'appropriated',
 'appropriately',
 'appropriates',
 'appropriation',
 'appropriations',
 'appropriators',
 'approval',
 'approve',
 'approved',
 'approving',
 'approximately',
 'april',
 'aptly',
 'aquatic',
 'ar',
 'arab',
 'arabia',
 'arbitrary',
 'arc',
 'architect',
 'architects',
 'architectural',
 'architecture',
 'ardently',
 'ardmore',
 'area',
 'areas',
 'arena',
 'argentina',
 'argue',
 'argued',
 'arguing',
 'argument',
 'argumentative',
 'arguments',
 'arid',
 'arise',
 'arisen',
 'arising',
 'aristocracies',
 'aristocracy',
 'arizona',
 'arkansas',
 'arm',
 'armed',
 'armies',
 'armor',
 'arms',
 'army',
 'arnold',
 'arnolds',
 'arrange',
 'arrangements',
 'array',
 'arrays',
 'arrested',
 'arrests',
 'arrival',
 'arrived',
 'arrogance',
 'arrogant',
 'arsenal',
 'art',
 'article',
 'articles',
 'articulate',
 'artificial',
 'artificially',
 'arts',
 'ascertain',
 'asfe',
 'asian',
 'aside',
 'asides',
 'ask',
 'asked',
 'asking',
 'asks',
 'aspect',
 'aspects',
 'asphyxiating',
 'assault',
 'assemble',
 'assembly',
 'assert',
 'asserted',
 'assertion',
 'assertions',
 'assess',
 'assessed',
 'assessing',
 'assessment',
 'assessments',
 'assets',
 'assigned',
 'assignment',
 'assigns',
 'assimilating',
 'assist',
 'assistance',
 'assistant',
 'assisted',
 'assisting',
 'assists',
 'assoc',
 'associate',
 'associated',
 'associates',
 'association',
 'associations',
 'assume',
 'assumed',
 'assumes',
 'assuming',
 'assumption',
 'assumptions',
 'assurance',
 'assurances',
 'assure',
 'assured',
 'assures',
 'assuring',
 'asthma',
 'astounding',
 'astronaut',
 'astronomical',
 'athletic',
 'atkins',
 'atla',
 'atlanta',
 'atm',
 'atmosphere',
 'attach',
 'attached',
 'attaching',
 'attack',
 'attacked',
 'attacks',
 'attain',
 'attainable',
 'attaining',
 'attempt',
 'attempted',
 'attempting',
 'attempts',
 'attend',
 'attended',
 'attending',
 'attention',
 'attest',
 'attitude',
 'attorney',
 'attorneys',
 'attract',
 'audit',
 'audited',
 'auditing',
 'audits',
 'august',
 'austin',
 'australia',
 'authentic',
 'author',
 'authoring',
 'authorities',
 'authority',
 'authorization',
 'authorizations',
 'authorize',
 'authorized',
 'authorizes',
 'authorizing',
 'authors',
 'auto',
 'autocratic',
 'automatic',
 'automatically',
 'automobile',
 'automotive',
 'autonomy',
 'avail',
 'availability',
 'available',
 'avalanche',
 'avenue',
 'average',
 'aviation',
 'avoid',
 ...]

Okay, it's far too big to even look at. Let's try to get a list of features from a new CountVectorizer that only takes the top 100 words.


In [19]:
countvectorizer_new = CountVectorizer(max_features=100, stop_words='english')

In [20]:
tokens_new = countvectorizer_new.fit_transform(speeches_df['content'])

In [57]:
tokens_complete = pd.DataFrame(tokens_new.toarray(), columns=countvectorizer_new.get_feature_names())

Now let's push all of that into a dataframe with nicely named columns.


In [23]:
only_100_tokens = pd.DataFrame(tokens_new.toarray(), columns=countvectorizer_new.get_feature_names())
only_100_tokens


Out[23]:
000 11 act allow amendment america american amp association balance based believe bipartisan chairman children ... teachers thank think time today trade united urge vote want way work year years yield
0 0 1 3 0 0 0 3 0 0 0 0 1 0 3 0 ... 0 1 3 3 2 0 1 0 0 1 1 0 0 0 1
1 0 0 1 1 1 0 0 0 0 1 0 0 0 2 0 ... 0 1 0 2 2 0 0 0 1 1 3 0 1 0 0
2 0 0 0 0 0 0 1 0 0 0 0 0 0 2 0 ... 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1
3 0 0 0 0 0 1 0 0 0 1 0 0 0 2 0 ... 0 0 0 2 0 0 1 0 1 1 1 0 0 0 0
4 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 ... 0 1 0 1 0 0 0 0 2 0 0 0 0 0 2
5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
6 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 ... 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
8 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 ... 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0
9 0 0 0 1 0 0 1 0 0 0 0 0 0 2 0 ... 0 0 2 0 0 0 1 0 0 0 0 0 0 0 0
10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
11 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
12 0 1 1 0 1 0 0 0 0 1 0 1 0 3 0 ... 0 0 1 11 0 0 2 0 1 2 1 0 0 0 1
13 0 8 8 0 0 0 0 0 0 1 0 1 1 2 0 ... 0 0 0 1 0 0 1 0 1 0 1 0 4 1 1
14 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 ... 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
15 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 ... 0 0 3 3 0 0 0 1 0 0 0 3 2 0 0
16 1 0 0 0 2 1 0 0 0 0 0 0 0 1 0 ... 0 0 1 1 0 0 0 0 0 0 0 0 2 0 0
17 0 0 0 2 4 0 0 0 0 0 0 1 1 3 0 ... 0 1 3 8 2 0 0 0 1 4 1 3 2 1 0
18 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
19 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 2 0 0 0 0 0 0 0 0 0 0 2
20 0 5 1 0 0 3 2 0 0 1 2 2 2 0 0 ... 0 0 0 3 3 0 1 1 0 0 0 0 0 1 0
21 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 ... 0 1 0 1 0 0 0 0 0 1 0 0 0 0 1
22 0 2 0 0 1 1 0 0 0 1 0 1 0 0 0 ... 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1
23 0 0 0 0 4 0 0 0 0 1 0 0 0 0 0 ... 0 0 0 2 0 0 0 1 0 1 0 0 0 0 1
24 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
25 2 0 0 0 1 0 0 0 0 0 1 0 0 0 0 ... 0 0 0 2 0 0 0 0 0 0 0 0 1 0 2
26 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 ... 0 0 0 2 0 0 0 1 0 0 0 0 0 0 2
27 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
28 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
29 2 0 0 0 1 0 0 0 0 1 0 1 0 4 0 ... 0 0 0 2 0 0 1 0 0 0 0 2 1 0 1
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
672 0 5 1 0 7 0 0 0 0 1 0 1 0 4 0 ... 0 1 0 2 2 0 1 1 0 0 0 0 0 0 1
673 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
674 0 6 0 0 0 0 0 0 0 0 0 0 0 2 0 ... 0 0 0 0 0 0 0 0 0 1 0 0 1 0 1
675 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 ... 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1
676 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 ... 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
677 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
678 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 ... 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
679 0 4 0 0 0 0 0 0 0 1 0 0 0 2 0 ... 0 0 2 2 0 0 0 1 0 2 0 0 0 0 2
680 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 ... 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
681 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 ... 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
682 1 0 3 0 0 1 4 0 7 1 0 0 0 4 1 ... 0 0 1 2 1 0 1 0 0 1 0 0 1 2 1
683 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
684 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
685 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 ... 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
686 1 3 2 1 0 2 0 0 0 0 0 0 0 2 1 ... 1 0 0 3 2 0 0 1 0 0 0 0 1 0 1
687 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 ... 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1
688 0 1 0 0 6 0 0 0 0 1 0 0 2 3 0 ... 0 0 0 2 1 0 0 1 0 0 0 0 0 0 2
689 0 3 2 0 4 0 0 0 1 1 4 0 0 6 0 ... 0 0 0 3 0 0 0 1 1 0 0 0 0 0 1
690 0 0 0 0 0 0 0 0 0 0 0 0 0 4 0 ... 0 0 0 1 0 0 0 0 0 2 0 0 0 0 2
691 0 0 0 0 0 0 0 0 0 1 0 1 0 1 0 ... 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
692 0 0 0 1 7 3 1 0 1 1 0 1 0 7 0 ... 0 0 3 4 1 0 0 1 1 0 0 0 1 1 2
693 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
694 0 0 2 0 0 1 1 0 0 1 0 0 0 0 0 ... 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1
695 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
696 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 ... 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0
697 0 0 1 0 0 0 0 0 0 0 1 0 0 1 0 ... 0 0 0 0 0 0 2 0 0 0 1 0 0 0 0
698 0 1 0 0 2 0 2 0 0 0 0 0 0 1 0 ... 0 0 1 0 0 0 2 0 1 0 1 0 0 0 0
699 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
700 0 1 2 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 1 1 0 0 0 1 2 2 2 2 0 0 0
701 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0

702 rows × 100 columns

Everyone seems to start their speeches with "mr chairman" - how many speeches are there total, how many don't mention "chairman", and how many mention neither "mr" nor "chairman"?


In [25]:
only_100_tokens.describe()
# There are 702 speeches in this DataFrame


Out[25]:
000 11 act allow amendment america american amp association balance based believe bipartisan chairman children ... teachers thank think time today trade united urge vote want way work year years yield
count 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 ... 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000 702.000000
mean 0.205128 0.233618 0.441595 0.219373 1.190883 0.222222 0.407407 0.465812 0.407407 0.267806 0.309117 0.220798 0.186610 1.247863 0.666667 ... 0.212251 0.280627 0.313390 1.116809 0.360399 0.531339 0.294872 0.254986 0.418803 0.336182 0.233618 0.260684 0.321937 0.273504 0.574074
std 1.210075 1.841697 1.934411 0.897148 3.827676 1.026978 1.759215 10.931813 8.509915 0.520140 2.302365 0.868514 1.417515 1.558590 3.436400 ... 2.541348 0.706316 0.899401 1.779749 0.882400 3.117483 1.529529 1.548713 1.291292 0.876595 0.687868 0.899592 0.875439 0.956610 0.679481
min 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 ... 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
25% 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 ... 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
50% 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 ... 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
75% 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 2.000000 0.000000 ... 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000
max 25.000000 44.000000 35.000000 18.000000 87.000000 17.000000 26.000000 288.000000 224.000000 4.000000 56.000000 14.000000 35.000000 18.000000 74.000000 ... 65.000000 9.000000 9.000000 19.000000 9.000000 63.000000 31.000000 39.000000 23.000000 9.000000 9.000000 12.000000 7.000000 16.000000 4.000000

8 rows × 100 columns


In [38]:
only_100_tokens['chairman_less'] = only_100_tokens['chairman'] == 0
only_100_tokens['chairman_less'].describe()


Out[38]:
count       702
unique        2
top       False
freq        452
Name: chairman_less, dtype: object

In [39]:
print('There are', 702 - 452, 'speeches without the word "chairman" in them')


There are 250 speeches without the word "chairman" in them

In [40]:
only_100_tokens['mr_less'] = only_100_tokens['mr'] == 0
only_100_tokens['mr_less'].describe()


Out[40]:
count       702
unique        2
top       False
freq        623
Name: mr_less, dtype: object

In [41]:
print('There are', 702 - 623, 'speeches without the word "mr" in them')


There are 79 speeches without the word "mr" in them
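
The question also asks how many speeches mention neither "mr" nor "chairman". Here's a minimal sketch of one way to get that from the same dataframe (both columns made it into the top 100, since the cells above ran):


In [ ]:
# Count the speeches where both the 'mr' and 'chairman' counts are zero
neither = only_100_tokens[(only_100_tokens['mr'] == 0) & (only_100_tokens['chairman'] == 0)]
print('There are', len(neither), 'speeches that mention neither "mr" nor "chairman"')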

What is the index of the speech that is the most thankful, a.k.a. includes the word 'thank' the most times?


In [47]:
only_100_tokens['thank'].sort_values(ascending=False).head(1)


Out[47]:
577    9
Name: thank, dtype: int64

If I'm searching for China and trade, what are the top 3 speeches to read according to the CountVectorizer?


In [46]:
only_100_tokens['china trade'] = only_100_tokens['china'] + only_100_tokens['trade']
only_100_tokens['china trade'].sort_values(ascending=False).head(3)


Out[46]:
379    92
399    36
345    27
Name: china trade, dtype: int64

Now what if I'm using a TfidfVectorizer?


In [ ]:
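# A minimal sketch of one possible approach (not the official answer): build a
# TfidfVectorizer the same way as the CountVectorizer above and rank the speeches
# by the combined tf-idf weight of 'china' and 'trade'. max_features=100 mirrors
# the earlier vectorizer, so both terms should still make the cut; drop it if not.
tfidfvectorizer = TfidfVectorizer(stop_words='english', max_features=100)
tfidf_tokens = tfidfvectorizer.fit_transform(speeches_df['content'])
tfidf_df = pd.DataFrame(tfidf_tokens.toarray(), columns=tfidfvectorizer.get_feature_names())
(tfidf_df['china'] + tfidf_df['trade']).sort_values(ascending=False).head(3)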

What's the content of the speeches? Here's a way to get them:


In [48]:
# index 0 is the first speech, which was the first one imported.
paths[0]


Out[48]:
'convote_v1.1/data_stage_one/development_set/052_400011_0327014_DON.txt'

In [49]:
# Pass that into 'cat' using { } which lets you put variables in shell commands
# that way you can pass the path to cat
!cat {paths[0]}


mr. chairman , i thank the gentlewoman for yielding me this time . 
my good colleague from california raised the exact and critical point . 
the question is , what happens during those 45 days ? 
we will need to support elections . 
there is not a single member of this house who has not supported some form of general election , a special election , to replace the members at some point . 
but during that 45 days , what happens ? 
the chair of the constitution subcommittee says this is what happens : martial law . 
we do not know who would fill the vacancy of the presidency , but we do know that the succession act most likely suggests it would be an unelected person . 
the sponsors of the bill before us today insist , and i think rightfully so , on the importance of elections . 
but to then say that during a 45-day period we would have none of the checks and balances so fundamental to our constitution , none of the separation of powers , and that the presidency would be filled by an unelected member of the cabinet who not a single member of this country , not a single citizen , voted to fill that position , and that that person would have no checks and balances from congress for a period of 45 days i find extraordinary . 
i find it inconsistent . 
i find it illogical , and , frankly , i find it dangerous . 
the gentleman from wisconsin refused earlier to yield time , but i was going to ask him , if virginia has those elections in a shorter time period , they should be commended for that . 
so now we have a situation in the congress where the virginia delegation has sent their members here , but many other states do not have members here . 
do they at that point elect a speaker of the house in the absence of other members ? 
and then three more states elect their representatives , temporary replacements , or full replacements at that point . 
they come in . 
do they elect a new speaker ? 
and if that happens , who becomes the president under the succession act ? 
this bill does not address that question . 
this bill responds to real threats with fantasies . 
it responds with the fantasy , first of all , that a lot of people will still survive ; but we have no guarantee of that . 
it responds with the fantasy that those who do survive will do the right thing . 
we are here having this debate , we have debates every day , because people differ on what the right thing is to do . 
i have been in very traumatic situations with people in severe car wrecks and mountain climbing accidents . 
my experience has not been that crisis imbues universal sagacity and fairness . 
it has not been that . 
people respond in extraordinary ways , and we must preserve an institution that has the deliberative body and the checks and balances to meet those challenges . 
many of our states are going increasingly to mail-in ballots . 
we in this body were effectively disabled by an anthrax attack not long after september 11 . 
i would ask my dear friends , will you conduct this election in 45 days if there is anthrax in the mail and still preserve the franchise of the american people ? 
how will you do that ? 
you have no answer to that question . 
i find it extraordinary , frankly , that while saying you do not want to amend the constitution , we began this very congress by amending the constitution through the rule , by undermining the principle that a quorum is 50 percent of the body and instead saying it is however many people survive . 
and if that rule applies , who will designate it , who will implement it ? 
the speaker , or the speaker 's designee ? 
again , not an elected person , as you say is so critical and i believe is critical , but a temporary appointee , frankly , who not a single other member of this body knows who they are . 
so we not only have an unelected person , we have an unknown person who will convene this body , and who , by the way , could conceivably convene it for their own election to then become the president of the united states under the succession act . 
you have refused steadfastly to debate this real issue broadly . 
you had a mock debate in the committee on the judiciary in which the distinguished chairman presented my bill without allowing me the courtesy or dignity to defend it myself . 
and on that , you proudly say you defend democracy . 
sir , i think you dissemble in that regard . 
here is the fundamental question for us , my friends , and it is this : the american people are watching television and an announcement comes on and says the congress has been destroyed in a nuclear attack , the president and vice president are killed and the supreme court is dead and thousands of our citizens in this town are . 
what happens next ? 
under your bill , 45 days of chaos . 
apparently , according to the committee on the judiciary subcommittee on the constitution chairman , 45 days of marshal law , rule of this country by an unelected president with no checks and balances . 
or an alternative , an alternative which says quite simply that the people have entrusted the representatives they send here to make profound decisions , war , taxation , a host of other things , and those representatives would have the power under the bill of the gentleman from california ( mr. rohrabacher ) xz4003430 bill or mine to designate temporary successors , temporary , only until we can have a real election . 
the american people , in one scenario , are told we do not know who is going to run the country , we have no representatives ; where in another you will have temporary representatives carrying your interests to this great body while we deliberate and have real elections . 
that is the choice . 
you are making the wrong choice today if you think you have solved this problem . 

Now search for something else! Another two terms that might show up. Elections and chaos? Whatever you think might be interesting.


In [67]:
# Flag the speeches that mention 'america' at least once (the sort isn't needed here)
tokens_complete['america'] = tokens_complete['america'] >= 1
tokens_complete['america'].describe()


Out[67]:
count       702
unique        2
top       False
freq        627
Name: america, dtype: object

In [70]:
tokens_complete[tokens_complete['america'] == True].count().head(1)
# 75 of the 702 speeches mention the word 'america' at least once


Out[70]:
000    75
dtype: int64

In [71]:
# Flag the speeches that mention 'elections' at least once
tokens_complete['elections'] = tokens_complete['elections'] >= 1
tokens_complete['elections'].describe()
# only 38 of the 702 speeches mention 'elections' - fewer than I expected


Out[71]:
count       702
unique        2
top       False
freq        664
Name: elections, dtype: object
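
The prompt suggested "elections" and "chaos" - "chaos" almost certainly isn't frequent enough to make the top 100, so here's a sketch that pulls both columns out of the full count matrix from earlier instead. It leans on the fitted vectorizer's vocabulary_ mapping and assumes both words appear somewhere in the corpus (they do - the speech printed above uses both).


In [ ]:
# Pull the 'elections' and 'chaos' columns out of the full count matrix
# using the fitted vectorizer's vocabulary_ (term -> column index) mapping
vocab = countvectorizer.vocabulary_
elections_chaos = pd.DataFrame({
    'elections': tokens[:, vocab['elections']].toarray().flatten(),
    'chaos': tokens[:, vocab['chaos']].toarray().flatten(),
})
# The three speeches with the most combined mentions
(elections_chaos['elections'] + elections_chaos['chaos']).sort_values(ascending=False).head(3)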

Enough of this garbage, let's cluster

Using a simple counting vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.

Using a term frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.

Using a term frequency inverse document frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.


In [ ]:
## I'm still working on this one!
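
In the meantime, here's a rough sketch of one way to approach all three (not a finished answer): the same KMeans loop, run once per vectorizer - raw counts, plain term frequency (use_idf=False), and tf-idf. Printing 10 terms per cluster is an arbitrary choice.


In [ ]:
# Cluster the speeches into 8 groups and print the highest-weighted terms
# per cluster, once for each of the three vectorizers
from sklearn.cluster import KMeans

vectorizers = {
    'counts': CountVectorizer(stop_words='english'),
    'term frequency': TfidfVectorizer(stop_words='english', use_idf=False),
    'tf-idf': TfidfVectorizer(stop_words='english'),
}

for name, vectorizer in vectorizers.items():
    matrix = vectorizer.fit_transform(speeches_df['content'])
    km = KMeans(n_clusters=8)
    km.fit(matrix)
    terms = vectorizer.get_feature_names()
    # argsort orders each cluster's terms by weight; reverse it so the
    # biggest terms come first
    order_centroids = km.cluster_centers_.argsort()[:, ::-1]
    print('---', name, '---')
    for i in range(8):
        print('Cluster', i, ':', ', '.join(terms[ind] for ind in order_centroids[i, :10]))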

In [ ]:


In [ ]:


In [ ]:


In [ ]:

Which one do you think works the best?


In [ ]:

Harry Potter time

I have a scraped collection of Harry Potter fanfiction at https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip.

I want you to read them in, vectorize them and cluster them. Use this process to find out the two types of Harry Potter fanfiction. What is your hypothesis?


In [ ]:
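
Here's a sketch of one way to start, under a couple of assumptions: the zip has been downloaded and extracted into a folder called hp/ full of .txt files (adjust the path to wherever it actually lands), and tf-idf plus KMeans with two clusters is just one reasonable choice, not the required method.


In [ ]:
# Download and extract first (the 'hp' folder name is an assumption;
# check what the zip actually contains):
# !curl -LO https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip
# !unzip -o hp.zip -d hp

import glob
from sklearn.cluster import KMeans

hp_paths = glob.glob('hp/*.txt')
hp_texts = []
for path in hp_paths:
    with open(path) as hp_file:
        hp_texts.append(hp_file.read())

hp_vectorizer = TfidfVectorizer(stop_words='english')
hp_matrix = hp_vectorizer.fit_transform(hp_texts)
hp_km = KMeans(n_clusters=2)
hp_km.fit(hp_matrix)

# The top terms in each cluster are the raw material for a hypothesis about
# what separates the two types of fanfiction
hp_terms = hp_vectorizer.get_feature_names()
hp_order = hp_km.cluster_centers_.argsort()[:, ::-1]
for i in range(2):
    print('Cluster', i, ':', ', '.join(hp_terms[ind] for ind in hp_order[i, :15]))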