Algorithms Exercise 1

Imports


In [49]:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np

Word counting

Write a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic:

  • Split the string into lines using splitlines.
  • Split each line into a list of words and merge the lists for each line.
  • Use Python's built-in filter function to remove all punctuation.
  • If stop_words is a list, remove all occurrences of the words in the list.
  • If stop_words is a space-delimited string of words, split them and remove them.
  • Remove any remaining empty words.
  • Make all words lowercase.

In [48]:
filter?
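
As a quick aside on the filter step: filter(predicate, iterable) keeps only the items for which the predicate returns True, so joining the surviving characters of a word strips its punctuation. A minimal illustration (not one of the exercise cells; the word and the shortened punctuation set are just examples):

word = "month,"
cleaned = ''.join(filter(lambda c: c not in ',;.!?', word))  # shortened punctuation set for illustration
print(cleaned)   # prints: month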

In [54]:
def tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\\:;"<,>.?/}\t'):
    """Split a string into a list of words, removing punctuation and stop words."""
    # Normalize stop_words: accept None, a list of words, or a space-delimited string.
    if stop_words is None:
        stop_words = []
    elif isinstance(stop_words, str):
        stop_words = stop_words.split()
    words = []
    for line in s.splitlines():
        words.extend(line.split(' '))  # merge the per-line word lists
    # Strip punctuation characters from each word with filter, then lowercase it.
    words = [''.join(filter(lambda c: c not in punctuation, w)).lower() for w in words]
    # Drop stop words and any words left empty after stripping punctuation.
    return [w for w in words if w and w not in stop_words]
tokenize("This, is the way; that things will end", stop_words=['the', 'is'])


Out[54]:
['this', 'way', 'that', 'things', 'will', 'end']

In [36]:
tokenize("""
APRIL is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
""", stop_words=['the', 'is'])


Out[36]:
['april', 'cruellest', 'month', 'breeding', 'lilacs', 'out', 'dead', 'land',
 'mixing', 'memory', 'desire', 'stirring', 'dull', 'roots', 'with', 'spring',
 'rain']

In [9]:
assert tokenize("This, is the way; that things will end", stop_words=['the', 'is']) == \
    ['this', 'way', 'that', 'things', 'will', 'end']
wasteland = """
APRIL is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
"""

assert tokenize(wasteland, stop_words='is the of and') == \
    ['april','cruellest','month','breeding','lilacs','out','dead','land',
     'mixing','memory','desire','stirring','dull','roots','with','spring',
     'rain']



In [10]:
tokenize(wasteland, stop_words='is the of and')


Out[10]:
['april', 'cruellest', 'month', 'breeding', 'lilacs', 'out', 'dead', 'land',
 'mixing', 'memory', 'desire', 'stirring', 'dull', 'roots', 'with', 'spring',
 'rain']

Write a function count_words that takes a list of words and returns a dictionary where the keys are the unique words in the list and the values are the word counts.


In [ ]:
def count_words(data):
    """Return a word count dictionary from the list of words in data."""
    # YOUR CODE HERE
    raise NotImplementedError()
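
One sketch that would satisfy the test below, counting into a plain dict (just one possible approach, not necessarily the intended one):

def count_words(data):
    """Return a word count dictionary from the list of words in data."""
    counts = {}
    for word in data:
        counts[word] = counts.get(word, 0) + 1   # start at 0, then increment
    return counts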

In [ ]:
assert count_words(tokenize('this and the this from and a a a')) == \
    {'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}

Write a function sort_word_counts that returns a list of sorted word counts:

  • Each element of the list should be a (word, count) tuple.
  • The list should be sorted by the word counts, with the highest counts coming first.
  • To perform this sort, consider using the sorted function with a custom key and reverse argument.

In [ ]:
def sort_word_counts(wc):
    """Return a list of 2-tuples of (word, count), sorted by count descending."""
    # YOUR CODE HERE
    raise NotImplementedError()
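
One possible sketch (not necessarily the intended solution), leaning on sorted with a key that extracts the count:

def sort_word_counts(wc):
    """Return a list of (word, count) tuples, sorted by count descending."""
    # wc.items() yields (word, count) pairs; the key picks out the count and
    # reverse=True puts the highest counts first.
    return sorted(wc.items(), key=lambda item: item[1], reverse=True)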

In [ ]:
assert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \
    [('a', 4), ('this', 3), ('and', 2), ('the', 1)]

Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt:

  • Read the file into a string.
  • Tokenize with stop words of 'the of and a to in is it that as'.
  • Perform a word count, then sort and save the result in a variable named swc.

In [ ]:
# YOUR CODE HERE
raise NotImplementedError()
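
A sketch of the analysis, assuming mobydick_chapter1.txt sits next to the notebook and that tokenize, count_words, and sort_word_counts are implemented as above:

with open('mobydick_chapter1.txt') as f:
    text = f.read()
swc = sort_word_counts(count_words(tokenize(text, stop_words='the of and a to in is it that as')))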

In [ ]:
assert swc[0]==('i',43)
assert len(swc)==848

Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...


In [ ]:
# YOUR CODE HERE
raise NotImplementedError()
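
One way to draw a Cleveland-style dotplot with Matplotlib (a sketch only, assuming swc from the previous step; the exact styling is up to you): put the counts on the x-axis, list the words along the y-axis, and mark each count with a single dot.

top = swc[:50]                       # (word, count) pairs for the top 50 words
words = [w for w, c in top]
counts = [c for w, c in top]
plt.figure(figsize=(6, 12))
plt.plot(counts, range(len(top)), 'o')
plt.yticks(range(len(top)), words)
plt.xlabel('count')
plt.ylim(-1, len(top))
plt.gca().invert_yaxis()             # highest count at the top
plt.grid(axis='y', linestyle=':')    # light guide lines, typical of dotplots
plt.show()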

In [ ]:
assert True # use this for grading the dotplot