This unit is divided into two sections:

  • First, we'll find out what is necessary to build an NLP system that can turn a body of text into a numerical array of features.
  • Next we'll show how to perform these steps using real tools.

Building a Natural Language Processor From Scratch

In this section we'll use basic Python to build a rudimentary NLP system. We'll build a corpus of documents (two small text files), create a vocabulary from all the words in both documents, and then demonstrate a Bag of Words technique to extract features from each document.

**This first section is for illustration only!**
Don't bother memorizing the code - we'd never do this in real life.

Start with some documents:

For simplicity we won't use any punctuation.


In [1]:
%%writefile 1.txt
This is a story about cats
our feline pets
Cats are furry animals


Overwriting 1.txt

In [2]:
%%writefile 2.txt
This story is about surfing
Catching waves is fun
Surfing is a popular water sport


Overwriting 2.txt

Build a vocabulary

The goal here is to build a numerical index (a vocabulary) of all the words that appear across our documents. Later we'll create instances (vectors) for each individual document.


In [3]:
vocab = {}
i = 1

with open('1.txt') as f:
    x = f.read().lower().split()

# Assign the next index to each word not already in the vocabulary:
for word in x:
    if word not in vocab:
        vocab[word] = i
        i += 1

print(vocab)


{'this': 1, 'is': 2, 'a': 3, 'story': 4, 'about': 5, 'cats': 6, 'our': 7, 'feline': 8, 'pets': 9, 'are': 10, 'furry': 11, 'animals': 12}

In [4]:
with open('2.txt') as f:
    x = f.read().lower().split()

# Only words not already in the vocabulary receive a new index:
for word in x:
    if word not in vocab:
        vocab[word] = i
        i += 1

print(vocab)


{'this': 1, 'is': 2, 'a': 3, 'story': 4, 'about': 5, 'cats': 6, 'our': 7, 'feline': 8, 'pets': 9, 'are': 10, 'furry': 11, 'animals': 12, 'surfing': 13, 'catching': 14, 'waves': 15, 'fun': 16, 'popular': 17, 'water': 18, 'sport': 19}

Even though 2.txt has 15 words, only 7 new words were added to the dictionary.

Feature Extraction

Now that we've encapsulated our "entire language" in a dictionary, let's perform feature extraction on each of our original documents:


In [5]:
# Create an empty vector with space for each word in the vocabulary:
one = ['1.txt']+[0]*len(vocab)
one


Out[5]:
['1.txt', 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

In [6]:
# map the frequencies of each word in 1.txt to our vector:
with open('1.txt') as f:
    x = f.read().lower().split()
    
for word in x:
    one[vocab[word]]+=1
    
one


Out[6]:
['1.txt', 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]

We can see that most of the words in 1.txt appear only once, although "cats" appears twice.


In [7]:
# Do the same for the second document:
two = ['2.txt']+[0]*len(vocab)

with open('2.txt') as f:
    x = f.read().lower().split()
    
for word in x:
    two[vocab[word]]+=1

In [8]:
# Compare the two vectors:
print(f'{one}\n{two}')


['1.txt', 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
['2.txt', 1, 3, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 2, 1, 1, 1, 1, 1, 1]

By comparing the vectors we see that some words are common to both documents, some appear only in 1.txt, and others only in 2.txt. Extending this logic to tens of thousands of documents, we would see the vocabulary dictionary grow to hundreds of thousands of words. Each vector would contain mostly zero values, so the resulting matrix of document vectors would be sparse.

Bag of Words and Tf-idf

In the above examples, each vector can be considered a bag of words. By themselves these raw counts may not be very helpful until we consider term frequencies, or how often individual words appear in documents. A simple way to calculate term frequency is to divide the number of occurrences of a word by the total number of words in the document. This puts word counts from large and small documents on a comparable scale.

However, it may be hard to differentiate documents based on term frequency alone if a word shows up in a majority of documents. To handle this we also consider inverse document frequency, which is the total number of documents divided by the number of documents that contain the word. In practice this value is converted to a logarithmic scale.

Together these terms become tf-idf.
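
To make the idea concrete, here's a rough, hand-rolled sketch that computes tf-idf for one word using the vectors we built above (variable names follow the earlier cells; real implementations such as scikit-learn's add smoothing and normalization on top of this):

In [ ]:
import math

# Raw term counts per document (dropping the filename slot at index 0):
doc_counts = {'1.txt': one[1:], '2.txt': two[1:]}
total_docs = len(doc_counts)

def tf(counts, idx):
    # term frequency: occurrences of the word divided by total words in the document
    return counts[idx] / sum(counts)

def idf(idx):
    # inverse document frequency on a log scale:
    # log(total documents / documents containing the word)
    docs_with_word = sum(1 for counts in doc_counts.values() if counts[idx] > 0)
    return math.log(total_docs / docs_with_word)

# tf-idf of "cats" in 1.txt (shift the vocab index by one since we dropped the filename slot):
cats_idx = vocab['cats'] - 1
print(tf(doc_counts['1.txt'], cats_idx) * idf(cats_idx))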

Stop Words and Word Stems

Some words like "the" and "and" appear so frequently, and in so many documents, that we needn't bother counting them. Also, it may make sense to only record the root of a word, say cat in place of both cat and cats. This will shrink our vocab array and improve performance.
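
As a quick sketch of the idea (the stop word list and the suffix rule below are made up purely for illustration; real libraries ship much larger stop word lists and proper stemmers):

In [ ]:
# A tiny, made-up stop word list and a crude "stemmer" - illustration only
stop_words = {'this', 'is', 'a', 'about', 'our', 'are'}

def crude_stem(word):
    # naive rule: strip a trailing 's' (real stemmers are far more careful)
    return word[:-1] if word.endswith('s') else word

reduced_vocab = {crude_stem(w) for w in vocab if w not in stop_words}
print(reduced_vocab)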

Tokenization and Tagging

When we created our vectors the first thing we did was split the incoming text on whitespace with .split(). This was a crude form of tokenization - that is, dividing a document into individual words. In this simple example we didn't worry about punctuation or different parts of speech. In the real world we rely on some fairly sophisticated morphology to parse text appropriately.
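
For example, a slightly less crude approach than .split() is to pull out word characters with a regular expression; this is only a sketch, not what production tokenizers actually do:

In [ ]:
import re

text = "Cats are furry animals, aren't they?"

# .split() keeps punctuation attached to the words:
print(text.lower().split())                  # ['cats', 'are', 'furry', 'animals,', "aren't", 'they?']

# A simple regex keeps letters and apostrophes, dropping punctuation:
print(re.findall(r"[a-z']+", text.lower()))  # ['cats', 'are', 'furry', 'animals', "aren't", 'they']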

Once the text is divided, we can go back and tag our tokens with information about parts of speech, grammatical dependencies, etc. This adds more dimensions to our data and enables a deeper understanding of the context of specific documents. It also means that document vectors become very high dimensional, and the matrices built from them remain sparse.

**That's the end of the first section.**
In the next section we'll use scikit-learn to perform a real-life analysis.

Feature Extraction from Text

In the Scikit-learn Primer lecture we applied a simple SVC classification model to the SMSSpamCollection dataset. We tried to predict the ham/spam label based on message length and punctuation counts. In this section we'll actually look at the text of each message and try to perform a classification based on content. We'll take advantage of some of scikit-learn's feature extraction tools.

Load a dataset


In [9]:
# Perform imports and load the dataset:
import numpy as np
import pandas as pd

df = pd.read_csv('../TextFiles/smsspamcollection.tsv', sep='\t')
df.head()


Out[9]:
  label                                            message  length  punct
0   ham  Go until jurong point, crazy.. Available only ...     111      9
1   ham                      Ok lar... Joking wif u oni...      29      6
2  spam  Free entry in 2 a wkly comp to win FA Cup fina...     155      6
3   ham  U dun say so early hor... U c already then say...      49      6
4   ham  Nah I don't think he goes to usf, he lives aro...      61      2

Check for missing values:

Always a good practice.


In [10]:
df.isnull().sum()


Out[10]:
label      0
message    0
length     0
punct      0
dtype: int64

Take a quick look at the ham and spam label column:


In [11]:
df['label'].value_counts()


Out[11]:
ham     4825
spam     747
Name: label, dtype: int64

4825 out of 5572 messages, or 86.6%, are ham. This means that any text classification model we create has to perform **better than 86.6%** to beat the baseline of simply predicting "ham" for every message.
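
We can confirm this baseline directly with pandas (a quick check; normalize=True returns fractions instead of counts):

In [ ]:
# Predicting "ham" for every message would be right about 86.6% of the time:
df['label'].value_counts(normalize=True)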

Split the data into train & test sets:


In [12]:
from sklearn.model_selection import train_test_split

X = df['message']  # this time we want to look at the text
y = df['label']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

Scikit-learn's CountVectorizer

Text preprocessing, tokenizing and the ability to filter out stopwords are all included in CountVectorizer, which builds a dictionary of features and transforms documents to feature vectors.


In [13]:
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()

X_train_counts = count_vect.fit_transform(X_train)
X_train_counts.shape


Out[13]:
(3733, 7082)

This shows that our training set comprises 3733 documents and 7082 features (one for each unique word the vectorizer found).
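
If you're curious what those 7082 features actually are, the fitted CountVectorizer stores its vocabulary as a dict mapping each term to a column index (the exact terms and indices will depend on your train/test split):

In [ ]:
# Peek at a few (term, column index) pairs from the learned vocabulary:
list(count_vect.vocabulary_.items())[:5]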

Transform Counts to Frequencies with Tf-idf

While counting words is helpful, longer documents will have higher average count values than shorter documents, even though they might talk about the same topics.

To avoid this we can simply divide the number of occurrences of each word in a document by the total number of words in the document: these new features are called tf for Term Frequencies.

Another refinement on top of tf is to downscale weights for words that occur in many documents in the corpus and are therefore less informative than those that occur only in a smaller portion of the corpus.

This downscaling is called tf–idf for “Term Frequency times Inverse Document Frequency”.

Both tf and tf–idf can be computed as follows using TfidfTransformer:


In [14]:
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()

X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape


Out[14]:
(3733, 7082)

Note: the fit_transform() method actually performs two operations: it fits an estimator to the data and then transforms our count-matrix to a tf-idf representation.
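
In other words, the cell above is equivalent to calling fit() and transform() separately (shown here just for illustration):

In [ ]:
# Equivalent two-step version:
tfidf_transformer = TfidfTransformer()
tfidf_transformer.fit(X_train_counts)                        # learn idf weights from the counts
X_train_tfidf = tfidf_transformer.transform(X_train_counts)  # re-weight and normalize the counts
X_train_tfidf.shape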

Combine Steps with TfidfVectorizer

In the future, we can combine the CountVectorizer and TfidfTransformer steps into one using TfidfVectorizer:


In [15]:
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()

X_train_tfidf = vectorizer.fit_transform(X_train) # remember to use the original X_train set
X_train_tfidf.shape


Out[15]:
(3733, 7082)

Train a Classifier

Here we'll introduce an SVM classifier that's similar to SVC, called LinearSVC. LinearSVC handles sparse input better, and scales well to large numbers of samples.


In [16]:
from sklearn.svm import LinearSVC
clf = LinearSVC()
clf.fit(X_train_tfidf,y_train)


Out[16]:
LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,
     intercept_scaling=1, loss='squared_hinge', max_iter=1000,
     multi_class='ovr', penalty='l2', random_state=None, tol=0.0001,
     verbose=0)

Earlier we named our SVC classifier **svc_model**. Here we're using the more generic name **clf** (for classifier).

Build a Pipeline

Remember that only our training set has been vectorized into a full vocabulary. In order to perform an analysis on our test set we'll have to submit it to the same procedures. Fortunately scikit-learn offers a Pipeline class that behaves like a compound classifier.


In [17]:
from sklearn.pipeline import Pipeline
# from sklearn.feature_extraction.text import TfidfVectorizer
# from sklearn.svm import LinearSVC

text_clf = Pipeline([('tfidf', TfidfVectorizer()),
                     ('clf', LinearSVC()),
])

# Feed the training data through the pipeline
text_clf.fit(X_train, y_train)


Out[17]:
Pipeline(memory=None,
     steps=[('tfidf', TfidfVectorizer(analyzer='word', binary=False, decode_error='strict',
        dtype=<class 'numpy.float64'>, encoding='utf-8', input='content',
        lowercase=True, max_df=1.0, max_features=None, min_df=1,
        ngram_range=(1, 1), norm='l2', preprocessor=None, smooth_idf=True,...ax_iter=1000,
     multi_class='ovr', penalty='l2', random_state=None, tol=0.0001,
     verbose=0))])

Test the classifier and display results


In [18]:
# Form a prediction set
predictions = text_clf.predict(X_test)

In [19]:
# Report the confusion matrix
from sklearn import metrics
print(metrics.confusion_matrix(y_test,predictions))


[[1586    7]
 [  12  234]]

In [20]:
# Print a classification report
print(metrics.classification_report(y_test,predictions))


              precision    recall  f1-score   support

         ham       0.99      1.00      0.99      1593
        spam       0.97      0.95      0.96       246

   micro avg       0.99      0.99      0.99      1839
   macro avg       0.98      0.97      0.98      1839
weighted avg       0.99      0.99      0.99      1839


In [21]:
# Print the overall accuracy
print(metrics.accuracy_score(y_test,predictions))


0.989668297988

Using the text of the messages, our model performed exceedingly well; it classified messages correctly 98.97% of the time!
Now let's apply what we've learned to a text classification project involving positive and negative movie reviews.

Next up: Text Classification Project