This unit is divided into two sections: first we'll build a rudimentary NLP system using basic Python, then we'll use scikit-learn's feature extraction tools to classify real text messages.
In this first section we'll build a corpus of documents (two small text files), create a vocabulary from all the words in both documents, and then demonstrate a Bag of Words technique to extract features from each document.
In [1]:
%%writefile 1.txt
This is a story about cats
our feline pets
Cats are furry animals
In [2]:
%%writefile 2.txt
This story is about surfing
Catching waves is fun
Surfing is a popular water sport
In [3]:
# Build a vocabulary by assigning an index number to each unique word in 1.txt:
vocab = {}
i = 1

with open('1.txt') as f:
    x = f.read().lower().split()

for word in x:
    if word in vocab:
        continue
    else:
        vocab[word] = i
        i += 1

print(vocab)
In [4]:
# Add any new words from 2.txt to the same vocabulary:
with open('2.txt') as f:
    x = f.read().lower().split()

for word in x:
    if word in vocab:
        continue
    else:
        vocab[word] = i
        i += 1

print(vocab)
In [5]:
# Create an empty vector with space for each word in the vocabulary:
one = ['1.txt']+[0]*len(vocab)
one
Out[5]:
In [6]:
# Map the frequencies of each word in 1.txt to our vector:
with open('1.txt') as f:
    x = f.read().lower().split()

for word in x:
    one[vocab[word]] += 1

one
Out[6]:
We can see that most of the words in 1.txt appear only once, although "cats" appears twice.
In [7]:
# Do the same for the second document:
two = ['2.txt']+[0]*len(vocab)
with open('2.txt') as f:
    x = f.read().lower().split()

for word in x:
    two[vocab[word]] += 1
In [8]:
# Compare the two vectors:
print(f'{one}\n{two}')
By comparing the vectors we see that some words are common to both documents, some appear only in 1.txt, and others only in 2.txt. Extending this logic to tens of thousands of documents, we would see the vocabulary dictionary grow to hundreds of thousands of words. Vectors would contain mostly zero values, making them sparse matrices.
In the above examples, each vector can be considered a bag of words. By themselves these may not be very useful until we consider term frequencies, or how often individual words appear in documents. A simple way to calculate term frequency is to divide the number of occurrences of a word by the total number of words in the document. This way, word counts in large documents can be compared fairly with those in smaller documents.
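As a rough illustration (this snippet isn't part of the original notebook), term frequencies for 1.txt could be computed from the `one` vector built above:

# A minimal term-frequency sketch using the `one` vector built above.
# The first element is the document label, so we skip it.
counts = one[1:]
total_words = sum(counts)
tf_one = [count / total_words for count in counts]
print(tf_one)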
However, it may be hard to differentiate documents based on term frequency alone if a word shows up in a majority of documents. To handle this we also consider inverse document frequency, which is the total number of documents divided by the number of documents that contain the word. In practice we convert this value to a logarithmic scale (so that, for example, a word appearing in every document gets an idf of zero).
Together these terms become tf-idf.
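To make the idea concrete, here is a minimal, unsmoothed tf-idf sketch over our two-document corpus (not part of the original notebook; scikit-learn's TfidfTransformer, used later, adds smoothing and normalization, so its numbers will differ):

import math

docs = [one[1:], two[1:]]             # raw count vectors, labels stripped
n_docs = len(docs)

tfidf_vectors = []
for counts in docs:
    total = sum(counts)
    row = []
    for idx, count in enumerate(counts):
        tf = count / total                            # term frequency
        df = sum(1 for d in docs if d[idx] > 0)       # document frequency
        idf = math.log(n_docs / df)                   # inverse document frequency
        row.append(tf * idf)
    tfidf_vectors.append(row)

print(tfidf_vectors[0])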
When we created our vectors, the first thing we did was split the incoming text on whitespace with .split(). This was a crude form of tokenization - that is, dividing a document into individual words. In this simple example we didn't worry about punctuation or different parts of speech. In the real world we rely on some fairly sophisticated morphology to parse text appropriately.
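As a quick illustration of why whitespace splitting is crude (this snippet is not part of the original notebook), compare it with a simple regex-based tokenizer that at least separates punctuation:

import re

text = "Cats are furry animals, aren't they?"

# Crude whitespace tokenization keeps punctuation attached to words:
print(text.lower().split())
# ['cats', 'are', 'furry', 'animals,', "aren't", 'they?']

# A slightly better (but still simplistic) regex tokenizer:
print(re.findall(r"[a-z']+", text.lower()))
# ['cats', 'are', 'furry', 'animals', "aren't", 'they']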
Once the text is divided, we can go back and tag our tokens with information about parts of speech, grammatical dependencies, etc. This adds more dimensions to our data and enables a deeper understanding of the context of specific documents. For this reason, vectors become high dimensional sparse matrices.
In the Scikit-learn Primer lecture we applied a simple SVC classification model to the SMSSpamCollection dataset. We tried to predict the ham/spam label based on message length and punctuation counts. In this section we'll actually look at the text of each message and try to perform a classification based on content. We'll take advantage of some of scikit-learn's feature extraction tools.
In [9]:
# Perform imports and load the dataset:
import numpy as np
import pandas as pd
df = pd.read_csv('../TextFiles/smsspamcollection.tsv', sep='\t')
df.head()
Out[9]:
In [10]:
df.isnull().sum()
Out[10]:
In [11]:
df['label'].value_counts()
Out[11]:
4825 out of 5572 messages, or 86.6%, are ham. This means that any text classification model we create has to perform **better than 86.6% accuracy** to beat the naive baseline of always predicting "ham".
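As a quick sanity check of that baseline (just an illustrative one-liner):

# Fraction of each label; the 'ham' fraction is the accuracy of a model
# that always predicts ham:
df['label'].value_counts(normalize=True)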
In [12]:
from sklearn.model_selection import train_test_split
X = df['message'] # this time we want to look at the text
y = df['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
Text preprocessing, tokenizing and the ability to filter out stopwords are all included in CountVectorizer, which builds a dictionary of features and transforms documents to feature vectors.
In [13]:
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
X_train_counts.shape
Out[13]:
This shows that our training set comprises 3733 documents and 7082 features (one feature per unique word in the training vocabulary).
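If you're curious what those 7082 features look like, CountVectorizer stores the fitted vocabulary in its vocabulary_ attribute, a mapping from word to column index (shown here just as an illustration):

# Peek at a few entries of the learned vocabulary (word -> column index):
list(count_vect.vocabulary_.items())[:10]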
While counting words is helpful, longer documents will have higher average count values than shorter documents, even though they might talk about the same topics.
To avoid this we can simply divide the number of occurrences of each word in a document by the total number of words in the document: these new features are called tf for Term Frequencies.
Another refinement on top of tf is to downscale weights for words that occur in many documents in the corpus and are therefore less informative than those that occur only in a smaller portion of the corpus.
This downscaling is called tf–idf for “Term Frequency times Inverse Document Frequency”.
Both tf and tf–idf can be computed as follows using TfidfTransformer:
In [14]:
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
Out[14]:
Note: the fit_transform() method actually performs two operations: it fits an estimator to the data and then transforms our count-matrix to a tf-idf representation.
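Equivalently, the two steps could be performed separately (shown here purely to illustrate; the combined call above is what the notebook uses, and new variable names are used to avoid clobbering the ones defined earlier):

# fit() learns the idf weights from the count matrix;
# transform() then applies them to produce the tf-idf matrix.
transformer = TfidfTransformer()
transformer.fit(X_train_counts)
X_train_tfidf_alt = transformer.transform(X_train_counts)
X_train_tfidf_alt.shape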
In the future, we can combine the CountVectorizer and TfidfTransformer steps into one using TfidfVectorizer:
In [15]:
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
X_train_tfidf = vectorizer.fit_transform(X_train) # remember to use the original X_train set
X_train_tfidf.shape
Out[15]:
Here we'll introduce an SVM classifier that's similar to SVC, called LinearSVC. LinearSVC handles sparse input better, and scales well to large numbers of samples.
In [16]:
from sklearn.svm import LinearSVC
clf = LinearSVC()
clf.fit(X_train_tfidf,y_train)
Out[16]:
Earlier we named our SVC classifier **svc_model**. Here we're using the more generic name **clf** (for classifier).
Remember that only our training set has been vectorized into a full vocabulary. In order to perform an analysis on our test set we'll have to submit it to the same procedures. Fortunately scikit-learn offers a Pipeline class that behaves like a compound classifier.
In [17]:
from sklearn.pipeline import Pipeline
# from sklearn.feature_extraction.text import TfidfVectorizer
# from sklearn.svm import LinearSVC
text_clf = Pipeline([('tfidf', TfidfVectorizer()),
                     ('clf', LinearSVC()),
])
# Feed the training data through the pipeline
text_clf.fit(X_train, y_train)
Out[17]:
In [18]:
# Form a prediction set
predictions = text_clf.predict(X_test)
In [19]:
# Report the confusion matrix
from sklearn import metrics
print(metrics.confusion_matrix(y_test,predictions))
In [20]:
# Print a classification report
print(metrics.classification_report(y_test,predictions))
In [21]:
# Print the overall accuracy
print(metrics.accuracy_score(y_test,predictions))
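Because the pipeline handles vectorization internally, it can also classify raw, unseen messages directly. For example (the sample messages below are made up for illustration):

# Classify a couple of made-up messages with the trained pipeline:
print(text_clf.predict(["Hey, are we still meeting for lunch today?"]))
print(text_clf.predict(["Congratulations! You have been selected to win a FREE prize. Call now!"]))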