Let's apply what we have learned so far to a real case study.
Many people express opinions on the internet and social media sites.
Such opinions are a rich source of information for many applications. In this case study, we will:
Apply natural language processing (NLP), in particular sentiment analysis, to movie reviews
Preprocess the data
Train a machine learning model to classify positive and negative movie reviews
Work with large text datasets using out-of-core learning
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
In [1]:
%load_ext watermark
%watermark -a '' -u -d -v -p numpy,pandas,matplotlib,sklearn,nltk
The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark.
The IMDB movie review set can be downloaded from http://ai.stanford.edu/~amaas/data/sentiment/.
It consists of 50,000 movie reviews, manually labeled as positive or negative, for classification.
In [2]:
# version check for the scikit-learn 0.18 API changes below
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
If you have problems creating the movie_data.csv
file in the previous chapter, you can download a zip archive at
https://github.com/1iyiwei/pyml/tree/master/code/datasets/movie
In [3]:
import urllib.request
import os

# the file we eventually need to access
csv_filename = 'movie_data.csv'

# a global variable to select data source: local or remote
data_source = 'local'

if data_source == 'local':
    #basepath = '/Users/Sebastian/Desktop/aclImdb/'
    basepath = '../datasets/movie/'
    zip_filename = 'movie_data.csv.zip'
else:  # remote
    url = 'http://ai.stanford.edu/~amaas/data/sentiment/'
    basepath = '.'
    zip_filename = 'aclImdb_v1.tar.gz'
    remote_file = os.path.join(url, zip_filename)

local_file = os.path.join(basepath, zip_filename)
csv_file = os.path.join(basepath, csv_filename)

if not os.path.isfile(csv_file) and not os.path.isfile(local_file):
    urllib.request.urlretrieve(remote_file, local_file)
After downloading the dataset, decompress the files.
A) If you are working with Linux or MacOS X, open a new terminal window, cd
into the download directory, and execute
tar -zxf aclImdb_v1.tar.gz
or
tar -xvzf aclImdb_v1.tar.gz
for verbose mode.
B) If you are working with Windows, download an archiver such as 7Zip to extract the files from the download archive.
C) The code below decompresses directly via Python.
In [4]:
# The code below decompresses directly via Python.
import os
import zipfile
import tarfile

# change the `basepath` to the directory of the
# unzipped movie dataset
csv_file = os.path.join(basepath, csv_filename)
zip_file = os.path.join(basepath, zip_filename)

if not os.path.isfile(csv_file):
    if tarfile.is_tarfile(zip_file):
        with tarfile.open(zip_file, "r") as tartar:
            tartar.extractall(basepath)
    else:
        with zipfile.ZipFile(zip_file, "r") as zipper:
            zipper.extractall(basepath)
I received an email from a reader who was having trouble reading the movie review texts due to encoding issues. Typically, Python's default encoding is set to 'utf-8', which shouldn't cause trouble when running this IPython notebook. You can simply check the encoding on your machine by firing up a new Python interpreter from the command line terminal and executing
>>> import sys
>>> sys.getdefaultencoding()
If the returned result is not 'utf-8', you probably need to change your Python's encoding to 'utf-8', for example by typing export PYTHONIOENCODING=utf8
in your terminal shell prior to running this IPython notebook. (Note that this is a temporary change, and it needs to be executed in the same shell that you'll use to launch ipython notebook.)
Alternatively, you can replace the lines
with open(os.path.join(path, file), 'r') as infile:
...
pd.read_csv('./movie_data.csv')
...
df.to_csv('./movie_data.csv', index=False)
by
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
...
pd.read_csv('./movie_data.csv', encoding='utf-8')
...
df.to_csv('./movie_data.csv', index=False, encoding='utf-8')
in the following cells to achieve the desired effect.
In [5]:
import pyprind
import pandas as pd
import os

db_path = 'aclImdb'

if not os.path.isfile(csv_file):
    labels = {'pos': 1, 'neg': 0}
    pbar = pyprind.ProgBar(50000)
    df = pd.DataFrame()
    for s in ('test', 'train'):
        for l in ('pos', 'neg'):
            path = os.path.join(db_path, s, l)
            for file in os.listdir(path):
                with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
                    txt = infile.read()
                df = df.append([[txt, labels[l]]], ignore_index=True)
                pbar.update()
    df.columns = ['review', 'sentiment']
Optional: save the assembled data as a CSV file:
In [6]:
if not os.path.isfile(csv_file):
df.to_csv(os.path.join(basepath, csv_filename), index=False, encoding='utf-8')
Read the DataFrame back from the file, whether local or remote.
In [7]:
import pandas as pd
df = pd.read_csv(os.path.join(basepath, csv_filename), encoding='utf-8')
Shuffling the DataFrame:
In [8]:
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
In [9]:
# first few entries
df.head(3)
Out[9]:
In [10]:
# a complete review
print(df.values[0])
Movie reviews vary in length.
We need to convert the dataset into numerical form.
Bag-of-words: represent text as numerical feature vectors.
The feature vectors are sparse since most of the entries are $0$.
By calling the fit_transform method on CountVectorizer below, we construct the vocabulary of the bag-of-words model and transform the following three sentences into sparse feature vectors:
In [11]:
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

count = CountVectorizer()
docs = np.array([
    'The sun is shining',
    'The weather is sweet',
    'The sun is shining, the weather is sweet, and one and one is two'])

# fixed-dimension features we can use for machine learning
bag = count.fit_transform(docs)
Print the contents of the vocabulary to get a better understanding of the underlying concepts:
In [12]:
# the dictionary trained from the document data
print(count.vocabulary_)
The vocabulary is stored in a Python dictionary
In [13]:
# convert from sparse matrix to dense array
# fixed-dimension feature vectors
print(bag.toarray())
Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary.
For example, the 1st feature at index position 0 is the count of the word 'and', which only occurs in the last document; the word 'is', at index position 1 (the 2nd feature in the document vectors), occurs in all three sentences.
Those values in the feature vectors are also called the raw term frequencies: tf (t,d)—the number of times a term t occurs in a document d.
Term-frequency (tf) alone is not enough.
Also consider inverse document frequency (idf)
The tf-idf can be defined as the product of the term frequency and the inverse document frequency: $$\text{tf-idf}(t,d)=\text{tf}(t,d)\times \text{idf}(t,d)$$
Here tf(t, d) is the term frequency introduced above, and the inverse document frequency $\text{idf}(t, d)$ can be calculated as: $$\text{idf}(t,d) = \log\frac{n_d}{1+\text{df}(d, t)}$$ where $n_d$ is the total number of documents and $\text{df}(d, t)$ is the number of documents $d$ that contain the term $t$.
$idf$ gives higher weights to rarer words.
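As a quick illustration, here is a minimal sketch (using only numpy and the plain idf formula above, not scikit-learn's smoothed variant) that computes the idf weights of a common and a rare word from the three toy documents above; the rarer word receives the higher weight:

import numpy as np

n_d = 3                 # total number of documents in the toy corpus
df_is, df_and = 3, 1    # 'is' occurs in all 3 documents, 'and' only in the last one

idf_is = np.log(n_d / (1 + df_is))    # log(3/4) ≈ -0.29
idf_and = np.log(n_d / (1 + df_and))  # log(3/2) ≈  0.41

print('idf("is")  = %.2f' % idf_is)
print('idf("and") = %.2f' % idf_and)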
Note
Scikit-learn implements yet another transformer, the TfidfTransformer
, that takes the raw term frequencies from CountVectorizer
as input and transforms them into tf-idfs:
In [14]:
np.set_printoptions(precision=2)
In [15]:
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
As we saw in the previous subsection, the word 'is' had the largest term frequency in the 3rd document, being the most frequently occurring word.
However, after transforming the same feature vector into tf-idfs, we see that the word 'is' is now associated with a relatively small tf-idf (0.45) in document 3, since it is also contained in documents 1 and 2 and is thus unlikely to contain any useful discriminatory information.
The equations for the idf and tf-idf that are implemented in scikit-learn are:
$$\text{idf}(t,d) = \log\frac{1 + n_d}{1 + \text{df}(d, t)}$$
$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$
While it is more typical to normalize the raw term frequencies before calculating the tf-idfs, the TfidfTransformer normalizes the tf-idfs directly.
By default (norm='l2'), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector $v$ by its L2-norm:
$$v_{\text{norm}} = \frac{v}{\|v\|_2} = \frac{v}{\sqrt{v_1^2 + v_2^2 + \dots + v_n^2}}$$
To make sure that we understand how TfidfTransformer works, let us walk through an example and calculate the tf-idf of the word is in the 3rd document.
The word 'is' has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term 'is' occurs in all three documents (df = 3). Thus, using the smoothed formula above, we can calculate the idf as follows:
$$\text{idf}("is", d3) = \log\frac{1+3}{1+3} = 0$$
Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:
$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
In [16]:
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29].
However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously.
The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows:
$$\text{tf-idf}(d3)_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}} \approx [0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$
so the word 'is' ends up with a tf-idf of 0.45 in document 3, as reported earlier.
As we can see, the results match the results returned by scikit-learn's TfidfTransformer (below).
In [17]:
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True) # notice norm is None not l2
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1] # for the last document
raw_tfidf
Out[17]:
In [18]:
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
Out[18]:
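As a cross-check (a minimal sketch; sklearn.preprocessing.normalize applies the same row-wise L2-normalization), we can let scikit-learn normalize the raw tf-idf vector and compare it against the manual division above:

from sklearn.preprocessing import normalize

# normalize() expects a 2D array, so reshape the raw tf-idf vector to a single row
l2_tfidf_check = normalize(raw_tfidf.reshape(1, -1), norm='l2')[0]
print(np.allclose(l2_tfidf, l2_tfidf_check))  # True: both routes compute v / ||v||_2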
In [19]:
df.loc[0, 'review'][-50:]
Out[19]:
Remove all punctuation except for emoticons, which convey sentiment.
In [20]:
import re

def preprocessor(text):
    # [] for a set of characters, ^ inside [] means invert, i.e. not > below
    # * means 0 or more occurrences of the pattern
    text = re.sub('<[^>]*>', '', text)  # remove html tags between pairs of < and >
    # () for a group, a subpart of the whole pattern we look for
    # findall will return tuples each containing groups
    # (?:) means not returning the group result for findall
    # | means or, \ for escape sequence
    # first group eye: : or ; or =
    # second group nose: - 0 or 1 time via ?
    # third group mouth: ) or ( or D or P
    emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
    # matching examples:
    # :-)
    # =D
    # convert to lower case as upper/lower case doesn't matter for sentiment
    # replace all non-word characters by space
    # \w: letters, digits, _
    # \W: the complement set
    text = re.sub('[\W]+', ' ', text.lower())
    # add back emoticons, though in a different order
    # and without the nose "-", e.g. :) and :-) are considered the same
    text = text + ' '.join(emoticons).replace('-', '')
    return text
In [21]:
preprocessor(df.loc[0, 'review'][-50:])
Out[21]:
In [22]:
preprocessor("</a>This :) is :( a test :-)!")
Out[22]:
Emoticons are moved to the end; ordering doesn't matter for 1-gram analysis, as the sketch below shows.
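A tiny sketch of why the ordering is harmless for 1-grams (the strings and the token_pattern below are made up just for this demo): CountVectorizer produces identical bag-of-words vectors no matter where the emoticon ends up:

from sklearn.feature_extraction.text import CountVectorizer

cv = CountVectorizer(token_pattern=r'[^\s]+')  # keep emoticons as tokens for this demo
bags = cv.fit_transform(['this is a test :)', ':) this is a test']).toarray()
print((bags[0] == bags[1]).all())  # True: same counts, order ignored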
In [23]:
df['review'] = df['review'].apply(preprocessor)
Tokenization: split a text into its constituent components, e.g. words for documents.
Stemming: transform a word into its root form.
In [24]:
from nltk.stem.porter import PorterStemmer

porter = PorterStemmer()

def tokenizer(text):
    # split along white spaces
    return text.split()

def tokenizer_porter(text):
    return [porter.stem(word) for word in text.split()]
In [25]:
tokenizer('runners like running and thus they run')
Out[25]:
In [26]:
tokenizer_porter('runners like running and thus they run')
Out[26]:
In [27]:
import nltk
nltk.download('stopwords')
Out[27]:
In [28]:
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
Out[28]:
In [29]:
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
In [30]:
# Use a smaller subset if the full dataset takes too long to run
train_subset_size = 2500
test_subset_size = 2500

#print(X_train.shape)

if train_subset_size > 0:
    X_train = X_train[:train_subset_size]
    y_train = y_train[:train_subset_size]
if test_subset_size > 0:
    X_test = X_test[:test_subset_size]
    y_test = y_test[:test_subset_size]

#print(X_train.shape)
In [31]:
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer

if Version(sklearn_version) < '0.18':
    from sklearn.grid_search import GridSearchCV
else:
    from sklearn.model_selection import GridSearchCV

tfidf = TfidfVectorizer(strip_accents=None,
                        lowercase=False,
                        preprocessor=None)

param_grid = [{'vect__ngram_range': [(1, 1)],
               'vect__stop_words': [stop, None],
               'vect__tokenizer': [tokenizer, tokenizer_porter],
               'clf__penalty': ['l1', 'l2'],
               'clf__C': [1.0, 10.0, 100.0]},
              {'vect__ngram_range': [(1, 1)],
               'vect__stop_words': [stop, None],
               'vect__tokenizer': [tokenizer, tokenizer_porter],
               'vect__use_idf': [False],
               'vect__norm': [None],
               'clf__penalty': ['l1', 'l2'],
               'clf__C': [1.0, 10.0, 100.0]},
              ]

lr_tfidf = Pipeline([('vect', tfidf),
                     ('clf', LogisticRegression(random_state=0))])

gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
                           scoring='accuracy',
                           cv=5,
                           verbose=1,
                           n_jobs=1)
In [32]:
gs_lr_tfidf.fit(X_train, y_train)
Out[32]:
In [33]:
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
The CV accuracy and test accuracy will be a bit lower if we use only a subset of the data, but they are still reasonable.
In [34]:
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
Please note that gs_lr_tfidf.best_score_ is the average k-fold cross-validation score. I.e., if we have a GridSearchCV object with 5-fold cross-validation (like the one above), the best_score_ attribute returns the average score over the 5 folds of the best model. To illustrate this with an example:
In [35]:
from sklearn.linear_model import LogisticRegression
import numpy as np

if Version(sklearn_version) < '0.18':
    from sklearn.cross_validation import StratifiedKFold
    from sklearn.cross_validation import cross_val_score
else:
    from sklearn.model_selection import StratifiedKFold
    from sklearn.model_selection import cross_val_score

np.random.seed(0)
np.set_printoptions(precision=6)

y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)

if Version(sklearn_version) < '0.18':
    cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
else:
    cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))

cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
Out[35]:
By executing the code above, we created a simple data set of random integers that represent our class labels. Next, we fed the indices of 5 cross-validation folds (cv5_idx) to the cross_val_score scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds.
Next, let us use the GridSearchCV object and feed it the same 5 cross-validation sets (via the pre-generated cv5_idx indices):
In [36]:
if Version(sklearn_version) < '0.18':
    from sklearn.grid_search import GridSearchCV
else:
    from sklearn.model_selection import GridSearchCV

gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
As we can see, the scores for the 5 folds are exactly the same as the ones from cross_val_score
earlier.
Now, the best_score_ attribute of the GridSearchCV object, which becomes available after fitting, returns the average accuracy score of the best model:
In [37]:
gs.best_score_
Out[37]:
As we can see, the result above is consistent with the average score computed by cross_val_score:
In [38]:
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
Out[38]:
Naive Bayes classifiers are also popular for text classification, e.g. spam filtering.
See http://sebastianraschka.com/Articles/2014_naive_bayes_1.html for more details.
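As an aside, here is a minimal sketch (not part of the grid-search pipeline above) showing how a multinomial naive Bayes model drops into the same bag-of-words setup; CountVectorizer and MultinomialNB are scikit-learn classes, and the rest is just example wiring using the X_train/X_test text arrays defined earlier:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# bag-of-words counts followed by a multinomial naive Bayes classifier
nb_pipeline = Pipeline([('vect', CountVectorizer()),
                        ('clf', MultinomialNB())])
nb_pipeline.fit(X_train, y_train)
print('Naive Bayes test accuracy: %.3f' % nb_pipeline.score(X_test, y_test))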
The grid-search in the previous section is quite computationally expensive.
But real world datasets can be much larger!
Out-of-core learning can help us deal with large datasets without super-computers.
SGDClassifier: stochastic gradient descent classifier
In [39]:
import numpy as np
import re
from nltk.corpus import stopwords

def tokenizer(text):
    text = re.sub('<[^>]*>', '', text)
    emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
    text = re.sub('[\W]+', ' ', text.lower()) \
           + ' '.join(emoticons).replace('-', '')
    tokenized = [w for w in text.split() if w not in stop]
    return tokenized

def stream_docs(path):
    with open(path, 'r', encoding='utf-8') as csv:
        next(csv)  # skip header
        for line in csv:
            text, label = line[:-3], int(line[-2])
            yield text, label
In [40]:
next(stream_docs(path=csv_file))
Out[40]:
In [41]:
def get_minibatch(doc_stream, size):
    docs, y = [], []
    try:
        for _ in range(size):
            text, label = next(doc_stream)
            docs.append(text)
            y.append(label)
    except StopIteration:
        return None, None
    return docs, y
CountVectorizer holds the complete vocabulary in memory.
TfidfVectorizer keeps all the training data in memory.
HashingVectorizer comes to the rescue: it is stateless, hashing words directly into feature indices, so it needs neither a stored vocabulary nor the full training set, as sketched below.
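A quick sketch of why this enables out-of-core learning: because HashingVectorizer is stateless, transform can be called on any mini-batch without fitting on, or even seeing, the rest of the corpus (the toy strings below are made up for illustration):

from sklearn.feature_extraction.text import HashingVectorizer

# no fit() needed: each word is mapped to a column index by a hash function
hv = HashingVectorizer(n_features=2**10)
X_batch = hv.transform(['the sun is shining', 'the weather is sweet'])
print(X_batch.shape)  # (2, 1024) sparse matrix, independent of vocabulary size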
In [42]:
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vect = HashingVectorizer(decode_error='ignore',
                         n_features=2**21,  # large enough to minimize hash collisions
                         preprocessor=None,
                         tokenizer=tokenizer)

# logistic regression loss
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path=csv_file)
In [43]:
# full size
num_batches = 45
batch_size = 1000
test_size = 5000
In [44]:
# use a smaller subset if the full set takes too long to run
batch_size = 100
test_size = 500
In [45]:
import pyprind

pbar = pyprind.ProgBar(num_batches)

classes = np.array([0, 1])
for _ in range(num_batches):
    X_train, y_train = get_minibatch(doc_stream, size=batch_size)
    if not X_train:
        break
    X_train = vect.transform(X_train)
    clf.partial_fit(X_train, y_train, classes=classes)
    pbar.update()
In [46]:
X_test, y_test = get_minibatch(doc_stream, size=test_size)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
In [47]:
# optionally, update the model with the last batch of (test) documents as well
clf = clf.partial_fit(X_test, y_test)