In [1]:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -d -v -p scikit-learn,joblib,numpy,nltk
When we apply machine learning algorithms to real-world applications, our computer hardware often still constitutes the major bottleneck of the learning process. Of course, we all have access to supercomputers, Amazon EC2, Apache Spark, etc. However, out-of-core learning via stochastic gradient descent can still be attractive if we want to update our model on the fly ("online learning"), and in this notebook, I want to provide some examples of how we can implement such an "out-of-core" approach using scikit-learn. I compiled the following code examples for personal reference; I don't intend it to be a comprehensive treatment of the underlying theory, but I decided to share it since it may be useful to others!
In this section, we will train a simple logistic regression model to classify movie reviews from the 50k IMDb review dataset that was collected by Maas et al.
A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics.
[Source: http://ai.stanford.edu/~amaas/data/sentiment/]
The dataset consists of 50,000 movie reviews from the original "train" and "test" subdirectories. The class labels are binary (1 = positive, 0 = negative), with 25,000 positive and 25,000 negative movie reviews. For simplicity, I assembled the reviews in a single CSV file.
In [2]:
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/rasbt/pattern_classification/master/data/50k_imdb_movie_reviews.csv')
df.tail()
Out[2]:
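As a quick, optional sanity check, we can verify that the dataset is indeed balanced with 25,000 reviews per class; the sentiment column name follows the CSV we just loaded:

In [ ]:
# count the number of positive (1) and negative (0) reviews
df['sentiment'].value_counts()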
In the following sections, we will define some simple functions to process the text data and read it from the CSV file in minibatches to train a logistic regression classifier via stochastic gradient descent. However, before we proceed to the next section, let us shuffle the rows of the dataset.
In [3]:
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
df[['review', 'sentiment']].to_csv('/Users/sebastian/Desktop/shuffled_movie_data.csv', index=False)
Now, let us define a simple tokenizer that splits the text into individual word tokens. Furthermore, we will use some simple regular expressions to remove HTML markup and all non-letter characters except "emoticons," convert the text to lower case, remove stopwords, and apply the Porter stemming algorithm to convert the words into their root form.
In [4]:
import re
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

stop = stopwords.words('english')
porter = PorterStemmer()

def tokenizer(text):
    # strip HTML markup
    text = re.sub('<[^>]*>', '', text)
    # keep emoticons such as :) and :-(
    emoticons = re.findall(r'(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
    # remove non-word characters, lowercase, and re-append the emoticons
    text = re.sub(r'[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
    # remove stopwords and apply Porter stemming
    text = [w for w in text.split() if w not in stop]
    tokenized = [porter.stem(w) for w in text]
    return tokenized
Let's give it a try:
In [5]:
tokenizer('This :) is a <a> test! :-)</br>')
Out[5]:
First, we define a generator that returns the document body and the corresponding class label:
In [6]:
def stream_docs(path):
    with open(path, 'r') as csv:
        next(csv)  # skip the header line
        for line in csv:
            # the class label is the last character before the newline, e.g., "...",1\n
            text, label = line[:-3], int(line[-2])
            yield text, label
To confirm that the stream_docs function fetches the documents as intended, let us execute the following code snippet before we implement the get_minibatch function:
In [7]:
next(stream_docs(path='/Users/sebastian/Desktop/shuffled_movie_data.csv'))
Out[7]:
After we have confirmed that our stream_docs function works, we will now implement a get_minibatch function to fetch a specified number (size) of documents:
In [8]:
def get_minibatch(doc_stream, size):
    docs, y = [], []
    for _ in range(size):
        text, label = next(doc_stream)
        docs.append(text)
        y.append(label)
    return docs, y
Next, we will make use of the "hashing trick" through scikit-learn's HashingVectorizer to create a bag-of-words model of our documents. I don't want to go into the details of the bag-of-words model for document classification here, but if you are interested, you can take a look at one of my articles, Naive Bayes and Text Classification I - Introduction and Theory, where I explain the concepts behind bag-of-words, tokenization, stemming, etc.
In [12]:
from sklearn.feature_extraction.text import HashingVectorizer

vect = HashingVectorizer(decode_error='ignore',
                         n_features=2**21,
                         preprocessor=None,
                         tokenizer=tokenizer)
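As a quick, optional check (the toy string below is just for illustration), transforming a single document should yield a sparse row vector with 2**21 columns, almost all of which are zero:

In [ ]:
example_vec = vect.transform(['This :) is a test'])
print(example_vec.shape)  # expected: (1, 2097152)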
Using the SGDClassifier from scikit-learn, we can instantiate a logistic regression classifier that learns from the documents incrementally using stochastic gradient descent. If you are curious about how this optimization algorithm works, please see my article on artificial neurons.
In [13]:
from sklearn.linear_model import SGDClassifier
# note: older scikit-learn versions spelled this loss='log' and accepted an n_iter parameter
clf = SGDClassifier(loss='log_loss', random_state=1)
doc_stream = stream_docs(path='/Users/sebastian/Desktop/shuffled_movie_data.csv')
In [14]:
import pyprind
pbar = pyprind.ProgBar(45)

classes = np.array([0, 1])
for _ in range(45):
    X_train, y_train = get_minibatch(doc_stream, size=1000)
    X_train = vect.transform(X_train)
    clf.partial_fit(X_train, y_train, classes=classes)
    pbar.update()
Depending on your machine, it will take about 2-3 minutes to stream the documents and learn the weights for the logistic regression model to classify "new" movie reviews. By executing the preceding code, we used the first 45,000 movie reviews (45 minibatches of 1,000 documents each) to train the classifier, which leaves 5,000 reviews for testing:
In [16]:
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
I think that the predictive performance, an accuracy of ~87%, is quite "reasonable" given that we "only" used the default parameters and didn't do any hyperparameter optimization.
After we have estimated the model performance, let us use those last 5,000 test samples to update our model.
In [17]:
clf = clf.partial_fit(X_test, y_test)
In the previous section, we successfully trained a model to predict the sentiment of a movie review. Unfortunately, if we closed this IPython notebook at this point, we would have to go through the whole learning process again every time we wanted to make a prediction on "new data."
So, to reuse this model, we could use the pickle module to "serialize a Python object structure". Or even better, we could use the joblib library, which handles large NumPy arrays more efficiently.
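For reference, the plain-pickle variant would look roughly like the following sketch (the file name here is just for illustration; we will use joblib below):

In [ ]:
import pickle

# minimal sketch of the plain-pickle alternative; joblib is used in the next cell instead
with open('./clf_pickle.pkl', 'wb') as f:
    pickle.dump(clf, f, protocol=pickle.HIGHEST_PROTOCOL)

with open('./clf_pickle.pkl', 'rb') as f:
    clf_reloaded = pickle.load(f)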
In [18]:
import joblib
import os

# create the output directory that the dump/load calls below rely on
if not os.path.exists('./outofcore_modelpersistence'):
    os.mkdir('./outofcore_modelpersistence')

joblib.dump(vect, './outofcore_modelpersistence/vectorizer.pkl')
joblib.dump(clf, './outofcore_modelpersistence/clf.pkl')
Out[18]:
Using the code above, we "pickled" the HashingVectorizer and the SGDClassifier so that we can re-use those objects later. However, pickle and joblib have a known issue with pickling objects or functions defined in a __main__ block, and we would get an AttributeError: Can't get attribute [x] on <module '__main__'> if we tried to unpickle them later. Thus, to pickle the tokenizer function, we can write it to a file and import it from there to get the namespace "right."
In [5]:
%%writefile tokenizer.py
import re
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

stop = stopwords.words('english')
porter = PorterStemmer()

def tokenizer(text):
    text = re.sub('<[^>]*>', '', text)
    emoticons = re.findall(r'(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
    text = re.sub(r'[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
    text = [w for w in text.split() if w not in stop]
    tokenized = [porter.stem(w) for w in text]
    return tokenized
In [8]:
from tokenizer import tokenizer
joblib.dump(tokenizer, './outofcore_modelpersistence/tokenizer.pkl')
Out[8]:
Now, let us restart this IPython notebook and check whether we can load our serialized objects:
In [10]:
import joblib
tokenizer = joblib.load('./outofcore_modelpersistence/tokenizer.pkl')
vect = joblib.load('./outofcore_modelpersistence/vectorizer.pkl')
clf = joblib.load('./outofcore_modelpersistence/clf.pkl')
After loading the tokenizer, HashingVectorizer, and the trained logistic regression model, we can use them to make predictions on new data, which can be useful, for example, if we want to embed our classifier into a web application -- a topic for another IPython notebook.
In [13]:
example = ['I did not like this movie']
X = vect.transform(example)
clf.predict(X)
Out[13]:
In [14]:
example = ['I loved this movie']
X = vect.transform(example)
clf.predict(X)
Out[14]:
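Since we trained the SGDClassifier with a logistic loss, we can also ask for class-membership probabilities; a small optional example for the last input (the exact values depend on the learned weights):

In [ ]:
# class-membership probabilities for the last example, available because of the logistic loss
clf.predict_proba(X)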
In [ ]: