News Group Classification w/ Scikit-Learn

Adapted from here, courtesy of the Scikit-Learn folks:

http://scikit-learn.org/stable/auto_examples/document_classification_20newsgroups.html

License: BSD 3 clause:

Copyright (c) <year>, <copyright holder>
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in the
      documentation and/or other materials provided with the distribution.
    * Neither the name of the <organization> nor the
      names of its contributors may be used to endorse or promote products
      derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Some preliminary imports:


In [1]:
import numpy as np
import sys
from time import time
import matplotlib.pyplot as plt

Load a few categories from the 20 Newsgroups dataset. We fetch the full dataset (not just the training split) and strip headers, footers, and quoted replies so the classifiers can't key on metadata:


In [2]:
categories = [
    'alt.atheism',
    'talk.religion.misc',
    'comp.graphics',
    'sci.space',
]
remove = ('headers', 'footers', 'quotes')  # strip metadata the classifiers could otherwise exploit

In [3]:
from sklearn.datasets import fetch_20newsgroups
data = fetch_20newsgroups(subset='all', categories=categories, shuffle=True, remove=remove)

In [4]:
categories = data.target_names
categories


Out[4]:
['alt.atheism', 'comp.graphics', 'sci.space', 'talk.religion.misc']

Let's take a peek at how much data we've got:


In [5]:
def size_mb(docs):
    # total size of the documents in megabytes (UTF-8 encoded)
    return sum(len(s.encode('utf-8')) for s in docs) / 1e6

data_size_mb = size_mb(data.data)

print "%d documents - %0.3fMB" % (len(data.data), data_size_mb)
print "%d categories" % len(categories)


3387 documents - 4.228MB
4 categories
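
It can also help to peek at the class balance and at one raw post; this is a quick sketch added here, not part of the original run:


In [ ]:
print zip(categories, np.bincount(data.target))  # documents per category
print data.data[0][:300]                          # first 300 characters of one raw post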

In [6]:
y = data.target  # integer class labels (indices into data.target_names)

Extract tf-idf features from the documents using a sparse vectorizer:


In [7]:
from sklearn.feature_extraction.text import TfidfVectorizer
#from sklearn.feature_extraction.text import HashingVectorizer
t0 = time()

#vectorizer = HashingVectorizer(stop_words='english', non_negative=True, n_features=5000)
#X_train = vectorizer.transform(data_train.data)
# sublinear_tf replaces raw counts with 1 + log(tf); max_df=0.5 drops terms that appear in more than half the documents
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')
X = vectorizer.fit_transform(data.data)

duration = time() - t0
print "done in %fs at %0.3fMB/s" % (duration, data_size_mb / duration)
print "n_samples: %d, n_features: %d" % X.shape


done in 0.949662s at 4.452MB/s
n_samples: 3387, n_features: 33530
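
X is stored as a sparse matrix, so only the non-zero tf-idf weights take up memory. As a rough sanity check (a sketch, not part of the original run) we can look at how sparse it actually is:


In [ ]:
# number of stored entries, and the fraction of the matrix that is non-zero
X.nnz, X.nnz / float(X.shape[0] * X.shape[1])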

In [8]:
# mapping from feature (column) index to the original token string
# (the hashing vectorizer has no such mapping):
#feature_names = None
feature_names = np.asarray(vectorizer.get_feature_names())

In [9]:
len(feature_names), feature_names[:5]


Out[9]:
(33530,
 array([u'00', u'000', u'0000', u'00000', u'000000'], 
      dtype='<U80'))
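
To get a feel for what the vectorizer produced, we can pull out the highest-weighted terms for a single document (a sketch using the objects defined above, not part of the original run):


In [ ]:
row = X[0].toarray().ravel()    # tf-idf weights for the first document
top = row.argsort()[::-1][:10]  # indices of the ten largest weights
zip(feature_names[top], row[top])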

In [10]:
from sklearn import metrics

def benchmark(clf, X_train, y_train, X_test, y_test):
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    score = metrics.f1_score(y_test, pred)  # multiclass F1 (weighted average in this sklearn version)
    #print "f1-score:   %0.3f" % score
    #print metrics.classification_report(y_test, pred, target_names=categories)
    #print metrics.confusion_matrix(y_test, pred)
    return score
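
Before sweeping over corpus sizes, a single train/test split is a quick way to sanity-check that benchmark behaves as expected (a sketch; train_test_split lives in sklearn.cross_validation in the sklearn version used here, sklearn.model_selection in later releases):


In [ ]:
from sklearn.cross_validation import train_test_split
from sklearn.naive_bayes import MultinomialNB

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)
benchmark(MultinomialNB(alpha=.01), X_train, y_train, X_test, y_test)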

Benchmark the classifiers over a range of corpus sizes:


In [11]:
from sklearn.cross_validation import ShuffleSplit
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from collections import defaultdict

classifiers = [
    (LinearSVC, {'loss': 'l2', 'penalty': 'l2', 'dual': False, 'tol': 1e-3}),
    (MultinomialNB, {'alpha': .01}),
    (BernoulliNB, {'alpha': .01}),
]

xs = np.linspace(250, X.shape[0], 12).astype(int)  # corpus sizes to benchmark
ys = defaultdict(list)      # mean F1 per classifier at each corpus size
errors = defaultdict(list)  # 2 * std of the scores, roughly a 95% interval
names = []
for clf, kwargs in classifiers:
    name = clf.__name__
    print name
    sys.stdout.flush()
    for x in xs:
        scores = []
        s = ShuffleSplit(x, n_iter=10, test_size=.1)  # 10 random 90/10 splits over the first x documents
        for train, test in s:
            scores.append(benchmark(clf(**kwargs), X[train], y[train], X[test], y[test]))
        ys[name].append(np.mean(scores))
        errors[name].append(np.std(scores)*2)
    names.append(name)


LinearSVC
/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/metrics/metrics.py:1249: UserWarning: The precision and recall are equal to zero for some labels. fbeta_score is ill defined for those labels [3]. 
  average=average)
/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/metrics/metrics.py:1249: UserWarning: The sum of true positives and false positives are equal to zero for some labels. Precision is ill defined for those labels [3]. The precision and recall are equal to zero for some labels. fbeta_score is ill defined for those labels [3]. 
  average=average)
MultinomialNB
BernoulliNB
/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/metrics/metrics.py:1249: UserWarning: The precision and recall are equal to zero for some labels. fbeta_score is ill defined for those labels [0]. 
  average=average)
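
The raw mean scores at the largest corpus size give a quick numeric summary before plotting (a sketch, not part of the original run):


In [ ]:
# mean F1 for each classifier at the largest corpus size
[(name, ys[name][-1]) for name in names]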

In [12]:
plt.figure(figsize=(10, 8))
for name in names:
    plt.errorbar(xs, ys[name], yerr=errors[name], label=name)
plt.gca().set_xlim((0, X.shape[0]))
plt.gca().set_ylim((0, 1.))
plt.grid()
plt.gca().set_ylabel("F1 score")
plt.gca().set_xlabel("# of documents")
plt.legend(loc=4)
plt.title("Corpus size (News Groups) vs. classifier F1 score, w/ 95% confidence intervals")
plt.tight_layout()