Sklearn basic example

Fit a simple classification model to the iris dataset


In [1]:
from __future__ import print_function

from sklearn import __version__ as sklearn_version
print('Sklearn version:', sklearn_version)


Sklearn version: 0.18.1

Load data


In [2]:
from sklearn import datasets

iris = datasets.load_iris()
print(iris.DESCR)


Iris Plants Database
====================

Notes
-----
Data Set Characteristics:
    :Number of Instances: 150 (50 in each of three classes)
    :Number of Attributes: 4 numeric, predictive attributes and the class
    :Attribute Information:
        - sepal length in cm
        - sepal width in cm
        - petal length in cm
        - petal width in cm
        - class:
                - Iris-Setosa
                - Iris-Versicolour
                - Iris-Virginica
    :Summary Statistics:

    ============== ==== ==== ======= ===== ====================
                    Min  Max   Mean    SD   Class Correlation
    ============== ==== ==== ======= ===== ====================
    sepal length:   4.3  7.9   5.84   0.83    0.7826
    sepal width:    2.0  4.4   3.05   0.43   -0.4194
    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
    petal width:    0.1  2.5   1.20  0.76     0.9565  (high!)
    ============== ==== ==== ======= ===== ====================

    :Missing Attribute Values: None
    :Class Distribution: 33.3% for each of 3 classes.
    :Creator: R.A. Fisher
    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
    :Date: July, 1988

This is a copy of UCI ML iris datasets.
http://archive.ics.uci.edu/ml/datasets/Iris

The famous Iris database, first used by Sir R.A Fisher

This is perhaps the best known database to be found in the
pattern recognition literature.  Fisher's paper is a classic in the field and
is referenced frequently to this day.  (See Duda & Hart, for example.)  The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant.  One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.

References
----------
   - Fisher,R.A. "The use of multiple measurements in taxonomic problems"
     Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
     Mathematical Statistics" (John Wiley, NY, 1950).
   - Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.
     (Q327.D83) John Wiley & Sons.  ISBN 0-471-22361-1.  See page 218.
   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
     Structure and Classification Rule for Recognition in Partially Exposed
     Environments".  IEEE Transactions on Pattern Analysis and Machine
     Intelligence, Vol. PAMI-2, No. 1, 67-71.
   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule".  IEEE Transactions
     on Information Theory, May 1972, 431-433.
   - See also: 1988 MLC Proceedings, 54-64.  Cheeseman et al's AUTOCLASS II
     conceptual clustering system finds 3 classes in the data.
   - Many, many more ...


In [3]:
# Print the first ten feature rows and all target labels
print(iris.data[:10])
print(iris.target)


[[ 5.1  3.5  1.4  0.2]
 [ 4.9  3.   1.4  0.2]
 [ 4.7  3.2  1.3  0.2]
 [ 4.6  3.1  1.5  0.2]
 [ 5.   3.6  1.4  0.2]
 [ 5.4  3.9  1.7  0.4]
 [ 4.6  3.4  1.4  0.3]
 [ 5.   3.4  1.5  0.2]
 [ 4.4  2.9  1.4  0.2]
 [ 4.9  3.1  1.5  0.1]]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2]
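
The integer labels map to species names through iris.target_names; a quick hypothetical check (not part of the original notebook):

# Map each integer code to its species name
for code, name in enumerate(iris.target_names):
    print(code, '->', name)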

In [4]:
# Shuffle the data, then split into train and test sets
from sklearn.utils import shuffle
X, y = shuffle(iris.data, iris.target, random_state=0)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)


(100, 4) (50, 4) (100,) (50,)
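
As a side note, train_test_split can also stratify on the labels so both splits keep the 33/33/33 class balance; a minimal sketch of that variant (the _s variable names are hypothetical):

# Stratified variant of the same split: class proportions are preserved
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y)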

Linear model


In [5]:
# Linear model 
from sklearn.linear_model import LogisticRegression

# Define classifier
clf_logistic = LogisticRegression()

# Fit classifier
clf_logistic.fit(X_train, y_train)


Out[5]:
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
          intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
          penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
          verbose=0, warm_start=False)
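
A single 100/50 split gives a somewhat noisy estimate on only 150 samples; cross-validation averages over several folds. A minimal sketch using cross_val_score (not part of the original notebook):

from sklearn.model_selection import cross_val_score

# 5-fold cross-validated accuracy of the logistic model on the training set
scores = cross_val_score(clf_logistic, X_train, y_train, cv=5)
print('CV accuracy: %.3f +/- %.3f' % (scores.mean(), scores.std()))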

In [6]:
# Evaluate accuracy on the test set
from sklearn.metrics import accuracy_score

# Predict labels for the test data
y_test_pred = clf_logistic.predict(X_test)

# Evaluate accuracy
print('Accuracy test: ', accuracy_score(y_test, y_test_pred))


Accuracy test:  0.94
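
Accuracy alone hides which classes get confused with each other; a confusion matrix breaks the test errors down per class. A short sketch, assuming y_test_pred from the cell above:

from sklearn.metrics import confusion_matrix

# Rows are true classes, columns are predicted classes
print(confusion_matrix(y_test, y_test_pred))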

Decision tree

- Build a decision tree model as a second classifier to compare with the previous linear model
- Print accuracy on the test set (the ROC area for the logistic model follows in a later section)

In [7]:
from sklearn import tree

# Define classifier
clf_tree = tree.DecisionTreeClassifier(max_depth=3)

# Fit
clf_tree.fit(X_train, y_train)

# Evaluate test accuracy
print('Tree accuracy test: ', accuracy_score(y_test, clf_tree.predict(X_test)))


Tree accuracy test:  0.94
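
The fitted tree also exposes feature_importances_, which indicates how much each of the four measurements drives the splits. A quick sketch pairing them with the feature names:

# Relative importance of each input feature in the fitted tree
for name, importance in zip(iris.feature_names, clf_tree.feature_importances_):
    print('%s: %.3f' % (name, importance))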

Support vector machine


In [8]:
# Define classifier: linear support vector machine
from sklearn import svm
clf_svc = svm.LinearSVC()

# Fit over train
clf_svc.fit(X_train, y_train)

# Accuracy score over test
print(accuracy_score(y_test, clf_svc.predict(X_test)))


0.96
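
LinearSVC is sensitive to feature scale, so standardizing the inputs can change the result; a sketch wiring a StandardScaler and the classifier into a pipeline (clf_svc_scaled is a hypothetical name):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Scale features to zero mean / unit variance before the linear SVM
clf_svc_scaled = make_pipeline(StandardScaler(), svm.LinearSVC())
clf_svc_scaled.fit(X_train, y_train)
print(accuracy_score(y_test, clf_svc_scaled.predict(X_test)))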

ROC area


In [9]:
# ROC area

# Predict class probabilities for the test set and print the first rows
y_test_proba = clf_logistic.predict_proba(X_test)
print(y_test_proba[:5])


# Recode y from multiclass labels to a one-vs-rest binary indicator matrix
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
lb.fit(y_train)
print('Test classes: ', lb.classes_)
y_test_bin = lb.transform(y_test)
print(y_test_bin[:5])


# ROC area under the curve, averaged over classes
from sklearn.metrics import roc_auc_score
print('Average ROC area: ', roc_auc_score(y_test_bin, y_test_proba))


[[  2.43289285e-05   4.36928609e-01   5.63047062e-01]
 [  1.19671489e-03   3.16095051e-01   6.82708234e-01]
 [  4.61332864e-03   3.56328255e-01   6.39058416e-01]
 [  8.59151061e-01   1.40807092e-01   4.18472484e-05]
 [  2.11461639e-02   7.24592757e-01   2.54261079e-01]]
Test classes:  [0 1 2]
[[0 0 1]
 [0 0 1]
 [0 0 1]
 [1 0 0]
 [0 1 0]]
Average ROC area:  0.975555555556
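
By default roc_auc_score macro-averages over the three one-vs-rest problems; the per-class areas can be read off one column at a time. A short sketch using the variables above:

# One-vs-rest ROC area for each class separately
for i, label in enumerate(lb.classes_):
    print('Class %d ROC area: %.4f'
          % (label, roc_auc_score(y_test_bin[:, i], y_test_proba[:, i])))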
