Classification of Iris Data

In this example we will perform classification of the iris data with several different classifiers.

KNeighbors Classifier

First we'll load the iris data, one of the toy datasets included with scikit-learn:


In [ ]:
from sklearn.datasets import load_iris
iris = load_iris()

In the iris dataset example, suppose we are given the task of predicting the species of an individual flower from the measurements of its petals and sepals. This is a classification task, so we split the data into a feature matrix X and a target vector y:


In [ ]:
X, y = iris.data, iris.target

In [ ]:
print(X)

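Before training anything, it can help to check the shape of the data. This is just a quick sanity check (the shapes and class names below are standard facts about the iris dataset):

In [ ]:
# 150 samples, each described by 4 features (sepal/petal lengths and widths)
print(X.shape)
# 150 integer labels, one per sample
print(y.shape)
# the three species names corresponding to targets 0, 1, 2
print(iris.target_names)
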
Once the data is in this format, it is straightforward to train a classifier, for instance a k-nearest-neighbors classifier:


In [ ]:
from sklearn.neighbors import KNeighborsClassifier

KNeighborsClassifier is an example of a scikit-learn classifier. If you're curious about how it is used, you can use IPython's ? syntax to see the documentation:


In [ ]:
KNeighborsClassifier?

The first thing to do is to create an instance of the classifier. This is done by calling the class, with any arguments that the model accepts:


In [ ]:
clf = KNeighborsClassifier()

clf is a statistical model with parameters that control the learning algorithm (these parameters are sometimes called hyperparameters). The hyperparameters can be supplied by the user in the constructor of the model. We will explain later how to choose a good combination using either simple empirical rules or data-driven selection:


In [ ]:
clf

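To inspect the current hyperparameter settings, every scikit-learn estimator provides a get_params method; for example:

In [ ]:
# dictionary mapping each hyperparameter name to its current value
clf.get_params()
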
By default the model's parameters are not yet fit to the data. They are learned automatically by calling the fit method with the data X and the labels y:


In [ ]:
clf.fit(X, y)

Once the model is trained, it can be used to predict the most likely outcome for unseen data. For instance, let us define a simple sample that looks like the first sample of the iris dataset:


In [ ]:
X_new = [[5.0, 3.6, 1.3, 0.25]]

clf.predict(X_new)
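
The prediction is returned as a numeric class label. Since load_iris also provides the class names, we can map the result back to a species name:

In [ ]:
# fancy-index target_names with the predicted label to get the species
iris.target_names[clf.predict(X_new)]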

Modifying the Parameters of the Classifier

How many neighbors does KNeighborsClassifier use by default? How do you change this number?


In [ ]:

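If you get stuck, here is one possible answer (the default in scikit-learn is n_neighbors=5; the number is changed through the constructor):

In [ ]:
# use a single nearest neighbor instead of the default 5
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X, y)
clf.predict(X_new)
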
Using a Different Classifier

Now we'll take a few minutes and try out another learning model. Because of scikit-learn's uniform interface, the syntax is identical to that of KNeighborsClassifier above.

There are many possibilities of classifiers; you could try any of the methods discussed at http://scikit-learn.org/stable/supervised_learning.html. Alternatively, you can explore what's available in scikit-learn using just the tab-completion feature. For example, import the linear_model submodule:


In [ ]:
from sklearn import linear_model

Now use tab completion to find what's available: type linear_model. and then press the tab key to see an interactive list of the names within this submodule. The ones that begin with capital letters are the available models.


In [ ]:

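If you are not working in a tab-completion-enabled shell, a rough non-interactive equivalent is to filter dir() for capitalized names (a small sketch, not part of the exercise itself):

In [ ]:
# list the public capitalized names, which correspond to the models
print([name for name in dir(linear_model) if name[0].isupper()])
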
Now select a new classifier and try out a classification of the iris data.

Some good choices are

  • sklearn.svm.LinearSVC : Support Vector Machines without kernels based on liblinear

  • sklearn.svm.SVC : Support Vector Machines with kernels based on libsvm

  • sklearn.linear_model.LogisticRegression : Regularized Logistic Regression based on liblinear

  • sklearn.linear_model.SGDClassifier : Regularized linear models (SVM or logistic regression) using a Stochastic Gradient Descent algorithm written in Cython

  • sklearn.neighbors.KNeighborsClassifier : k-Nearest Neighbors classifier based on the ball tree data structure for low-dimensional data and brute-force search for high-dimensional data

  • sklearn.naive_bayes.GaussianNB : Gaussian Naive Bayes model. This is an unsophisticated model which can be trained very quickly. It is often used to obtain baseline results before moving to a more sophisticated classifier.

  • sklearn.tree.DecisionTreeClassifier : A classifier based on a series of binary decisions. This is another very fast classifier, which can be very powerful.

Choose one of the above, import it, and use the ? feature to learn about it.


In [ ]:

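For instance, taking LogisticRegression (any of the classifiers above would work equally well here):

In [ ]:
from sklearn.linear_model import LogisticRegression
LogisticRegression?
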
Now instantiate this model as we did with KNeighborsClassifier above. Call it clf2.


In [ ]:

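Continuing the LogisticRegression sketch from above, one possible answer is:

In [ ]:
clf2 = LogisticRegression()
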
Now use our data X and y to train the model, using clf2.fit(X, y).


In [ ]:

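For example:

In [ ]:
clf2.fit(X, y)
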
Now call the predict method, and find the classification of X_new.


In [ ]:

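For example, continuing with the LogisticRegression instance:

In [ ]:
clf2.predict(X_new)
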
Probabilistic Prediction

Some models have additional prediction modes. For example, if clf2 is a LogisticRegression classifier, it is possible to make a probabilistic prediction for any point. This can be done through the predict_proba method:


In [ ]:
from sklearn.linear_model import LogisticRegression
clf2 = LogisticRegression()
clf2.fit(X, y)
clf2.predict_proba(X_new)

The result gives the probability (between zero and one) that the test point belongs to each of the three classes.

This means that the model estimates that the sample in X_new has roughly:

  • a 90% likelihood of belonging to the ‘setosa’ class (target = 0)
  • a 9% likelihood of belonging to the ‘versicolor’ class (target = 1)
  • a 1% likelihood of belonging to the ‘virginica’ class (target = 2)

Of course, the predict method that outputs the label id of the most likely outcome is also available:


In [ ]:
clf2.predict(X_new)
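
As a final check, note that predict simply returns the class with the highest predicted probability; assuming numpy is available (it is a scikit-learn dependency), this can be verified with argmax, and the label can be mapped back to a species name as before:

In [ ]:
import numpy as np
# the predicted label is the argmax of the predicted probabilities
print(np.argmax(clf2.predict_proba(X_new), axis=1))
# map the numeric label back to a species name
print(iris.target_names[clf2.predict(X_new)])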