We covered a lot of information today, and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments alongside the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later).


In [1]:
import pandas as pd
%matplotlib inline
from sklearn import datasets
from pandas.plotting import scatter_matrix # pandas.tools.plotting was removed in newer pandas

1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)


In [3]:
iris = datasets.load_iris()

In [4]:
x = iris.data[:,2:] # the attributes: petal length and petal width only (columns 2 and 3)
y = iris.target # the target variable: species (0=setosa, 1=versicolor, 2=virginica)

In [5]:
from sklearn import tree

In [6]:
dt = tree.DecisionTreeClassifier()

In [7]:
dt = dt.fit(x,y) # fit on the full data for now; this gets overwritten by the train-split fit below

In [10]:
from sklearn.model_selection import train_test_split # sklearn.cross_validation was removed in scikit-learn 0.20
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5)
dt = dt.fit(x_train,y_train)

In [11]:
from sklearn import metrics
import numpy as np
def measure_performance(X, y, clf, show_accuracy=True, show_classification_report=True, show_confusion_matrix=True):
    """Print accuracy, the per-class report, and the confusion matrix for clf on (X, y)."""
    y_pred = clf.predict(X)
    if show_accuracy:
        print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)), "\n")
    if show_classification_report:
        print("Classification report")
        print(metrics.classification_report(y, y_pred), "\n")
    if show_confusion_matrix:
        print("Confusion matrix")
        print(metrics.confusion_matrix(y, y_pred), "\n")

In [12]:
measure_performance(x_test,y_test,dt)

#Seems pretty good: only 2 of the 75 test samples are misclassified.


Accuracy:0.973 

Classification report
             precision    recall  f1-score   support

          0       1.00      1.00      1.00        22
          1       0.96      0.96      0.96        25
          2       0.96      0.96      0.96        28

avg / total       0.97      0.97      0.97        75
 

Confusion matrix
[[22  0  0]
 [ 0 24  1]
 [ 0  1 27]] 
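
The prompt leaves the tree visual optional. If you do want to inspect the fitted tree, here's a minimal sketch (assuming scikit-learn 0.21+ for tree.plot_tree; the figure size is just an example):

In [ ]:
# Optional: draw the fitted tree. We sliced off the first two columns above,
# so the feature names are the petal measurements only.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10, 6))
tree.plot_tree(dt,
               feature_names=iris.feature_names[2:],
               class_names=list(iris.target_names),
               filled=True,
               ax=ax)
plt.show()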

2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.


In [13]:
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)
dt = dt.fit(x_train,y_train)

In [14]:
measure_performance(x_test,y_test,dt)

#This seems a bit better: only 1 of the 38 test samples is misclassified.


Accuracy:0.974 

Classification report
             precision    recall  f1-score   support

          0       1.00      1.00      1.00        13
          1       1.00      0.94      0.97        16
          2       0.90      1.00      0.95         9

avg / total       0.98      0.97      0.97        38
 

Confusion matrix
[[13  0  0]
 [ 0 15  1]
 [ 0  0  9]] 
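
Before reading too much into 0.974 vs. 0.973: each number comes from a single random split, so the difference is well within split-to-split noise. A quick illustrative check (the seed loop below is my own addition, not part of the exercise) averages accuracy over several splits:

In [ ]:
# Illustrative: repeat each split over several random seeds and average,
# to see whether 50-50 vs. 75-25 differs beyond random-split noise.
for size in (0.5, 0.25):
    scores = []
    for seed in range(20):
        x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=size, random_state=seed)
        clf = tree.DecisionTreeClassifier(random_state=seed).fit(x_tr, y_tr)
        scores.append(metrics.accuracy_score(y_te, clf.predict(x_te)))
    print("test_size={}: mean accuracy {:.3f} (std {:.3f})".format(
        size, np.mean(scores), np.std(scores)))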

3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?

For context of the data, see the documentation here: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29


In [16]:
cancer = datasets.load_breast_cancer()
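
The notebook stops at loading the data, so here is a minimal exploratory sketch (the scatter-matrix columns are just an example pick). The 30 attributes are summary statistics of cell nuclei from the digitized images, and the target is the diagnosis: 0 = malignant, 1 = benign.

In [ ]:
# Basic exploratory analysis: what attributes do we have, and what's the target?
df = pd.DataFrame(cancer.data, columns=cancer.feature_names)
print(df.shape)                                 # (569, 30)
print(cancer.target_names)                      # ['malignant' 'benign']
print(pd.Series(cancer.target).value_counts())  # class balance
df.describe()

# Scatter matrix of a few of the "mean" features (example choice):
scatter_matrix(df[["mean radius", "mean texture", "mean perimeter", "mean area"]],
               figsize=(8, 8))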

4. Using the breast cancer data, create a classifier to predict the diagnosis (malignant vs. benign). Perform the above hold-out evaluation (50-50 and 75-25) and discuss the results.


In [17]:
x = cancer.data[:,2:] # note: this slice drops the first two attributes (mean radius, mean texture), carried over from the iris code
y = cancer.target # the target: 0 = malignant, 1 = benign

In [19]:
dt = tree.DecisionTreeClassifier()
dt = dt.fit(x,y) # again overwritten by the train-split fit below

In [20]:
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5)
dt = dt.fit(x_train,y_train)

In [22]:
measure_performance(x_test,y_test,dt)

#Not... great: 22 of the 285 test samples are misclassified, noticeably worse than on iris.


Accuracy:0.923 

Classification report
             precision    recall  f1-score   support

          0       0.84      0.96      0.90       101
          1       0.98      0.90      0.94       184

avg / total       0.93      0.92      0.92       285
 

Confusion matrix
[[ 97   4]
 [ 18 166]] 


In [23]:
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)
dt = dt.fit(x_train,y_train)

In [25]:
measure_performance(x_test,y_test,dt)

#This seems a bit better: 10 of the 143 test samples are misclassified, a modest improvement over the 50-50 split.


Accuracy:0.930 

Classification report
             precision    recall  f1-score   support

          0       0.88      0.92      0.90        50
          1       0.96      0.94      0.95        93

avg / total       0.93      0.93      0.93       143
 

Confusion matrix
[[46  4]
 [ 6 87]] 
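
One more caveat for the discussion: both comparisons above rest on single random splits. A sketch of a steadier estimate, using 10-fold cross-validation (this goes a bit beyond the exercise as written):

In [ ]:
# Illustrative: 10-fold cross-validation gives a less noisy accuracy estimate
# than any single 50-50 or 75-25 holdout.
from sklearn.model_selection import cross_val_score

scores = cross_val_score(tree.DecisionTreeClassifier(random_state=0), x, y, cv=10)
print("CV accuracy: {:.3f} (std {:.3f})".format(scores.mean(), scores.std()))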

