We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later)


In [26]:
import pandas as pd
import pydotplus 
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

from sklearn import datasets, tree, metrics
from sklearn.cross_validation import train_test_split
from pandas.tools.plotting import scatter_matrix

1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)


In [27]:
iris = datasets.load_iris()

In [28]:
x = iris.data[:,2:] 
y = iris.target

In [29]:
dt = tree.DecisionTreeClassifier()

In [30]:
# 50%-50%
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.50,train_size=0.50)
dt = dt.fit(x_train,y_train)

In [31]:
def measure_performance(x,y,dt, show_accuracy=True, show_classification_report=True, show_confusion_matrix=True):
    y_pred=dt.predict(x)
    if show_accuracy:
        print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n")
    if show_classification_report:
        print("Classification report")
        print(metrics.classification_report(y,y_pred),"\n")
    if show_confusion_matrix:
        print("Confusion matrix")
        print(metrics.confusion_matrix(y,y_pred),"\n")

In [32]:
measure_performance(x_test,y_test,dt)


Accuracy:0.973 

Classification report
             precision    recall  f1-score   support

          0       1.00      1.00      1.00        24
          1       0.96      0.96      0.96        24
          2       0.96      0.96      0.96        27

avg / total       0.97      0.97      0.97        75
 

Confusion matrix
[[24  0  0]
 [ 0 23  1]
 [ 0  1 26]] 

2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.


In [33]:
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)

In [34]:
measure_performance(x_train,y_train,dt)


Accuracy:0.982 

Classification report
             precision    recall  f1-score   support

          0       1.00      1.00      1.00        41
          1       0.97      0.97      0.97        37
          2       0.97      0.97      0.97        34

avg / total       0.98      0.98      0.98       112
 

Confusion matrix
[[41  0  0]
 [ 0 36  1]
 [ 0  1 33]] 


In [35]:
measure_performance(x_test,y_test,dt)


Accuracy:1.000 

Classification report
             precision    recall  f1-score   support

          0       1.00      1.00      1.00         9
          1       1.00      1.00      1.00        13
          2       1.00      1.00      1.00        16

avg / total       1.00      1.00      1.00        38
 

Confusion matrix
[[ 9  0  0]
 [ 0 13  0]
 [ 0  0 16]] 

3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes to we have? What are we trying to predict?

For context of the data, see the documentation here: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29


In [36]:
bc = datasets.load_breast_cancer()

In [37]:
x = bc.data[:,2:]
y = bc.target

In [38]:
dt = tree.DecisionTreeClassifier()
dt = dt.fit(x,y)

In [39]:
dt


Out[39]:
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
            max_features=None, max_leaf_nodes=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            presort=False, random_state=None, splitter='best')

4. Using the breast cancer data, create a classifier to predict the type of seed. Perform the above hold out evaluation (50-50 and 75-25) and discuss the results.


In [42]:
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.50,train_size=0.50)

In [44]:
measure_performance(x_test,y_test,dt)


Accuracy:1.000 

Classification report
             precision    recall  f1-score   support

          0       1.00      1.00      1.00       105
          1       1.00      1.00      1.00       180

avg / total       1.00      1.00      1.00       285
 

Confusion matrix
[[105   0]
 [  0 180]] 


In [47]:
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)

In [48]:
measure_performance(x_test,y_test,dt)


Accuracy:1.000 

Classification report
             precision    recall  f1-score   support

          0       1.00      1.00      1.00        53
          1       1.00      1.00      1.00        90

avg / total       1.00      1.00      1.00       143
 

Confusion matrix
[[53  0]
 [ 0 90]] 


In [ ]: