HW 3: KNN & Random Forest

Get your data here. The data is related to the direct marketing campaigns of a Portuguese banking institution. The campaigns were based on phone calls; often, more than one contact with the same client was required to determine whether the product (a bank term deposit) would be subscribed ('yes') or not ('no'). There are four datasets:

1) bank-additional-full.csv with all examples (41188) and 20 inputs, ordered by date (from May 2008 to November 2010)

2) bank-additional.csv with 10% of the examples (4119), randomly selected from 1), and 20 inputs.

3) bank-full.csv with all examples and 17 inputs, ordered by date (older version of this dataset with fewer inputs).

4) bank.csv with 10% of the examples and 17 inputs, randomly selected from 3) (older version of this dataset with fewer inputs).

The smaller datasets are provided for testing more computationally demanding machine learning algorithms (e.g., SVM).

The classification goal is to predict whether the client will subscribe to a term deposit (variable y, yes/no).

Assignment

  • Preprocess your data (you may find LabelEncoder useful; see the encoding note after this list)
  • Train both KNN and Random Forest models
  • Find the best parameters by computing their learning curves (feel free to verify this with grid search)
  • Create a classification report
  • Inspect your models: which features are most important? How might you use this information to improve model precision?
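
One caveat on the LabelEncoder hint: it maps each category to an arbitrary integer, which imposes a spurious ordering that distance-based models like KNN are sensitive to (tree ensembles are largely indifferent). A minimal one-hot sketch, assuming the raw frame is loaded as bank_add as in the cells below:

X_onehot = pd.get_dummies(bank_add.drop('y', axis=1))  # one indicator column per category level
y_binary = (bank_add['y'] == 'yes').astype(int)        # encode the target separately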

In [15]:
# Standard imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

from sklearn import preprocessing
from sklearn import neighbors
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier as RF
from sklearn.metrics import confusion_matrix, classification_report
# These module paths are from scikit-learn < 0.18 (this notebook runs on
# Python 2.7); see the note below for their modern equivalents
from sklearn.cross_validation import cross_val_score, train_test_split
from sklearn import grid_search
from sklearn.learning_curve import learning_curve
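
For reference, the deprecated module paths above moved in scikit-learn 0.18; re-running this notebook on a modern install would need the equivalents below (everything else is unchanged):

from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.model_selection import GridSearchCV, learning_curve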

In [17]:
bank_full = pd.read_csv('./bank/bank-full.csv', sep=";")
bank = pd.read_csv('./bank/bank.csv', sep=";")
bank_add = pd.read_csv('./bank-additional/bank-additional.csv', sep=";")
bank_add_full = pd.read_csv('./bank-additional/bank-additional-full.csv', sep=";")

In [18]:
bank_data = pd.DataFrame()
label_encoders = {}

# Encode every categorical (object) column, including the target y, keeping
# each fitted encoder so the integer codes stay recoverable
for column in bank_add.columns:
    if bank_add[column].dtype == 'object':
        label_encoders[column] = preprocessing.LabelEncoder()
        bank_data[column] = label_encoders[column].fit_transform(bank_add[column])
    else:
        bank_data[column] = bank_add[column]
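
Because the fitted encoders are kept, the integer codes stay interpretable. LabelEncoder assigns codes in sorted order, so for the target y, 'no' maps to 0 and 'yes' to 1:

print label_encoders['y'].classes_  # array(['no', 'yes'], dtype=object)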

In [19]:
xcols = [col for col in bank_data.columns if col != 'y']
X = bank_data[xcols].values
y = bank_data['y'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
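
KNN is distance-based, so unscaled features with large ranges (duration is measured in seconds) dominate the metric. A standardization sketch worth trying ahead of the KNN fits below (not applied in this notebook's recorded runs):

scaler = preprocessing.StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)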

In [20]:
def plot_confusion_matrix(y_true, y_pred):
    # rows of a confusion matrix are the true labels, columns the predictions
    plt.imshow(confusion_matrix(y_true, y_pred),
               cmap=plt.cm.binary, interpolation='nearest')
    plt.colorbar()
    plt.xlabel('predicted value')
    plt.ylabel('true value')

In [21]:
clf = KNeighborsClassifier()

clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print classification_report(y_test, y_pred)


             precision    recall  f1-score   support

          0       0.94      0.94      0.94       929
          1       0.45      0.44      0.44       101

avg / total       0.89      0.89      0.89      1030
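
With roughly 90% of clients in the 'no' class, accuracy alone says little; a quick majority-class baseline to weigh the reports against:

print (y_test == 0).mean()  # accuracy of always predicting 'no'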


In [22]:
%%time

# Random Forest
def forest(df):
    # Feature - Target
    xcols = [col for col in df.columns if col != 'y']
    X = df[xcols].values
    y = df['y'].values
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .2)
    # Run Model
    clf = RF(n_estimators = 50)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    # Get results
    plot_confusion_matrix(y_test, y_pred)
    print classification_report(y_test, y_pred)
forest(bank_data)


             precision    recall  f1-score   support

          0       0.94      0.98      0.96       746
          1       0.65      0.44      0.52        78

avg / total       0.92      0.92      0.92       824

CPU times: user 341 ms, sys: 6.26 ms, total: 348 ms
Wall time: 352 ms

In [23]:
def knn(df):  
    X = df.drop('y', axis = 1)
    y = df.y.values
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .2)
    
    # n_neighbors=1 simply memorizes the training set; tuned properly below
    knn = neighbors.KNeighborsClassifier(n_neighbors=1)
    knn.fit(X_train, y_train)
    y_pred = knn.predict(X_test)
    
    print y_pred.shape
    print knn.score(X_test, y_test)
    plot_confusion_matrix(y_test, y_pred)
    print classification_report(y_test, y_pred)

knn(bank_data)


(824,)
0.890776699029
             precision    recall  f1-score   support

          0       0.94      0.94      0.94       735
          1       0.49      0.49      0.49        89

avg / total       0.89      0.89      0.89       824


In [24]:
def plot_param_curve(title, xlabel, param_values, make_clf):
    # Sweep a single hyperparameter and plot the mean cross-validation score
    # at each value. Strictly speaking this is a validation curve; a learning
    # curve proper varies the training-set size (see the sketch after this cell).
    plt.figure()
    plt.title(title)
    plt.xlabel(xlabel)
    plt.ylabel("Score")
    scores = []
    for v in param_values:
        clf = make_clf(int(v))
        scores.append(cross_val_score(clf, X_train, y_train).mean())
    print scores, param_values
    plt.grid()
    plt.plot(param_values, scores, 'o-', color="r",
             label="Cross-validation score")
    plt.legend(loc="best")


plot_param_curve("Validation Curve: K Nearest Neighbors", "N Neighbors",
                 np.linspace(1, 100, 10),
                 lambda k: KNeighborsClassifier(n_neighbors=k))


[0.89284912300565156, 0.90935680791040419, 0.90806168052056702, 0.90935869493428445, 0.90903412682687501, 0.90903444133085509, 0.90838656313195643, 0.9080632530404672, 0.90903412682687501, 0.90903412682687501] [   1.   12.   23.   34.   45.   56.   67.   78.   89.  100.]
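For completeness: what sklearn calls a learning curve varies the training-set size rather than a hyperparameter. A sketch using the learning_curve helper imported above, with a mid-range k from the sweep:

train_sizes, train_scores, valid_scores = learning_curve(
    KNeighborsClassifier(n_neighbors=12), X_train, y_train,
    train_sizes=np.linspace(0.1, 1.0, 5))
plt.plot(train_sizes, train_scores.mean(axis=1), 'o-', label="Training score")
plt.plot(train_sizes, valid_scores.mean(axis=1), 'o-', label="Cross-validation score")
plt.xlabel("Training examples")
plt.ylabel("Score")
plt.legend(loc="best")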

In [25]:
# leaf_size controls the ball_tree/kd_tree node size: it affects query speed
# only, never which neighbors are found, which is why the scores below are flat
plot_param_curve("Validation Curve: KNN Leaf Size", "Leaf Size",
                 np.linspace(1, 100, 10),
                 lambda s: KNeighborsClassifier(leaf_size=s))


[0.90579725186422222, 0.90579725186422222, 0.90579725186422222, 0.90579725186422222, 0.90579725186422222, 0.90579725186422222, 0.90579725186422222, 0.90579725186422222, 0.90579725186422222, 0.90579725186422222] [   1.   12.   23.   34.   45.   56.   67.   78.   89.  100.]

In [26]:
plot_param_curve("Validation Curve: Random Forest", "N Estimators",
                 np.linspace(1, 100, 10),
                 lambda n: RF(n_estimators=n))


[0.87957516802375135, 0.90417441132717535, 0.90870830070354547, 0.90611898943581126, 0.90708829070231889, 0.90773805592509771, 0.9067659241227698, 0.9074131733137083, 0.91032673818487175, 0.9087092442154856] [   1.   12.   23.   34.   45.   56.   67.   78.   89.  100.]

In [28]:
# max_depth values this large are effectively unlimited; the interesting part
# of the curve is the jump from depth 1 to everything else
plot_param_curve("Validation Curve: Random Forest", "Max Depth",
                 np.linspace(1, 1000, 10),
                 lambda d: RF(max_depth=d))


[0.88701853371954409, 0.90870892971150552, 0.9074131733137083, 0.90353156519195765, 0.90449803592264466, 0.90806136601658693, 0.90741065728186798, 0.91064879026044077, 0.90579536484034195, 0.90514685763348324] [    1.   112.   223.   334.   445.   556.   667.   778.   889.  1000.]

In [29]:
# Note: GridSearchCV tries exactly the listed values (here just 1 and 100),
# not the range between them
parameters = {'n_estimators': [1, 100]}
random = RF()
clf = grid_search.GridSearchCV(random, parameters)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
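
A denser grid, with the winning setting read back out of the fitted search, might look like the sketch below; the values are illustrative, not tuned:

parameters = {'n_estimators': [10, 25, 50, 100], 'max_depth': [5, 10, 20, None]}
search = grid_search.GridSearchCV(RF(), parameters)
search.fit(X_train, y_train)
print search.best_params_, search.best_score_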

In [30]:
# Random Forest
def forest(df):
    # Feature - Target
    xcols = [col for col in df.columns if col != 'y']
    X = df[xcols].values
    y = df['y'].values
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .2)
    # Run Model
    # again, only the listed endpoint values are tried
    parameters = {'n_estimators': [1, 100], 'max_depth': [1, 1000]}
    random = RF()
    clf = grid_search.GridSearchCV(random, parameters)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    # Get results
    print confusion_matrix(y_test, y_pred)
    print classification_report(y_test, y_pred)
forest(bank_data)


[[712  20]
 [ 64  28]]
             precision    recall  f1-score   support

          0       0.92      0.97      0.94       732
          1       0.58      0.30      0.40        92

avg / total       0.88      0.90      0.88       824


In [31]:
def knn(df):  
    X = df.drop('y', axis = 1)
    y = df.y.values
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .2)
    # leaf_size only affects lookup speed, so it cannot change the winner here
    parameters = {'n_neighbors': [1, 100], 'weights': ('distance', 'uniform'),
                  'algorithm': ('auto', 'ball_tree', 'kd_tree', 'brute'),
                  'leaf_size': [1, 100]}
    knn = neighbors.KNeighborsClassifier()
    clf = grid_search.GridSearchCV(knn, parameters)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    
    print y_pred.shape
    print clf.score(X_test, y_test)
    print confusion_matrix(y_test, y_pred)
    print classification_report(y_test, y_pred)

knn(bank_data)


(824,)
0.902912621359
[[704  19]
 [ 61  40]]
             precision    recall  f1-score   support

          0       0.92      0.97      0.95       723
          1       0.68      0.40      0.50       101

avg / total       0.89      0.90      0.89       824


In [32]:
# This prints the top 10 most important features 
clf = RF(n_estimators=20)
clf.fit(X_train, y_train)
important = sorted(zip(clf.feature_importances_, xcols), reverse=True)[:10]
important


Out[32]:
[(0.29121690076572976, 'duration'),
 (0.13211977990833673, 'euribor3m'),
 (0.080991808813905483, 'nr.employed'),
 (0.074646149603671941, 'age'),
 (0.044594731672786041, 'job'),
 (0.041913190543369984, 'education'),
 (0.036723507316899363, 'campaign'),
 (0.036035598291393071, 'day_of_week'),
 (0.033842355104980015, 'pdays'),
 (0.032747770866925915, 'cons.conf.idx')]
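
One caution on the top feature: per the dataset's documentation, duration is only known after the call ends (duration = 0 implies y = 'no'), so it leaks the outcome, and the UCI page recommends discarding it for a realistic model. A quick sketch to gauge the cost of dropping it:

xcols_realistic = [c for c in xcols if c != 'duration']
print cross_val_score(RF(n_estimators=20), bank_data[xcols_realistic].values, y).mean()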

In [33]:
# let's grab those feature names as a list
X_import = [name for score, name in important]
X_import


Out[33]:
['duration',
 'euribor3m',
 'nr.employed',
 'age',
 'job',
 'education',
 'campaign',
 'day_of_week',
 'pdays',
 'cons.conf.idx']

In [34]:
# Random Forest using only the top ten features
def forest(df):
    # Feature - Target
    X = df[X_import].values
    y = df['y'].values
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .2)
    # Run Model
    parameters = {'n_estimators':[1,100], 'max_depth':[1,1000]}
    random = RF()
    clf = grid_search.GridSearchCV(random, parameters)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    # Get results
    plot_confusion_matrix(y_test, y_pred)
    print classification_report(y_test, y_pred)
forest(bank_data)


             precision    recall  f1-score   support

          0       0.93      0.97      0.95       724
          1       0.68      0.46      0.55       100

avg / total       0.90      0.91      0.90       824


In [35]:
#knn using only top ten features
def knn(df):  
    X = df[X_import].values
    y = df.y.values
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .2)
    parameters = {'n_neighbors': [1, 100], 'weights': ('distance', 'uniform'),
                  'algorithm': ('auto', 'ball_tree', 'kd_tree', 'brute'),
                  'leaf_size': [1, 100]}
    knn = neighbors.KNeighborsClassifier()
    clf = grid_search.GridSearchCV(knn, parameters)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    
    print y_pred.shape
    print clf.score(X_test, y_test)
    plot_confusion_matrix(y_test, y_pred)
    print classification_report(y_test, y_pred)

knn(bank_data)


(824,)
0.910194174757
             precision    recall  f1-score   support

          0       0.93      0.97      0.95       734
          1       0.63      0.43      0.51        90

avg / total       0.90      0.91      0.90       824


In [37]:
# top "5"? note that range(4) actually keeps only the top FOUR features, as
# the output below shows; the variable name is kept since later cells use it
X_import5 = [name for score, name in important[:4]]
X_import5


Out[37]:
['duration', 'euribor3m', 'nr.employed', 'age']

In [39]:
# Random Forest using only the top four features
def forest(df):
    # Feature - Target
    X = df[X_import5].values
    y = df['y'].values
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .2)
    # Run Model
    parameters = {'n_estimators':[1,100], 'max_depth':[1,1000]}
    random = RF()
    clf = grid_search.GridSearchCV(random, parameters)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    # Get results
    plot_confusion_matrix(y_test, y_pred)
    print classification_report(y_test, y_pred)
forest(bank_data)


             precision    recall  f1-score   support

          0       0.94      0.96      0.95       736
          1       0.56      0.45      0.50        88

avg / total       0.90      0.90      0.90       824


In [40]:
# knn using only the top four features
def knn(df):  
    X = df[X_import5].values
    y = df.y.values
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .2)
    parameters = {'n_neighbors': [1, 100], 'weights': ('distance', 'uniform'),
                  'algorithm': ('auto', 'ball_tree', 'kd_tree', 'brute'),
                  'leaf_size': [1, 100]}
    knn = neighbors.KNeighborsClassifier()
    clf = grid_search.GridSearchCV(knn, parameters)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    
    print y_pred.shape
    print clf.score(X_test, y_test)
    plot_confusion_matrix(y_test, y_pred)
    print classification_report(y_test, y_pred)

knn(bank_data)


(824,)
0.91140776699
             precision    recall  f1-score   support

          0       0.93      0.97      0.95       733
          1       0.65      0.43      0.52        91

avg / total       0.90      0.91      0.90       824


In [38]:
from sklearn.metrics import roc_curve
from sklearn.metrics import auc

def plot_roc_curve(target_test, target_predicted_proba):
    fpr, tpr, thresholds = roc_curve(target_test, target_predicted_proba[:, 1
                                                                         ])
    roc_auc = auc(fpr, tpr)
    
    # Plot ROC curve
    plt.plot(fpr, tpr, label='ROC curve (area = %0.3f)' % roc_auc)
    plt.plot([0, 1], [0, 1], 'k--')  # random predictions curve
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.0])
    plt.xlabel('False Positive Rate or (1 - Specifity)')
    plt.ylabel('True Positive Rate or (Sensitivity)')
    plt.title('Receiver Operating Characteristic')
    plt.legend(loc="lower right")
    
y_predicted_proba = clf.predict_proba(X_test)
plot_roc_curve(y_test, y_predicted_proba)



In [ ]: