HW 3: KNN & Random Forest

Get your data here. The data is related to direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact with the same client was required in order to assess whether the product (a bank term deposit) would be subscribed ('yes') or not ('no'). There are four datasets:

1) bank-additional-full.csv with all examples (41188) and 20 inputs, ordered by date (from May 2008 to November 2010)

2) bank-additional.csv with 10% of the examples (4119), randomly selected from 1), and 20 inputs.

3) bank-full.csv with all examples and 17 inputs, ordered by date (an older version of this dataset with fewer inputs).

4) bank.csv with 10% of the examples and 17 inputs, randomly selected from 3) (an older version of this dataset with fewer inputs).

The smaller datasets are provided for testing more computationally demanding machine learning algorithms (e.g., SVM).

The classification goal is to predict whether the client will subscribe to a term deposit (yes/no, variable y).

Assignment

  • Preprocess your data (you may find LabelEncoder useful)
  • Train both KNN and Random Forest models
  • Find the best parameters by computing learning curves for each model (feel free to verify this with grid search)
  • Create a classification report
  • Inspect your models: which features are most important? How might you use this information to improve model precision?

In [1]:
# Standard imports for data analysis packages in Python
import pandas as pd
import numpy as np
#import seaborn as sns  # for pretty layout of plots
import matplotlib.pyplot as plt
from pprint import pprint  # for pretty printing

# This enables inline Plots
%matplotlib inline

# Limit rows displayed in notebook
pd.set_option('display.max_rows', 10)
pd.set_option('display.precision', 2)

In [2]:
dataset = pd.read_csv("/Users/arthurconner/Documents/DataScience/classwork/bank-additional/bank-additional-full.csv")
dataset.head(2)


Out[2]:
age;"job";"marital";"education";"default";"housing";"loan";"contact";"month";"day_of_week";"duration";"campaign";"pdays";"previous";"poutcome";"emp.var.rate";"cons.price.idx";"cons.conf.idx";"euribor3m";"nr.employed";"y"
0 56;"housemaid";"married";"basic.4y";"no";"no";...
1 57;"services";"married";"high.school";"unknown...

In [5]:
?pd.read_csv

In [3]:
dataset = pd.read_csv("/Users/arthurconner/Documents/DataScience/classwork/bank-additional/bank-additional-full.csv",delimiter=";")
dataset.head(2)


Out[3]:
age job marital education default housing loan contact month day_of_week ... campaign pdays previous poutcome emp.var.rate cons.price.idx cons.conf.idx euribor3m nr.employed y
0 56 housemaid married basic.4y no no no telephone may mon ... 1 999 0 nonexistent 1.1 94 -36.4 4.9 5191 no
1 57 services married high.school unknown no no telephone may mon ... 1 999 0 nonexistent 1.1 94 -36.4 4.9 5191 no

2 rows × 21 columns


In [4]:
dataset.info()


<class 'pandas.core.frame.DataFrame'>
Int64Index: 41188 entries, 0 to 41187
Data columns (total 21 columns):
age               41188 non-null int64
job               41188 non-null object
marital           41188 non-null object
education         41188 non-null object
default           41188 non-null object
housing           41188 non-null object
loan              41188 non-null object
contact           41188 non-null object
month             41188 non-null object
day_of_week       41188 non-null object
duration          41188 non-null int64
campaign          41188 non-null int64
pdays             41188 non-null int64
previous          41188 non-null int64
poutcome          41188 non-null object
emp.var.rate      41188 non-null float64
cons.price.idx    41188 non-null float64
cons.conf.idx     41188 non-null float64
euribor3m         41188 non-null float64
nr.employed       41188 non-null float64
y                 41188 non-null object
dtypes: float64(5), int64(5), object(11)
memory usage: 6.9+ MB

In [5]:
from sklearn.preprocessing import LabelEncoder

In [6]:
# Encode each categorical column as integers; keep the fitted encoders so the
# codes can be mapped back to the original labels later.
nondata = ["job","marital","education","default","housing","loan","contact","poutcome","y"]
transformLabels = []
encoders = {}
for x in nondata:
    nextlabel = x + "_encoded"
    transformLabels.append(nextlabel)
    le = LabelEncoder()
    encoders[x] = le
    le.fit(dataset[x].unique())
    dataset[nextlabel] = dataset[x].map(le.transform)
    base = dataset[nextlabel].unique()
    print nextlabel, base, "<-->", x, le.inverse_transform(base)

encoders


job_encoded [ 3  7  0  1  9  5  4 10  6 11  2  8] <--> job ['housemaid' 'services' 'admin.' 'blue-collar' 'technician' 'retired'
 'management' 'unemployed' 'self-employed' 'unknown' 'entrepreneur'
 'student']
marital_encoded [1 2 0 3] <--> marital ['married' 'single' 'divorced' 'unknown']
education_encoded [0 3 1 2 5 7 6 4] <--> education ['basic.4y' 'high.school' 'basic.6y' 'basic.9y' 'professional.course'
 'unknown' 'university.degree' 'illiterate']
default_encoded [0 1 2] <--> default ['no' 'unknown' 'yes']
housing_encoded [0 2 1] <--> housing ['no' 'yes' 'unknown']
loan_encoded [0 2 1] <--> loan ['no' 'yes' 'unknown']
contact_encoded [1 0] <--> contact ['telephone' 'cellular']
poutcome_encoded [1 0 2] <--> poutcome ['nonexistent' 'failure' 'success']
y_encoded [0 1] <--> y ['no' 'yes']
Out[6]:
{'contact': LabelEncoder(),
 'default': LabelEncoder(),
 'education': LabelEncoder(),
 'housing': LabelEncoder(),
 'job': LabelEncoder(),
 'loan': LabelEncoder(),
 'marital': LabelEncoder(),
 'poutcome': LabelEncoder(),
 'y': LabelEncoder()}
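
An aside on this encoding step (a sketch, assuming the same dataset and nondata list as above): .map(le.transform) calls transform once per row, whereas fit_transform encodes the whole column in one pass and yields the same integer codes. For a distance-based model like KNN it may also be worth one-hot encoding the nominal columns (e.g. with pd.get_dummies), since integer codes impose an arbitrary ordering on categories such as job.

# vectorized alternative to the per-row .map() call above (sketch)
from sklearn.preprocessing import LabelEncoder

for col in nondata:
    le = LabelEncoder()
    dataset[col + "_encoded"] = le.fit_transform(dataset[col])

# one-hot alternative for the nominal inputs (everything except the target "y")
onehot = pd.get_dummies(dataset[nondata[:-1]])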

In [7]:
numeric = ["duration", "campaign", "pdays","previous","emp.var.rate","cons.price.idx","cons.conf.idx","euribor3m","nr.employed"]
# feature columns = encoded categoricals + numeric columns;
# pop off the last encoded label ("y_encoded") to use as the target
xlabels = list(transformLabels)
yLabel = xlabels.pop()
xlabels.extend(numeric)
xlabels


Out[7]:
['job_encoded',
 'marital_encoded',
 'education_encoded',
 'default_encoded',
 'housing_encoded',
 'loan_encoded',
 'contact_encoded',
 'poutcome_encoded',
 'duration',
 'campaign',
 'pdays',
 'previous',
 'emp.var.rate',
 'cons.price.idx',
 'cons.conf.idx',
 'euribor3m',
 'nr.employed']

In [8]:
xData = dataset[xlabels]
xData


Out[8]:
job_encoded marital_encoded education_encoded default_encoded housing_encoded loan_encoded contact_encoded poutcome_encoded duration campaign pdays previous emp.var.rate cons.price.idx cons.conf.idx euribor3m nr.employed
0 3 1 0 0 0 0 1 1 261 1 999 0 1.1 94.0 -36.4 4.9 5191.0
1 7 1 3 1 0 0 1 1 149 1 999 0 1.1 94.0 -36.4 4.9 5191.0
2 7 1 3 0 2 0 1 1 226 1 999 0 1.1 94.0 -36.4 4.9 5191.0
3 0 1 1 0 0 0 1 1 151 1 999 0 1.1 94.0 -36.4 4.9 5191.0
4 7 1 3 0 0 2 1 1 307 1 999 0 1.1 94.0 -36.4 4.9 5191.0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
41183 5 1 5 0 2 0 0 1 334 1 999 0 -1.1 94.8 -50.8 1.0 4963.6
41184 1 1 5 0 0 0 0 1 383 1 999 0 -1.1 94.8 -50.8 1.0 4963.6
41185 5 1 6 0 2 0 0 1 189 2 999 0 -1.1 94.8 -50.8 1.0 4963.6
41186 9 1 5 0 0 0 0 1 442 1 999 0 -1.1 94.8 -50.8 1.0 4963.6
41187 5 1 5 0 2 0 0 0 239 3 999 1 -1.1 94.8 -50.8 1.0 4963.6

41188 rows × 17 columns


In [9]:
yData = dataset[yLabel]
yData


Out[9]:
0    0
1    0
2    0
...
41185    0
41186    1
41187    0
Name: y_encoded, Length: 41188, dtype: int64

In [10]:
# scikit-learn imports (cross_validation and grid_search are the pre-0.18 module names)
from sklearn import metrics, cross_validation, grid_search
X_train, X_test, y_train, y_test = cross_validation.train_test_split(xData, yData, random_state=12, test_size=0.2)
print X_train.shape, X_test.shape, y_train.shape, y_test.shape


(32950, 17) (8238, 17) (32950,) (8238,)
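
Side note: only about 11% of the rows are 'yes', so a stratified split keeps that proportion identical in train and test. A minimal sketch, assuming the newer sklearn.model_selection API (the cell above uses the older cross_validation module); the Xtr/Xte names are only illustrative:

from sklearn.model_selection import train_test_split

# stratify=yData preserves the class balance in both partitions
Xtr, Xte, ytr, yte = train_test_split(xData, yData, test_size=0.2, random_state=12, stratify=yData)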

In [11]:
from sklearn.neighbors import KNeighborsClassifier 
from sklearn.ensemble import RandomForestClassifier

In [12]:
# out of the box: both classifiers with default hyperparameters
classifiers = [KNeighborsClassifier,RandomForestClassifier]
for cl in classifiers:
    classif = cl()
    classif.fit(X_train,y_train)
    print cl,classif.score(X_test,y_test)


<class 'sklearn.neighbors.classification.KNeighborsClassifier'> 0.902403495994
<class 'sklearn.ensemble.forest.RandomForestClassifier'> 0.907623209517

In [13]:
# KNN needs its own tuning; start with a coarse grid over n_neighbors
knnParams = {"n_neighbors":[1,2,5,10,20,50]}
kgrid = grid_search.GridSearchCV(KNeighborsClassifier(), knnParams)
kgrid.fit(X_train, y_train)
print kgrid.score(X_test,y_test)
print kgrid.best_params_


0.914056809905
{'n_neighbors': 50}

In [14]:
knnParams = {"n_neighbors":[50,100,250,500]}
kgrid = grid_search.GridSearchCV(KNeighborsClassifier(), knnParams)
kgrid.fit(X_train, y_train)
print kgrid.score(X_test,y_test)
print kgrid.best_params_
knnBest = kgrid.best_estimator_


0.914785142025
{'n_neighbors': 100}
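
One caveat on these KNN numbers: KNN is distance-based, so unscaled columns with large ranges (pdays up to 999, nr.employed around 5000) dominate the Euclidean distance. Below is a sketch of scaling inside a pipeline so the grid search sees standardized features; it assumes the newer model_selection/pipeline API and is not part of the original run:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

pipe = make_pipeline(StandardScaler(), KNeighborsClassifier())
param_grid = {"kneighborsclassifier__n_neighbors": [10, 20, 50, 100]}
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))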

In [15]:
# Find the best parameters by computing their learning curve (feel free to verify this with grid search)
import matplotlib as mpl
import matplotlib.pyplot as plt

In [ ]:
from sklearn.pipeline import make_pipeline
?np.arange

In [18]:
neighbors = np.arange(5, 80, 5)
training_error = []
test_error = []
mse = metrics.mean_squared_error


for n in neighbors:
    model = KNeighborsClassifier(n_neighbors=n)
    model.fit(X_train, y_train)
    training_error.append(mse(model.predict(X_train), y_train))
    test_error.append(mse(model.predict(X_test), y_test))

# note that the test error can also be computed via cross-validation
plt.plot(neighbors, training_error, label='training')
plt.plot(neighbors, test_error, label='test')
plt.legend()
plt.xlabel('n_neighbors')
plt.ylabel('MSE')


Out[18]:
[Plot: training and test MSE versus n_neighbors]
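
For reference, scikit-learn can compute this sweep with cross-validation directly via validation_curve. A sketch under the assumption that sklearn.model_selection is available (it scores accuracy rather than MSE, so the curves are flipped relative to the plot above):

from sklearn.model_selection import validation_curve

param_range = np.arange(5, 80, 5)
train_scores, val_scores = validation_curve(
    KNeighborsClassifier(), X_train, y_train,
    param_name="n_neighbors", param_range=param_range, cv=5)

plt.plot(param_range, train_scores.mean(axis=1), label='training (cv)')
plt.plot(param_range, val_scores.mean(axis=1), label='validation (cv)')
plt.legend()
plt.xlabel('n_neighbors')
plt.ylabel('accuracy')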

In [19]:
# Create a classification report
from sklearn.metrics import classification_report

In [20]:
#It looks like it levels out around 40

knnModel = KNeighborsClassifier(n_neighbors=40)
knnModel.fit(X_train, y_train)
    
y_true = y_test
y_pred = knnModel.predict(X_test)
print(classification_report(y_true, y_pred))


             precision    recall  f1-score   support

          0       0.94      0.97      0.95      7316
          1       0.65      0.49      0.56       922

avg / total       0.91      0.91      0.91      8238
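
The averaged scores hide the raw counts; a confusion matrix shows how many of the 922 'yes' cases the model actually catches. A short sketch, assuming y_true and y_pred from the cell above are still in scope:

from sklearn.metrics import confusion_matrix

# rows = true class (no, yes), columns = predicted class
print(confusion_matrix(y_true, y_pred))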


In [22]:
from sklearn.cross_validation import KFold
num_rows = yData.shape[0]

# manual 5-fold cross-validation: collect out-of-fold predictions for every row
kf = KFold(num_rows, n_folds=5)
y_pred = np.zeros(num_rows)
for train, test in kf:
    X_train = xData.values[train, :]
    X_test = xData.values[test, :]
    y_train = yData.values[train]
    y_test = yData.values[test]
    clf = KNeighborsClassifier(n_neighbors=40)
    clf.fit(X_train, y_train)
    y_pred[test] = clf.predict(X_test)

print(classification_report(yData.values, y_pred))


             precision    recall  f1-score   support

          0       0.92      0.98      0.95     36548
          1       0.63      0.30      0.41      4640

avg / total       0.88      0.90      0.89     41188
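
The manual loop above can also be written in one call with cross_val_predict, which likewise returns out-of-fold predictions for every row. A sketch assuming the newer sklearn.model_selection API; it stratifies the folds by default for classifiers, so the numbers may differ slightly from the unshuffled KFold used here:

from sklearn.model_selection import cross_val_predict

y_pred_cv = cross_val_predict(KNeighborsClassifier(n_neighbors=40), xData, yData, cv=5)
print(classification_report(yData, y_pred_cv))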


In [23]:
from sklearn.metrics import roc_curve
from sklearn.metrics import auc

def plot_roc_curve(target_test, target_predicted_proba):
    fpr, tpr, thresholds = roc_curve(target_test, target_predicted_proba)
    roc_auc = auc(fpr, tpr)
    
    # Plot ROC curve
    plt.plot(fpr, tpr, label='ROC curve (area = %0.3f)' % roc_auc)
    plt.plot([0, 1], [0, 1], 'k--')  # random predictions curve
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.0])
    plt.xlabel('False Positive Rate (1 - Specificity)')
    plt.ylabel('True Positive Rate (Sensitivity)')
    plt.title('Receiver Operating Characteristic')
    plt.legend(loc="lower right")

In [24]:
plot_roc_curve(yData.values, y_pred)
y_pred.shape


Out[24]:
(41188,)

In [26]:
num_rows = yData.shape[0]

# same out-of-fold setup, this time with a smaller neighborhood (n_neighbors=10)
kf = KFold(num_rows, n_folds=5)
y_pred = np.zeros(num_rows)
for train, test in kf:
    X_train = xData.values[train, :]
    X_test = xData.values[test, :]
    y_train = yData.values[train]
    y_test = yData.values[test]
    clf = KNeighborsClassifier(n_neighbors=10)
    clf.fit(X_train, y_train)
    y_pred[test] = clf.predict(X_test)

print(classification_report(yData.values, y_pred))
plot_roc_curve(yData.values, y_pred)
y_pred.shape


             precision    recall  f1-score   support

          0       0.91      0.98      0.94     36548
          1       0.62      0.24      0.35      4640

avg / total       0.88      0.90      0.88     41188

Out[26]:
(41188,)

# 40 was better


In [27]:
# Random Forest: sweep the number of trees (n_estimators)

treeSizes = np.arange(5, 80,5)
training_error = []
test_error = []
mse = metrics.mean_squared_error

for treeSize in treeSizes:
    model = RandomForestClassifier(n_estimators=treeSize)
    model.fit(X_train, y_train)
    training_error.append(mse(model.predict(X_train), y_train))
    test_error.append(mse(model.predict(X_test), y_test))
    
# note that the test error can also be computed via cross-validation
plt.plot(treeSizes, training_error, label='training')
plt.plot(treeSizes, test_error, label='test')
plt.legend()
plt.xlabel('n_estimators (number of trees)')
plt.ylabel('MSE')


Out[27]:
[Plot: training and test MSE versus n_estimators]

In [28]:
# let's zoom: plot the training error x10 so both curves are visible on one axis
training_error = []
test_error = []
mse = metrics.mean_squared_error

for treeSize in treeSizes:
    model = RandomForestClassifier(n_estimators=treeSize)
    model.fit(X_train, y_train)
    training_error.append(mse(model.predict(X_train), y_train)*10)
    test_error.append(mse(model.predict(X_test), y_test))
    
# note that the test error can also be computed via cross-validation
plt.plot(treeSizes, training_error, label='training * 10')
plt.plot(treeSizes, test_error, label='test')
plt.legend()
plt.xlabel('n_estimators (number of trees)')
plt.ylabel('MSE')


Out[28]:
[Plot: training error (x10) and test MSE versus n_estimators]

In [29]:
#it doesn't look like you gain much past 40
treeModel = RandomForestClassifier(n_estimators=40)
treeModel.fit(X_train, y_train)
    
y_true = y_test
y_pred = treeModel.predict(X_test)
print(classification_report(y_true, y_pred))


             precision    recall  f1-score   support

          0       0.71      0.98      0.82      5697
          1       0.69      0.10      0.18      2540

avg / total       0.70      0.71      0.62      8237
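
Note on the support column above: 2540 positives out of 8237 is roughly 31% 'yes', far from the 11% in the full data. That is because the KFold loops earlier rebound X_train/X_test to the last (date-ordered) fold, so this report is not directly comparable to the KNN report from the original 80/20 split. A sketch that re-splits first (Xtr/Xte and rf are names introduced here for illustration):

from sklearn.model_selection import train_test_split

Xtr, Xte, ytr, yte = train_test_split(xData, yData, test_size=0.2, random_state=12)
rf = RandomForestClassifier(n_estimators=40, random_state=12).fit(Xtr, ytr)
print(classification_report(yte, rf.predict(Xte)))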


In [30]:
# rank features by importance, most important first
sorted(zip(treeModel.feature_importances_, xlabels), reverse=True)


Out[30]:
[(0.48631183535186012, 'duration'),
 (0.11847951848887675, 'euribor3m'),
 (0.074231561058483608, 'job_encoded'),
 (0.067134421623104734, 'campaign'),
 (0.064304197039605965, 'education_encoded'),
 (0.036468564935170568, 'marital_encoded'),
 (0.027250680516874043, 'housing_encoded'),
 (0.023606615027988137, 'cons.conf.idx'),
 (0.021415948315049178, 'cons.price.idx'),
 (0.018669178631437012, 'loan_encoded'),
 (0.017975940146505678, 'default_encoded'),
 (0.0089512349909321512, 'contact_encoded'),
 (0.0087305468712896902, 'poutcome_encoded'),
 (0.0079248570405251843, 'emp.var.rate'),
 (0.0066932963580999842, 'nr.employed'),
 (0.00613727644552047, 'previous'),
 (0.0057143271586768737, 'pdays')]
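
One way to act on these importances (a sketch, not a definitive recipe): retrain on only the top-ranked columns, which can remove noisy features and sometimes improves precision. Two caveats: the dataset documentation warns that duration is only known after the call ends, so a realistic model should arguably drop it even though it dominates here; and KNN tends to gain more from feature scaling than from feature selection. The names top, rf_top, Xtr, etc. are introduced here for illustration:

from sklearn.model_selection import train_test_split

# keep the 8 most important features according to the forest above
top = [name for score, name in
       sorted(zip(treeModel.feature_importances_, xlabels), reverse=True)[:8]]

Xtr, Xte, ytr, yte = train_test_split(xData[top], yData, test_size=0.2, random_state=12)
rf_top = RandomForestClassifier(n_estimators=40).fit(Xtr, ytr)
print(classification_report(yte, rf_top.predict(Xte)))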

In [31]:
num_rows = yData.shape[0]

# out-of-fold predictions for the random forest (n_jobs=3 parallelizes the trees)
kf = KFold(num_rows, n_folds=5)
y_pred = np.zeros(num_rows)
for train, test in kf:
    X_train = xData.values[train, :]
    X_test = xData.values[test, :]
    y_train = yData.values[train]
    y_test = yData.values[test]
    clf = RandomForestClassifier(n_estimators=40, n_jobs=3)
    clf.fit(X_train, y_train)
    y_pred[test] = clf.predict(X_test)

print(classification_report(yData.values, y_pred))


             precision    recall  f1-score   support

          0       0.90      0.98      0.94     36548
          1       0.53      0.18      0.27      4640

avg / total       0.86      0.89      0.86     41188


In [32]:
plot_roc_curve(yData.values, y_pred)



In [33]:
roc_curve(yData.values, y_pred)


Out[33]:
(array([ 0.        ,  0.01980957,  1.        ]),
 array([ 0.        ,  0.17715517,  1.        ]),
 array([ 2.,  1.,  0.]))
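
The output above has only three thresholds because y_pred holds hard 0/1 labels, which is why the ROC curves plotted so far are single-elbow lines. Feeding class probabilities gives roc_curve a full range of thresholds to sweep. A sketch, assuming xData, yData and plot_roc_curve are in scope (the Xtr/Xte names and the fresh split are introduced here for illustration):

from sklearn.model_selection import train_test_split

Xtr, Xte, ytr, yte = train_test_split(xData, yData, test_size=0.2, random_state=12)
model = KNeighborsClassifier(n_neighbors=40).fit(Xtr, ytr)
proba = model.predict_proba(Xte)[:, 1]   # P(y = 'yes') instead of a hard 0/1 prediction
plot_roc_curve(yte, proba)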