HW 3: KNN & Random Forest

Get your data here. The data is related to direct marketing campaigns of a Portuguese banking institution. The campaigns were based on phone calls; often, more than one contact with the same client was required in order to assess whether the product (a bank term deposit) would be subscribed ('yes') or not ('no'). There are four datasets:

1) bank-additional-full.csv with all examples (41188) and 20 inputs, ordered by date (from May 2008 to November 2010)

2) bank-additional.csv with 10% of the examples (4119), randomly selected from 1), and 20 inputs.

3) bank-full.csv with all examples and 17 inputs, ordered by date (older version of this dataset with less inputs).

4) bank.csv with 10% of the examples and 17 inputs, randomly selected from 3 (older version of this dataset with less inputs).

The two smaller datasets are provided for testing more computationally demanding machine learning algorithms (e.g., SVM).

The classification goal is to predict whether the client will subscribe to a term deposit (variable y, yes/no).

Assignment

  • Preprocess your data (you may find LabelEncoder useful)
  • Train both KNN and Random Forest models
  • Find the best parameters by computing their learning curve (feel free to verify this with grid search)
  • Create a classification report
  • Inspect your models: which features are most important? How might you use this information to improve model precision?

In [1]:
# Standard imports for data analysis packages in Python
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

# This enables inline Plots
%matplotlib inline

# Limit rows displayed in notebook
pd.set_option('display.max_rows', 10)
pd.set_option('display.precision', 2)

In [2]:
pd.__version__


Out[2]:
'0.14.1'

In [3]:
# The file is ';'-delimited; sep=None lets pandas sniff the separator, at the
# cost of falling back to the python engine (hence the warning below)
df = pd.read_csv('bank-additional-full.csv', sep=None)
df.head(5)


C:\Anaconda\lib\site-packages\pandas\io\parsers.py:624: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support sep=None with delim_whitespace=False; you can avoid this warning by specifying engine='python'.
  ParserWarning)
Out[3]:
age job marital education default housing loan contact month day_of_week ... campaign pdays previous poutcome emp.var.rate cons.price.idx cons.conf.idx euribor3m nr.employed y
0 56 housemaid married basic.4y no no no telephone may mon ... 1 999 0 nonexistent 1.1 94 -36.4 4.9 5191 no
1 57 services married high.school unknown no no telephone may mon ... 1 999 0 nonexistent 1.1 94 -36.4 4.9 5191 no
2 37 services married high.school no yes no telephone may mon ... 1 999 0 nonexistent 1.1 94 -36.4 4.9 5191 no
3 40 admin. married basic.6y no no no telephone may mon ... 1 999 0 nonexistent 1.1 94 -36.4 4.9 5191 no
4 56 services married high.school no no yes telephone may mon ... 1 999 0 nonexistent 1.1 94 -36.4 4.9 5191 no

5 rows × 21 columns


In [4]:
df.info()


<class 'pandas.core.frame.DataFrame'>
Int64Index: 41188 entries, 0 to 41187
Data columns (total 21 columns):
age               41188 non-null int64
job               41188 non-null object
marital           41188 non-null object
education         41188 non-null object
default           41188 non-null object
housing           41188 non-null object
loan              41188 non-null object
contact           41188 non-null object
month             41188 non-null object
day_of_week       41188 non-null object
duration          41188 non-null int64
campaign          41188 non-null int64
pdays             41188 non-null int64
previous          41188 non-null int64
poutcome          41188 non-null object
emp.var.rate      41188 non-null float64
cons.price.idx    41188 non-null float64
cons.conf.idx     41188 non-null float64
euribor3m         41188 non-null float64
nr.employed       41188 non-null float64
y                 41188 non-null object
dtypes: float64(5), int64(5), object(11)

In [5]:
'''
# bank client data:
1 - age (numeric)
2 - job : type of job (categorical: 'admin.','blue-collar','entrepreneur','housemaid','management','retired','self-employed','services','student','technician','unemployed','unknown')
3 - marital : marital status (categorical: 'divorced','married','single','unknown'; note: 'divorced' means divorced or widowed)
4 - education (categorical: 'basic.4y','basic.6y','basic.9y','high.school','illiterate','professional.course','university.degree','unknown')
5 - default: has credit in default? (categorical: 'no','yes','unknown')
6 - housing: has housing loan? (categorical: 'no','yes','unknown')
7 - loan: has personal loan? (categorical: 'no','yes','unknown')
# related with the last contact of the current campaign:
8 - contact: contact communication type (categorical: 'cellular','telephone') 
9 - month: last contact month of year (categorical: 'jan', 'feb', 'mar', ..., 'nov', 'dec')
10 - day_of_week: last contact day of the week (categorical: 'mon','tue','wed','thu','fri')
11 - duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target 
     (e.g., if duration=0 then y='no'). Yet, the duration is not known before a call is performed. 
     Also, after the end of the call y is obviously known. Thus, this input should only be included for benchmark purposes 
     and should be discarded if the intention is to have a realistic predictive model.
# other attributes:
12 - campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)
13 - pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric; 999 means client was not previously contacted)
14 - previous: number of contacts performed before this campaign and for this client (numeric)
15 - poutcome: outcome of the previous marketing campaign (categorical: 'failure','nonexistent','success')
# social and economic context attributes
16 - emp.var.rate: employment variation rate - quarterly indicator (numeric)
17 - cons.price.idx: consumer price index - monthly indicator (numeric) 
18 - cons.conf.idx: consumer confidence index - monthly indicator (numeric) 
19 - euribor3m: euribor 3 month rate - daily indicator (numeric)
20 - nr.employed: number of employees - quarterly indicator (numeric)

Output variable (desired target):
21 - y - has the client subscribed a term deposit? (binary: 'yes','no')
'''


Out[5]:
"\n# bank client data:\n1 - age (numeric)\n2 - job : type of job (categorical: 'admin.','blue-collar','entrepreneur','housemaid','management','retired','self-employed','services','student','technician','unemployed','unknown')\n3 - marital : marital status (categorical: 'divorced','married','single','unknown'; note: 'divorced' means divorced or widowed)\n4 - education (categorical: 'basic.4y','basic.6y','basic.9y','high.school','illiterate','professional.course','university.degree','unknown')\n5 - default: has credit in default? (categorical: 'no','yes','unknown')\n6 - housing: has housing loan? (categorical: 'no','yes','unknown')\n7 - loan: has personal loan? (categorical: 'no','yes','unknown')\n# related with the last contact of the current campaign:\n8 - contact: contact communication type (categorical: 'cellular','telephone') \n9 - month: last contact month of year (categorical: 'jan', 'feb', 'mar', ..., 'nov', 'dec')\n10 - day_of_week: last contact day of the week (categorical: 'mon','tue','wed','thu','fri')\n11 - duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target \n     (e.g., if duration=0 then y='no'). Yet, the duration is not known before a call is performed. \n     Also, after the end of the call y is obviously known. Thus, this input should only be included for benchmark purposes \n     and should be discarded if the intention is to have a realistic predictive model.\n# other attributes:\n12 - campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)\n13 - pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric; 999 means client was not previously contacted)\n14 - previous: number of contacts performed before this campaign and for this client (numeric)\n15 - poutcome: outcome of the previous marketing campaign (categorical: 'failure','nonexistent','success')\n# social and economic context attributes\n16 - emp.var.rate: employment variation rate - quarterly indicator (numeric)\n17 - cons.price.idx: consumer price index - monthly indicator (numeric) \n18 - cons.conf.idx: consumer confidence index - monthly indicator (numeric) \n19 - euribor3m: euribor 3 month rate - daily indicator (numeric)\n20 - nr.employed: number of employees - quarterly indicator (numeric)\n\nOutput variable (desired target):\n21 - y - has the client subscribed a term deposit? (binary: 'yes','no')\n"

In [6]:
# So, for our purposes:
# 1, 12, 13, 14, 16, 17, 18, 19, 20 are already numeric
# 2, 3, 4, 8, 9, 10 are categorical and will need to be encoded (dummies or LabelEncoder)
# 5, 6, 7, 15 are low-cardinality categoricals (mostly yes/no plus 'unknown') -- we need to look at
#   how frequent the unknowns are to decide whether to treat 'unknown' as its own category or impute
# 11 (duration) should be dropped before we reach the training phase (as described above), but may
#   still be useful while exploring the data for correlations
# 21 is the target vector (y)

# While the documentation does not explicitly discuss missing values, the demographic categories
# (2, 3, 4, 5, 6, 7) all have 'unknown' as an option, and of course we should look for missing
# values elsewhere as well -- see the quick check below
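
A quick way to check both kinds of missingness at once, reusing the df loaded above (a minimal sketch):

# Literal NaNs across the whole frame (zero here, per df.info() above),
# plus 'unknown' counts per text column.
print(df.isnull().sum().sum())

obj_cols = df.dtypes[df.dtypes == object].index
print((df[obj_cols] == 'unknown').sum())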

In [7]:
# fix target as binary
df['y'].replace('no', 0, inplace = True)
df['y'].replace('yes', 1, inplace = True)

# create lists of variables by type (for later)
cat_var = ['job','marital','education',
              'default','housing','loan',
              'contact','month','day_of_week',
              'poutcome',]
# per the dataset description, we're not going to use 'duration' in our model, so we can omit it here
num_var = ['age','campaign','pdays','previous','emp.var.rate','cons.price.idx','cons.conf.idx',
                         'euribor3m','nr.employed',]

In [8]:
print 'employment change:' 
print df['emp.var.rate'].unique()
print 'marital status:' 
print df.marital.unique()


employment change:
[ 1.1  1.4 -0.1 -0.2 -1.8 -2.9 -3.4 -3.  -1.7 -1.1]
marital status:
['married' 'single' 'divorced' 'unknown']

In [9]:
# Running these checks for all the variables (omitted for space) 
# didn't turn up anything else surprising / concerning with respect to missing values

# Now we look at how prevalent the 'unknown' values are in the categorical variables, to see which
# could be imputed and which would be better left as their own category

In [10]:
print 'job:'
print df.job.value_counts()
print 'marital:'
print df.marital.value_counts()
print 'education:'
print df.education.value_counts()
print 'default:'
print df.default.value_counts()
print 'housing:'
print df.housing.value_counts()
print 'loan:'
print df.loan.value_counts()
print 'poutcome:'
print df.poutcome.value_counts()


job:
admin.         10422
blue-collar     9254
technician      6743
...
unemployed    1014
student        875
unknown        330
Length: 12, dtype: int64
marital:
married     24928
single      11568
divorced     4612
unknown        80
dtype: int64
education:
university.degree      12168
high.school             9515
basic.9y                6045
professional.course     5243
basic.4y                4176
basic.6y                2292
unknown                 1731
illiterate                18
dtype: int64
default:
no         32588
unknown     8597
yes            3
dtype: int64
housing:
yes        21576
no         18622
unknown      990
dtype: int64
loan:
no         33950
yes         6248
unknown      990
dtype: int64
poutcome:
nonexistent    35563
failure         4252
success         1373
dtype: int64

In [11]:
# ok, so it looks like we could consider impute Job, Marital, Education, Default (=no), Housing, Loan
# but not poutcome

# however, if we're on a short turnaround, I think it'll be ok to treat the unknowns as a category for now
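
If we did go the imputation route later, a rough sketch of what it could look like (mode imputation for the columns where 'unknown' is rare; illustrative only, not used below):

# Replace 'unknown' with the most common known value in each of these columns.
impute_cols = ['job', 'marital', 'education', 'housing', 'loan']
df_imp = df.copy()
for col in impute_cols:
    mode_val = df_imp.loc[df_imp[col] != 'unknown', col].mode()[0]
    df_imp[col] = df_imp[col].replace('unknown', mode_val)

# 'default' would instead map unknown -> 'no' (there are only 3 literal yeses),
# and 'poutcome' stays as-is, since 'nonexistent' really is its own category.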

In [12]:
numFeat = ['age','campaign','pdays','previous','emp.var.rate','cons.price.idx','cons.conf.idx']
pd.tools.plotting.scatter_matrix(df[numFeat], alpha = 0.2, figsize = (15, 10), diagonal = 'kde');


Encode the data for sklearn -- prepare and train models


In [13]:
bank_df = df['y']

# another 'for now': we're going to use dummies instead of LabelEncoder, may try the other way later
for var in cat_var:
    bank_df = pd.concat([bank_df,pd.get_dummies(df[var], prefix=var)],axis=1)

bank_df = pd.concat([bank_df,df[num_var]],axis=1)
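
A quick check that the encoding produced an all-numeric frame (a small sketch):

# 53 dummy columns from the 10 categoricals, plus the 9 numeric features and the target.
print(bank_df.shape)
print(bank_df.dtypes.value_counts())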

In [14]:
from sklearn.cross_validation import cross_val_score,train_test_split
from sklearn.learning_curve import learning_curve
from sklearn import feature_extraction
from sklearn import metrics

In [15]:
'''
The start of an earlier attempt to use LabelEncoder

from sklearn.preprocessing import LabelEncoder

vectorizer = LabelEncoder()
# we need a smarter way of doing this column-wise
df_categ = ['job', 'marital', 'education', 'default', 'housing', 'loan', 'contact', 'month', 'day_of_week', 'poutcome']

X_vec = vectorizer.fit(X[df_categ])
'''


Out[15]:
"\nThe start of an earlier attempt to use LabelEncoder\n\nfrom sklearn.preprocessing import LabelEncoder\n\nvectorizer = LabelEncoder()\n# we need a smarter way of doing this column-wise\ndf_categ = ['job', 'marital', 'education', 'default', 'housing', 'loan', 'contact', 'month', 'day_of_week', 'poutcome']\n\nX_vec = vectorizer.fit(X[df_categ])\n"

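For completeness, the column-wise version of that idea is one LabelEncoder per categorical column; a sketch, not used in the rest of this notebook:

from sklearn.preprocessing import LabelEncoder

# Fit one encoder per categorical column, on a copy of the frame.
df_le = df.copy()
encoders = {}
for col in cat_var:
    le = LabelEncoder()
    df_le[col] = le.fit_transform(df_le[col])
    encoders[col] = le

# Caveat: LabelEncoder assigns arbitrary integer codes, which a distance-based model
# like KNN would treat as ordered; the dummies used below avoid that problem.
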
In [16]:
X_train, X_test, y_train, y_test = train_test_split(bank_df.drop('y', axis = 1), bank_df['y'], test_size=0.2, random_state=24)

In [17]:
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

In [18]:
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)

print cross_val_score(knn, X_train, y_train)


[ 0.89184268  0.88709824  0.88636984]

In [19]:
# a good accuracy score, considering we're just using the default parameters for
# k-Nearest Neighbors (k=5)

# note that we get an array of three scores because cross_val_score defaults to
# 3-fold cross-validation; see the sketch below for setting the folds and the metric explicitly
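
For example, a minimal sketch of the same call with cv and scoring set explicitly (F1 on the positive class will matter once we look at the class imbalance):

# Accuracy over 5 folds, then F1 for the 'yes' class.
print(cross_val_score(knn, X_train, y_train, cv=5))
print(cross_val_score(knn, X_train, y_train, cv=5, scoring='f1'))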

In [20]:
# The plotting code is cribbed liberally from Chad's 'hints' file, because I'm still clumsy with pyplot and this code is complicated
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()

    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")

    plt.legend(loc="best")
    return plt

In [21]:
y_pred = knn.predict(X_test)
print classification_report(y_test, y_pred)


             precision    recall  f1-score   support

          0       0.91      0.97      0.94      7280
          1       0.51      0.27      0.35       958

avg / total       0.86      0.88      0.87      8238


In [22]:
# ah, so the cross-validation scores are mostly a matter of doing well on the negatives (which
# make up roughly 88% of the data) -- we aren't great at picking the yeses out of the pile;
# the sketch below makes this concrete
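
The confusion matrix and the accuracy of the majority-class baseline, using the metrics module imported above (a quick sketch):

# Rows are actual classes, columns are predicted classes, in [0, 1] order.
print(metrics.confusion_matrix(y_test, y_pred))

# Accuracy of a do-nothing model that predicts 'no' for everyone:
print(1.0 - y_test.mean())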

In [23]:
plot_learning_curve(knn, 'KNN Learning Curve', X_train, y_train)


Out[23]:
<module 'matplotlib.pyplot' from 'C:\Anaconda\lib\site-packages\matplotlib\pyplot.pyc'>

In [24]:
# that is a heckuva pretty plot, but I'm not 100% sure how to interpret it...
# looks to me like we're doing the best with cross-validation at the second data point? not sure what this indicates
# for a preferred value of k (if anything)

# the description of the train_sizes parameter in scikit-learn.org isn't really helping, either :( -- if it's a random
# seed for the examples being scored, shouldn't there be a random_state to set?
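
A small grid search over n_neighbors, using the old sklearn.grid_search module this sklearn version provides (newer versions have it under sklearn.model_selection); the k values here are just a plausible starting grid -- a sketch:

from sklearn.grid_search import GridSearchCV

# Try a handful of odd k values; GridSearchCV refits on the best one.
knn_gs = GridSearchCV(KNeighborsClassifier(), {'n_neighbors': [3, 5, 7, 11, 15, 21]})
knn_gs.fit(X_train, y_train)
print(knn_gs.best_params_, knn_gs.best_score_)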

In [25]:
from sklearn.ensemble import RandomForestClassifier

In [26]:
rfc = RandomForestClassifier(n_estimators=201)
rfc.fit(X_train,y_train)
y_pred = rfc.predict(X_test)
print classification_report(y_test, y_pred)


             precision    recall  f1-score   support

          0       0.91      0.97      0.94      7280
          1       0.55      0.28      0.37       958

avg / total       0.87      0.89      0.87      8238


In [27]:
# random forest with 201 trees does a bit better than knn on the positives
# (precision 0.55 vs 0.51, f1 0.37 vs 0.35)!

In [28]:
zip(rfc.feature_importances_,bank_df.drop('y', axis = 1).keys())


Out[28]:
[(0.017838649850750017, 'job_admin.'),
 (0.012054891030129961, 'job_blue-collar'),
 (0.0055935321444852536, 'job_entrepreneur'),
 (0.0043437511049777998, 'job_housemaid'),
 (0.0092524350330243088, 'job_management'),
 (0.007169665640596022, 'job_retired'),
 (0.0062955183588327473, 'job_self-employed'),
 (0.0093006406852861427, 'job_services'),
 (0.0054533275909672777, 'job_student'),
 (0.014548944862594266, 'job_technician'),
 (0.0050395523968630928, 'job_unemployed'),
 (0.0022344268583305358, 'job_unknown'),
 (0.010511063367466907, 'marital_divorced'),
 (0.016961356118093927, 'marital_married'),
 (0.015435250354680135, 'marital_single'),
 (0.00044846559735057769, 'marital_unknown'),
 (0.0081869860163264179, 'education_basic.4y'),
 (0.006853944926673055, 'education_basic.6y'),
 (0.01165382823289185, 'education_basic.9y'),
 (0.016774193435635947, 'education_high.school'),
 (0.00033148623289347071, 'education_illiterate'),
 (0.012150293988204296, 'education_professional.course'),
 (0.016555248572044524, 'education_university.degree'),
 (0.0068823873050891018, 'education_unknown'),
 (0.0077130674058278502, 'default_no'),
 (0.0074881430762760801, 'default_unknown'),
 (1.6927008461051429e-07, 'default_yes'),
 (0.020830342640809251, 'housing_no'),
 (0.0025936761196744644, 'housing_unknown'),
 (0.020789000946072153, 'housing_yes'),
 (0.014045363461345654, 'loan_no'),
 (0.002576096619741878, 'loan_unknown'),
 (0.013703761913301072, 'loan_yes'),
 (0.0083595958281715392, 'contact_cellular'),
 (0.0077851329782198074, 'contact_telephone'),
 (0.0035787869216831478, 'month_apr'),
 (0.0024859012161830875, 'month_aug'),
 (0.00096411094248545503, 'month_dec'),
 (0.0026912916843749232, 'month_jul'),
 (0.0030606443416814912, 'month_jun'),
 (0.0049325591288076152, 'month_mar'),
 (0.0051393167751377523, 'month_may'),
 (0.0023650732037179068, 'month_nov'),
 (0.005280691154151195, 'month_oct'),
 (0.0023447651938790426, 'month_sep'),
 (0.014115933337583382, 'day_of_week_fri'),
 (0.014534476354494131, 'day_of_week_mon'),
 (0.014246624118128571, 'day_of_week_thu'),
 (0.014757436097798159, 'day_of_week_tue'),
 (0.014471582494446312, 'day_of_week_wed'),
 (0.0077336238610644318, 'poutcome_failure'),
 (0.0078514364423316104, 'poutcome_nonexistent'),
 (0.021380363221324104, 'poutcome_success'),
 (0.15871862385386251, 'age'),
 (0.082052632989084259, 'campaign'),
 (0.0363041206693396, 'pdays'),
 (0.014615907917689124, 'previous'),
 (0.024330018769484752, 'emp.var.rate'),
 (0.023620663430858158, 'cons.price.idx'),
 (0.028681738038435302, 'cons.conf.idx'),
 (0.11426356335551019, 'euribor3m'),
 (0.047723924522751593, 'nr.employed')]

In [29]:
plot_learning_curve(rfc, 'random forest', X_train, y_train)


Out[29]:
<module 'matplotlib.pyplot' from 'C:\Anaconda\lib\site-packages\matplotlib\pyplot.pyc'>

In [30]:
# hm, that doesn't look as informative -- next step here may be a grid search on the # of trees?

# or a return to the feature_importances_ list... which we may need to scale/normalize to get a lot of meaning from
# for now, it looks like Age is important -- and, since we know the range on it, it's definitely making a bigger impact
# than those dummies we're using for the categorical variables (note: need to add labels to the get_dummies function)
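
A short sketch of that sorted view:

# Top ten features by impurity-based importance, largest first.
feat_names = bank_df.drop('y', axis=1).columns
ranked = sorted(zip(rfc.feature_importances_, feat_names), reverse=True)
for imp, name in ranked[:10]:
    print('%-30s %.3f' % (name, imp))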

In [31]:
from sklearn.grid_search import GridSearchCV

In [32]:
d = {'n_estimators': (101,201,301,401,501)}

In [33]:
print d


{'n_estimators': (101, 201, 301, 401, 501)}

In [34]:
my_gs = GridSearchCV(RandomForestClassifier(), d)

In [35]:
my_gs.fit(X_train,y_train)


Out[35]:
GridSearchCV(cv=None,
       estimator=RandomForestClassifier(bootstrap=True, compute_importances=None,
            criterion='gini', max_depth=None, max_features='auto',
            max_leaf_nodes=None, min_density=None, min_samples_leaf=1,
            min_samples_split=2, n_estimators=10, n_jobs=1,
            oob_score=False, random_state=None, verbose=0),
       fit_params={}, iid=True, loss_func=None, n_jobs=1,
       param_grid={'n_estimators': (101, 201, 301, 401, 501)},
       pre_dispatch='2*n_jobs', refit=True, score_func=None, scoring=None,
       verbose=0)

In [36]:
my_gs.best_params_, my_gs.best_score_


Out[36]:
({'n_estimators': 401}, 0.89320182094081946)
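
n_estimators is only one knob; a broader grid over tree depth and feature sampling might be the more useful follow-up. A hypothetical sketch (slow to run on the full training set, and these values are just a starting point):

# Every added value multiplies the number of fits (3-fold CV per combination).
wide_grid = {'n_estimators': (201, 401),
             'max_features': ('sqrt', 'log2'),
             'min_samples_leaf': (1, 5, 10)}
rf_gs = GridSearchCV(RandomForestClassifier(), wide_grid)
rf_gs.fit(X_train, y_train)
print(rf_gs.best_params_, rf_gs.best_score_)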

In [37]:
rfc4 = RandomForestClassifier(n_estimators=401)
rfc4.fit(X_train,y_train)
y_pred4 = rfc4.predict(X_test)
print classification_report(y_test, y_pred4)


             precision    recall  f1-score   support

          0       0.91      0.97      0.94      7280
          1       0.55      0.28      0.37       958

avg / total       0.87      0.89      0.87      8238


In [ ]: