Kaggle San Francisco Crime Classification

Berkeley MIDS W207 Final Project: Sam Goodgame, Sarah Cha, Kalvin Kao, Bryan Moore

Basic Modeling

Environment and Data


In [39]:
# Import relevant libraries:
import time
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import log_loss
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# Import Meta-estimators
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import GradientBoostingClassifier
# Import Calibration tools
from sklearn.calibration import CalibratedClassifierCV

# Set random seed and format print output:
np.random.seed(0)
np.set_printoptions(precision=3)

DDL to construct table for SQL transformations:

CREATE TABLE kaggle_sf_crime (
dates TIMESTAMP,                                
category VARCHAR,
descript VARCHAR,
dayofweek VARCHAR,
pd_district VARCHAR,
resolution VARCHAR,
addr VARCHAR,
X FLOAT,
Y FLOAT);

Getting training data into a locally hosted PostgreSQL database:

\copy kaggle_sf_crime FROM '/Users/Goodgame/Desktop/MIDS/207/final/sf_crime_train.csv' DELIMITER ',' CSV HEADER;

SQL Query used for transformations:

SELECT
  category,
  date_part('hour', dates) AS hour_of_day,
  CASE
    WHEN dayofweek = 'Monday' then 1
    WHEN dayofweek = 'Tuesday' THEN 2
    WHEN dayofweek = 'Wednesday' THEN 3
    WHEN dayofweek = 'Thursday' THEN 4
    WHEN dayofweek = 'Friday' THEN 5
    WHEN dayofweek = 'Saturday' THEN 6
    WHEN dayofweek = 'Sunday' THEN 7
  END AS dayofweek_numeric,
  X,
  Y,
  CASE
    WHEN pd_district = 'BAYVIEW' THEN 1
    ELSE 0
  END AS bayview_binary,
    CASE
    WHEN pd_district = 'INGLESIDE' THEN 1
    ELSE 0
  END AS ingleside_binary,
    CASE
    WHEN pd_district = 'NORTHERN' THEN 1
    ELSE 0
  END AS northern_binary,
    CASE
    WHEN pd_district = 'CENTRAL' THEN 1
    ELSE 0
  END AS central_binary,
    CASE
    WHEN pd_district = 'BAYVIEW' THEN 1
    ELSE 0
  END AS pd_bayview_binary, -- note: duplicate of bayview_binary above
    CASE
    WHEN pd_district = 'MISSION' THEN 1
    ELSE 0
  END AS mission_binary,
    CASE
    WHEN pd_district = 'SOUTHERN' THEN 1
    ELSE 0
  END AS southern_binary,
    CASE
    WHEN pd_district = 'TENDERLOIN' THEN 1
    ELSE 0
  END AS tenderloin_binary,
    CASE
    WHEN pd_district = 'PARK' THEN 1
    ELSE 0
  END AS park_binary,
    CASE
    WHEN pd_district = 'RICHMOND' THEN 1
    ELSE 0
  END AS richmond_binary,
    CASE
    WHEN pd_district = 'TARAVAL' THEN 1
    ELSE 0
  END AS taraval_binary
FROM kaggle_sf_crime;

Load the data into training, development, and test:


In [40]:
data_path = "./data/train_transformed.csv"

df = pd.read_csv(data_path, header=0)
x_data = df.drop('category', 1)
y = df.category.as_matrix()

# Impute missing values with mean values:
x_complete = x_data.fillna(x_data.mean())
X_raw = x_complete.as_matrix()

# Scale the data between 0 and 1:
X = MinMaxScaler().fit_transform(X_raw)

# Shuffle data to remove any underlying pattern that may exist:
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, y = X[shuffle], y[shuffle]

# Separate training, dev, and test data:
test_data, test_labels = X[800000:], y[800000:]
dev_data, dev_labels = X[700000:800000], y[700000:800000]
train_data, train_labels = X[:700000], y[:700000]

mini_train_data, mini_train_labels = X[:75000], y[:75000]
mini_dev_data, mini_dev_labels = X[75000:100000], y[75000:100000]
labels_set = set(mini_dev_labels)
print(labels_set)
print(len(labels_set))


{'VEHICLE THEFT', 'LIQUOR LAWS', 'LARCENY/THEFT', 'SUSPICIOUS OCC', 'WARRANTS', 'VANDALISM', 'SEX OFFENSES FORCIBLE', 'ARSON', 'GAMBLING', 'TRESPASS', 'PROSTITUTION', 'DRUNKENNESS', 'LOITERING', 'MISSING PERSON', 'BURGLARY', 'SECONDARY CODES', 'FAMILY OFFENSES', 'FRAUD', 'EMBEZZLEMENT', 'EXTORTION', 'KIDNAPPING', 'RECOVERED VEHICLE', 'NON-CRIMINAL', 'RUNAWAY', 'BAD CHECKS', 'WEAPON LAWS', 'ASSAULT', 'SUICIDE', 'DRUG/NARCOTIC', 'STOLEN PROPERTY', 'SEX OFFENSES NON FORCIBLE', 'FORGERY/COUNTERFEITING', 'BRIBERY', 'OTHER OFFENSES', 'ROBBERY', 'DRIVING UNDER THE INFLUENCE', 'DISORDERLY CONDUCT'}
37

Loading the data, version 2, with weather features to improve performance:

We seek to add features to our models that will improve performance with respect to our desired performance metric. There is evidence of a correlation between weather patterns and crime, with some experts even arguing for a causal relationship between the two [1]. More specifically, a 2013 paper published in Science showed that higher temperatures and extreme rainfall led to large increases in conflict. Given this evidence that weather influences crime, weather variables are candidates for additional features to improve the performance of our classifiers. Weather data was gathered from (insert source). Selected features from that data set were joined to the original crime data set on the hypothesis that they would improve performance: hourly dry bulb temperature, relative humidity, dew point temperature, wind speed, sea level pressure, and visibility.


In [41]:
data_path = "./data/train_transformed.csv"

df = pd.read_csv(data_path, header=0)
x_data = df.drop('category', 1)
y = df.category.as_matrix()

########## Adding the date back into the data
import csv
import time
import calendar
data_path = "./data/train.csv"
dataCSV = open(data_path, 'rt')
csvData = list(csv.reader(dataCSV))
csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']
allData = csvData[1:]
dataCSV.close()

df2 = pd.DataFrame(allData)
df2.columns = csvFields
dates = df2['Dates']
dates = dates.apply(time.strptime, args=("%Y-%m-%d %H:%M:%S",))
dates = dates.apply(calendar.timegm)
print(dates.head())
#dates = pd.to_datetime(dates)

x_data['secondsFromEpoch'] = dates
colnames = x_data.columns.tolist()
colnames = colnames[-1:] + colnames[:-1]
x_data = x_data[colnames]
##########

########## Adding the weather data into the original crime data
weatherData1 = "./data/1027175.csv"
weatherData2 = "./data/1027176.csv"
dataCSV = open(weatherData1, 'rt')
csvData = list(csv.reader(dataCSV))
csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']
allWeatherData1 = csvData[1:]
dataCSV.close()

dataCSV = open(weatherData2, 'rt')
csvData = list(csv.reader(dataCSV))
csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']
allWeatherData2 = csvData[1:]
dataCSV.close()

weatherDF1 = pd.DataFrame(allWeatherData1)
weatherDF1.columns = csvFields
dates1 = weatherDF1['DATE']

weatherDF2 = pd.DataFrame(allWeatherData2)
weatherDF2.columns = csvFields
dates2 = weatherDF2['DATE']

dates1 = dates1.apply(time.strptime, args=("%Y-%m-%d %H:%M",))
dates1 = dates1.apply(calendar.timegm)
dates2 = dates2.apply(time.strptime, args=("%Y-%m-%d %H:%M",))
dates2 = dates2.apply(calendar.timegm)
weatherDF1['DATE'] = dates1
weatherDF2['DATE'] = dates2
weatherDF = pd.concat([weatherDF1,weatherDF2[32:]],ignore_index=True)

# Starting off with some of the easier features to work with-- more to come here . . . still in beta
weatherMetrics = weatherDF[['DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYDewPointTempF', 'HOURLYWindSpeed', 'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY']]
# Coerce the weather columns to numeric (convert_objects is deprecated; unparseable values become NaN):
weatherMetrics = weatherMetrics.apply(pd.to_numeric, errors='coerce')
weatherDates = weatherMetrics['DATE']
#'DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYDewPointTempF', 'HOURLYWindSpeed',
#'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY'
timeWindow = 10800 #3 hours
hourlyDryBulbTemp = []
hourlyRelativeHumidity = []
hourlyDewPointTemp = []
hourlyWindSpeed = []
hourlySeaLevelPressure = []
hourlyVisibility = []
test = 0
for timePoint in dates:#dates is the epoch time from the kaggle data
    relevantWeather = weatherMetrics[(weatherDates <= timePoint) & (weatherDates > timePoint - timeWindow)]
    hourlyDryBulbTemp.append(relevantWeather['HOURLYDRYBULBTEMPF'].mean())
    hourlyRelativeHumidity.append(relevantWeather['HOURLYRelativeHumidity'].mean())
    hourlyDewPointTemp.append(relevantWeather['HOURLYDewPointTempF'].mean())
    hourlyWindSpeed.append(relevantWeather['HOURLYWindSpeed'].mean())
    hourlySeaLevelPressure.append(relevantWeather['HOURLYSeaLevelPressure'].mean())
    hourlyVisibility.append(relevantWeather['HOURLYVISIBILITY'].mean())
    if test%100000 == 0:
        print(relevantWeather)
    test += 1

hourlyDryBulbTemp = pd.Series(np.array(hourlyDryBulbTemp))
hourlyRelativeHumidity = pd.Series(np.array(hourlyRelativeHumidity))
hourlyDewPointTemp = pd.Series(np.array(hourlyDewPointTemp))
hourlyWindSpeed = pd.Series(np.array(hourlyWindSpeed))
hourlySeaLevelPressure = pd.Series(np.array(hourlySeaLevelPressure))
hourlyVisibility = pd.Series(np.array(hourlyVisibility))

x_data['HOURLYDRYBULBTEMPF'] = hourlyDryBulbTemp
x_data['HOURLYRelativeHumidity'] = hourlyRelativeHumidity
x_data['HOURLYDewPointTempF'] = hourlyDewPointTemp
x_data['HOURLYWindSpeed'] = hourlyWindSpeed
x_data['HOURLYSeaLevelPressure'] = hourlySeaLevelPressure
x_data['HOURLYVISIBILITY'] = hourlyVisibility

#x_data.to_csv(path_or_buf="C:/MIDS/W207 final project/x_data.csv")
##########

# Impute missing values with mean values:
x_complete = x_data.fillna(x_data.mean())
X_raw = x_complete.as_matrix()

# Scale the data between 0 and 1:
X = MinMaxScaler().fit_transform(X_raw)

# Shuffle data to remove any underlying pattern that may exist:
shuffle = np.random.permutation(np.arange(X.shape[0]))
#X, y = X[shuffle], y[shuffle]

# Separate training, dev, and test data:
#test_data, test_labels = X[800000:], y[800000:]
#dev_data, dev_labels = X[700000:800000], y[700000:800000]
#train_data, train_labels = X[:700000], y[:700000]

#mini_train_data, mini_train_labels = X[:75000], y[:75000]
#mini_dev_data, mini_dev_labels = X[75000:100000], y[75000:100000]

#print(train_data[:10])


---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-41-a54b7155da7b> in <module>()
     10 import calendar
     11 data_path = "./data/train.csv"
---> 12 dataCSV = open(data_path, 'rt')
     13 csvData = list(csv.reader(dataCSV))
     14 csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']

FileNotFoundError: [Errno 2] No such file or directory: './data/train.csv'

Formatting to meet Kaggle submission standards


In [42]:
# The Kaggle submission format requires listing the ID of each example.
# This is to remember the order of the IDs after shuffling
allIDs = np.array(list(df.axes[0]))
allIDs = allIDs[shuffle]

testIDs = allIDs[800000:]
devIDs = allIDs[700000:800000]
trainIDs = allIDs[:700000]

# Extract the column names for the required submission format
sampleSubmission_path = "./data/sampleSubmission.csv"
sampleDF = pd.read_csv(sampleSubmission_path)
allColumns = list(sampleDF.columns)
featureColumns = allColumns[1:]

# Extracting the test data for a baseline submission
real_test_path = "./data/test_transformed.csv"
testDF = pd.read_csv(real_test_path, header=0)
real_test_data = testDF

test_complete = real_test_data.fillna(real_test_data.mean())
Test_raw = test_complete.as_matrix()

TestData = MinMaxScaler().fit_transform(Test_raw)

# Here we remember the ID of each test data point, in case we ever decide to shuffle the test data for some reason
testIDs = list(testDF.axes[0])

Generate baseline prediction probabilities from MNB classifier and store in a .csv file


In [43]:
# Generate a baseline MNB classifier and make it return prediction probabilities for the actual test data
def MNB():
    mnb = MultinomialNB(alpha = 0.0000001)
    mnb.fit(train_data, train_labels)
    #print("\n\nMultinomialNB accuracy on dev data:", mnb.score(dev_data, dev_labels))
    # Predict on the imputed, scaled test data (TestData) so it matches the training-data preprocessing:
    return mnb.predict_proba(TestData)

baselinePredictionProbabilities = MNB()

# Place the resulting prediction probabilities in a .csv file in the required format
# First, turn the prediction probabilties into a data frame
resultDF = pd.DataFrame(baselinePredictionProbabilities,columns=featureColumns)
# Add the IDs as a final column
resultDF.loc[:,'Id'] = pd.Series(testIDs,index=resultDF.index)
# Make the 'Id' column the first column
colnames = resultDF.columns.tolist()
colnames = colnames[-1:] + colnames[:-1]
resultDF = resultDF[colnames]
# Output to a .csv file
resultDF.to_csv('result.csv',index=False)

Note: because the shuffle draws from the global NumPy random state, re-running the data-loading cells within a session produces a different split, so model accuracies will vary accordingly.


In [44]:
## Data sub-setting quality check-point
print(train_data[:1])
print(train_labels[:1])


[[  7.391e-01   6.667e-01   6.117e-02   9.047e-04   1.000e+00   0.000e+00
    0.000e+00   0.000e+00   1.000e+00   0.000e+00   0.000e+00   0.000e+00
    0.000e+00   0.000e+00   0.000e+00]]
['WEAPON LAWS']

In [45]:
# Modeling quality check-point with MNB--fast model

def MNB():
    mnb = MultinomialNB(alpha = 0.0000001)
    mnb.fit(train_data, train_labels)
    print("\n\nMultinomialNB accuracy on dev data:", mnb.score(dev_data, dev_labels))
    
MNB()



MultinomialNB accuracy on dev data: 0.22314

Defining Performance Criteria

As determined by the Kaggle submission guidelines, the performance criteria metric for the San Francisco Crime Classification competition is Multi-class Logarithmic Loss (also known as cross-entropy). There are various other performance metrics that are appropriate for different domains: accuracy, F-score, Lift, ROC Area, average precision, precision/recall break-even point, and squared error.

Each metric below is summarized briefly, along with a domain in which it is commonly preferred.

  • Multi-class Log Loss: the average negative log of the probability the model assigns to each example's true class. It rewards well-calibrated probability estimates and heavily penalizes confident wrong predictions, which is why probability-scored competitions like this one use it (see the short example after this list).

  • Accuracy: the fraction of predictions that exactly match the true label. It is intuitive and easy to report, but it ignores the predicted probabilities and can be misleading on imbalanced classes.

  • F-score: the harmonic mean of precision and recall. It is preferred in retrieval and detection settings where both false positives and false negatives matter.

  • Lift: how much more often the positive class appears in the model's top-ranked segment than in the population overall. It is common in marketing and customer-targeting applications.

  • ROC Area: the probability that a randomly chosen positive example is ranked above a randomly chosen negative one. It is useful for comparing rankers independently of any decision threshold.

  • Average precision: the area under the precision-recall curve. It is preferred over ROC area when the positive class is rare.

  • Precision/Recall break-even point: the point at which precision equals recall, giving a single-number summary that is common in information retrieval.

  • Squared error: the mean squared difference between the predicted probabilities and the one-hot true labels (the Brier score for classification). Like log loss it rewards calibrated probabilities, but it penalizes confident errors less harshly.
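
To make the competition metric concrete, the snippet below computes multi-class log loss on a tiny made-up example (the class names and probabilities are purely illustrative) using the log_loss function imported above; the loss is the average negative log of the probability assigned to each example's true class.


In [ ]:
# Toy illustration of multi-class log loss: two examples, three hypothetical classes,
# with each row of predicted probabilities summing to 1.
toy_labels = ['ASSAULT', 'LARCENY/THEFT', 'VANDALISM']
toy_true = ['ASSAULT', 'LARCENY/THEFT']
toy_probs = [[0.7, 0.2, 0.1],   # confident and correct: contributes -ln(0.7)
             [0.1, 0.3, 0.6]]   # wrong class favored: contributes -ln(0.3)
print("Toy multi-class log loss:", log_loss(toy_true, toy_probs, labels=toy_labels))
# -(ln(0.7) + ln(0.3)) / 2 is approximately 0.78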

Model Prototyping

We will start our classifier and feature engineering process by looking at the performance of various classifiers with default parameter settings in predicting labels on the mini_dev_data:


In [46]:
def model_prototype(train_data, train_labels, eval_data, eval_labels):
    knn = KNeighborsClassifier(n_neighbors=5).fit(train_data, train_labels)
    bnb = BernoulliNB(alpha=1, binarize = 0.5).fit(train_data, train_labels)
    mnb = MultinomialNB().fit(train_data, train_labels)
    log_reg = LogisticRegression().fit(train_data, train_labels)
    support_vm = svm.SVC(probability=True).fit(train_data, train_labels)  # probability=True enables predict_proba
    neural_net = MLPClassifier().fit(train_data, train_labels)
    random_forest = RandomForestClassifier().fit(train_data, train_labels)
    decision_tree = DecisionTreeClassifier().fit(train_data, train_labels)
    
    models = [knn, bnb, mnb, log_reg, support_vm, neural_net, random_forest, decision_tree]
    for model in models:
        eval_prediction_probabilities = model.predict_proba(eval_data)
        # The columns of predict_proba correspond to model.classes_, so those are the labels log_loss needs:
        crime_labels = list(model.classes_)
        print(model, "Multi-class Log Loss:",
              log_loss(y_true=eval_labels, y_pred=eval_prediction_probabilities, labels=crime_labels), "\n\n")

# model_prototype(mini_train_data, mini_train_labels, mini_dev_data, mini_dev_labels)

Adding Features, Hyperparameter Tuning, and Model Calibration To Improve Prediction For Each Classifier

Here we seek to optimize the performance of our classifiers in a three-step, dynamic engineering process.

1) Feature addition

We previously added components from the weather data into the original SF crime data as new features. Here, we will incorporate those features into our classifiers to determine whether or not they improve performance as hypothesized.

2) Hyperparameter tuning

Each classifier has parameters that we can engineer to further optimize performance, as opposed to using the default parameter values as we did above in the model prototyping cell. (Will likely use pipeline here in future)

3) Model calibration

We can calibrate the models via Platt Scaling or Isotonic Regression to attempt to improve their performance.

  • Platt Scaling: fits a logistic (sigmoid) function that maps the classifier's scores to calibrated probabilities, using held-out data; it works well when the score distortion is roughly sigmoid-shaped and when relatively little calibration data is available.

  • Isotonic Regression: fits a non-parametric, non-decreasing step function from scores to probabilities; it is more flexible than Platt Scaling but requires more calibration data to avoid overfitting.

Feature Addition

(Decide whether to do this as one big step, or to engineer the addition of individual features for individual classifiers. Can likely do all feature addition here, continuing on the work done with the weather data import completed in earlier cells)

K-Nearest Neighbors

Feature addition:
Hyperparameter tuning:

For the KNN classifier, we can seek to optimize the following classifier parameters: n_neighbors, weights, and the power parameter p.


In [ ]:
# def k_neighbors_tuned():
    
    k_value_tuning = [i for i in range(1,100,2)]
    weight_tuning = ['uniform', 'distance']
    power_parameter_tuning = [1,2]
    for k in k_value_tuning:
        for w in weight_tuning:
            for p in power_parameter_tuning:
                tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(train_data, train_labels)
                dev_preds = tuned_KNN.predict(dev_data)
                set_dev_preds = set(dev_preds)
                crime_labels_tuned = list(set_dev_preds)
                dev_prediction_probabilities = tuned_KNN.predict_proba(dev_data)
                print(tuned_KNN, "Multi-class Log Loss:", log_loss(y_true = dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_tuned), "\n\n")

# k_neighbors_tuned()
Model calibration:

We will consider embedding this step within the for loop for the hyperparameter tuning. More likely, we will pipeline it along with the hyperparameter tuning steps. We will then use GridSearchCV to find the optimized parameters based on our performance metric of Multi-class Log Loss, as sketched below.
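
As a minimal sketch of that plan (the parameter values are an illustrative subset, the scorer assumes every crime category appears in each cross-validation fold, and the calibration wrapper could later be added as a pipeline step), GridSearchCV can score candidate KNN settings directly on negated multi-class log loss:


In [ ]:
from sklearn.metrics import make_scorer

# Negated log loss as a scorer, since GridSearchCV maximizes its score:
log_loss_scorer = make_scorer(log_loss, greater_is_better=False, needs_proba=True)

knn_param_grid = {'n_neighbors': [5, 15, 31, 51],       # illustrative subset of the 1..99 range above
                  'weights': ['uniform', 'distance'],
                  'p': [1, 2]}
knn_grid = GridSearchCV(KNeighborsClassifier(), param_grid=knn_param_grid,
                        scoring=log_loss_scorer, cv=3)
# knn_grid.fit(mini_train_data, mini_train_labels)
# print("Best KNN parameters:", knn_grid.best_params_)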


In [ ]:
def k_calibrated():
    # Here we will calibrate the KNN classifier with both Platt Scaling and Isotonic Regression using
    # CalibratedClassifierCV. The "method" parameter can be set to "sigmoid" or to "isotonic",
    # corresponding to Platt Scaling and to Isotonic Regression respectively.
    # (The KNN settings and the use of the mini data sets below are illustrative placeholders;
    # we will likely embed this step within the hyperparameter tuning loop, or pipeline it
    # along with the hyperparameter tuning steps.)
    methods = ['sigmoid', 'isotonic']
    for m in methods:
        CCV_for_KNN = CalibratedClassifierCV(KNeighborsClassifier(n_neighbors=5), method=m, cv=3)
        CCV_for_KNN.fit(mini_train_data, mini_train_labels)
        ccv_probabilities = CCV_for_KNN.predict_proba(mini_dev_data)
        print(m, "Multi-class Log Loss:",
              log_loss(y_true=mini_dev_labels, y_pred=ccv_probabilities, labels=CCV_for_KNN.classes_))

# k_calibrated()

Multinomial, Bernoulli, and Gaussian Naive Bayes


In [ ]:
def GNB():
    gnb = GaussianNB()
    gnb.fit(train_data, train_labels)
    print("GaussianNB accuracy on dev data:", 
          gnb.score(dev_data, dev_labels))
    
    # Gaussian Naive Bayes assumes roughly normally distributed features. Sometimes
    # adding noise can improve performance by making the data more normal:
    train_data_noise = np.random.rand(train_data.shape[0], train_data.shape[1])
    modified_train_data = np.multiply(train_data, train_data_noise)
    gnb_noise = GaussianNB()
    gnb_noise.fit(modified_train_data, train_labels)
    print("GaussianNB accuracy with added noise:", 
          gnb_noise.score(dev_data, dev_labels))
    
# Going slightly deeper with hyperparameter tuning and model calibration:
def BNB(alphas):
    
    bnb_one = BernoulliNB(binarize = 0.5)
    bnb_one.fit(train_data, train_labels)
    print("\n\nBernoulli Naive Bayes accuracy when alpha = 1 (the default value):",
          bnb_one.score(dev_data, dev_labels))
    
    bnb_zero = BernoulliNB(binarize = 0.5, alpha=0)
    bnb_zero.fit(train_data, train_labels)
    print("BNB accuracy when alpha = 0:", bnb_zero.score(dev_data, dev_labels))
    
    bnb = BernoulliNB(binarize=0.5)
    clf = GridSearchCV(bnb, param_grid = alphas)
    clf.fit(train_data, train_labels)
    print("Best parameter for BNB on the dev data:", clf.best_params_)
    
    clf_tuned = BernoulliNB(binarize = 0.5, alpha=0.00000000000000000000001)
    clf_tuned.fit(train_data, train_labels)
    print("Accuracy using the tuned Laplace smoothing parameter:", 
          clf_tuned.score(dev_data, dev_labels), "\n\n")
    

def investigate_model_calibration(buckets, correct, total):
    clf_tuned = BernoulliNB(binarize = 0.5, alpha=0.00000000000000000000001)
    clf_tuned.fit(train_data, train_labels)
    
    # Establish data sets
    pred_probs = clf_tuned.predict_proba(dev_data)
    max_pred_probs = np.array(pred_probs.max(axis=1))
    preds = clf_tuned.predict(dev_data)
        
    # For each bucket, look at the predictions that the model yields. 
    # Keep track of total & correct predictions within each bucket.
    bucket_bottom = 0
    bucket_top = 0
    for bucket_index, bucket in enumerate(buckets):
        bucket_top = bucket
        for pred_index, pred in enumerate(preds):
            if (max_pred_probs[pred_index] <= bucket_top) and (max_pred_probs[pred_index] > bucket_bottom):
                total[bucket_index] += 1
                if preds[pred_index] == dev_labels[pred_index]:
                    correct[bucket_index] += 1
        bucket_bottom = bucket_top

def MNB():
    mnb = MultinomialNB(alpha = 0.0000001)
    mnb.fit(train_data, train_labels)
    print("\n\nMultinomialNB accuracy on dev data:", mnb.score(dev_data, dev_labels))

alphas = {'alpha': [0.00000000000000000000001, 0.0000001, 0.0001, 0.001, 
                    0.01, 0.1, 0.0, 0.5, 1.0, 2.0, 10.0]}
buckets = [0.5, 0.9, 0.99, 0.999, .9999, 0.99999, 1.0]
correct = [0 for i in buckets]
total = [0 for i in buckets]

MNB()
GNB()
BNB(alphas)
investigate_model_calibration(buckets, correct, total)

for i in range(len(buckets)):
   accuracy = 0.0
   if (total[i] > 0): accuracy = correct[i] / total[i]
   print('p(pred) <= %.13f    total = %3d    accuracy = %.3f' %(buckets[i], total[i], accuracy))

The Bernoulli Naive Bayes and Multinomial Naive Bayes models can predict the crime category with XXX% accuracy.

Hyperparameter tuning:

We will prune the work above and seek to optimize the alpha parameter (the Laplace smoothing parameter) for the MNB and BNB classifiers.

Model calibration:

Here we will calibrate the MNB, BNB, and GNB classifiers with both Platt Scaling and Isotonic Regression using CalibratedClassifierCV with various parameter settings. The "method" parameter can be set to "sigmoid" or to "isotonic", corresponding to Platt Scaling and Isotonic Regression respectively. We will likely embed this step within the for loop for the hyperparameter tuning, or pipeline it along with the hyperparameter tuning steps. We will then use GridSearchCV to find the optimized parameters based on our performance metric of Multi-class Log Loss.
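
A minimal sketch of that calibration step for one of the Naive Bayes models is below; the alpha value, the use of the mini data sets, and cv=3 are illustrative choices, and all crime categories are assumed to appear in the mini training set:


In [ ]:
# Calibrate MNB with Platt Scaling ('sigmoid') and Isotonic Regression ('isotonic'):
for calibration_method in ['sigmoid', 'isotonic']:
    calibrated_mnb = CalibratedClassifierCV(MultinomialNB(alpha=0.0000001),
                                            method=calibration_method, cv=3)
    calibrated_mnb.fit(mini_train_data, mini_train_labels)
    calibrated_probs = calibrated_mnb.predict_proba(mini_dev_data)
    print(calibration_method, "calibrated MNB Multi-class Log Loss:",
          log_loss(mini_dev_labels, calibrated_probs, labels=calibrated_mnb.classes_))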

The remaining model calibration sections follow the same outline, so the details are not repeated below.

Logistic Regression

Hyperparameter tuning:

For the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), and solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag').
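
A hedged sketch of what that search might look like is below; the C values are illustrative, and the grid is split into two dictionaries because, among the listed solvers, only 'liblinear' supports the l1 penalty:


In [ ]:
from sklearn.metrics import make_scorer

log_loss_scorer = make_scorer(log_loss, greater_is_better=False, needs_proba=True)

# Illustrative (untuned) logistic regression grid:
logreg_param_grid = [{'penalty': ['l2'], 'C': [0.01, 0.1, 1.0, 10.0],
                      'solver': ['newton-cg', 'lbfgs', 'sag']},
                     {'penalty': ['l1', 'l2'], 'C': [0.01, 0.1, 1.0, 10.0],
                      'solver': ['liblinear']}]
logreg_grid = GridSearchCV(LogisticRegression(), param_grid=logreg_param_grid,
                           scoring=log_loss_scorer, cv=3)
# logreg_grid.fit(mini_train_data, mini_train_labels)
# print("Best logistic regression parameters:", logreg_grid.best_params_)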

Model calibration:

See above

Decision Tree

Hyperparameter tuning:

For the Decision Tree classifier, we can seek to optimize the following classifier parameters: min_samples_leaf (the minimum number of samples required to be at a leaf node) and max_depth.

From readings, setting min_samples_leaf to approximately 1% of the data points can stop the tree from inappropriately classifying outliers, which can help to improve accuracy (it is unclear whether this significantly improves multi-class log loss).
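
A rough sketch of that heuristic is below; the 1% figure and max_depth value are illustrative rather than tuned, and every crime category is assumed to appear in the mini training set:


In [ ]:
# Set min_samples_leaf to roughly 1% of the training examples:
one_percent_leaf = int(0.01 * mini_train_data.shape[0])
dt = DecisionTreeClassifier(min_samples_leaf=one_percent_leaf, max_depth=10)
dt.fit(mini_train_data, mini_train_labels)
dt_probs = dt.predict_proba(mini_dev_data)
print("Decision tree Multi-class Log Loss:",
      log_loss(mini_dev_labels, dt_probs, labels=dt.classes_))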

Model calibration:

See above

Support Vector Machines

Hyperparameter tuning:

For the SVM classifier, we can seek to optimize the following classifier parameters: C (the penalty parameter of the error term) and kernel ('linear', 'poly', 'rbf', 'sigmoid', or 'precomputed').

See source [2] for parameter optimization in SVM
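
A hedged sketch of that tuning loop is below; the C and kernel values are illustrative, SVC needs probability=True to produce the class probabilities that multi-class log loss requires, and the fit loop is left commented out because SVC training is slow on a data set of this size:


In [ ]:
# Illustrative (untuned) SVM settings:
svm_candidates = [svm.SVC(C=c, kernel=k, probability=True)
                  for c in [0.1, 1.0] for k in ['linear', 'rbf']]
# for candidate in svm_candidates:
#     candidate.fit(mini_train_data, mini_train_labels)
#     svm_probs = candidate.predict_proba(mini_dev_data)
#     print(candidate.C, candidate.kernel, "Multi-class Log Loss:",
#           log_loss(mini_dev_labels, svm_probs, labels=candidate.classes_))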

Model calibration:

See above

Neural Nets

Hyperparameter tuning:

For the Neural Networks MLP classifier, we can seek to optimize the following classifier parameters: hidden_layer_sizes, activation ('identity', 'logistic', 'tanh', 'relu'), solver ('lbfgs', 'sgd', 'adam'), alpha, and learning_rate ('constant', 'invscaling', 'adaptive').
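
One illustrative configuration is sketched below; the layer size, alpha, and learning-rate settings are assumptions to be tuned rather than final values, and the fit is commented out because MLP training is slow:


In [ ]:
# A single illustrative MLP configuration (learning_rate only matters for solver='sgd'):
mlp = MLPClassifier(hidden_layer_sizes=(50,), activation='relu', solver='adam',
                    alpha=0.0001, learning_rate='constant')
# mlp.fit(mini_train_data, mini_train_labels)
# mlp_probs = mlp.predict_proba(mini_dev_data)
# print("MLP Multi-class Log Loss:",
#       log_loss(mini_dev_labels, mlp_probs, labels=mlp.classes_))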

Model calibration:

See above

Random Forest

Hyperparameter tuning:

For the Random Forest classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of trees in the forest), max_features, max_depth, min_samples_leaf, bootstrap (whether or not bootstrap samples are used when building trees), and oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy).
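
A hedged sketch of that search is below; the grid values are illustrative rather than tuned, and the scorer assumes every crime category appears in each cross-validation fold:


In [ ]:
from sklearn.metrics import make_scorer

# Illustrative (untuned) random forest grid, scored on negated multi-class log loss:
rf_param_grid = {'n_estimators': [10, 50, 100],
                 'max_features': ['sqrt', 'log2'],
                 'max_depth': [None, 10, 20],
                 'min_samples_leaf': [1, 5, 10]}
rf_grid = GridSearchCV(RandomForestClassifier(bootstrap=True),
                       param_grid=rf_param_grid,
                       scoring=make_scorer(log_loss, greater_is_better=False, needs_proba=True),
                       cv=3)
# rf_grid.fit(mini_train_data, mini_train_labels)
# print("Best random forest parameters:", rf_grid.best_params_)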

Model calibration:

See above

Meta-estimators

AdaBoost Classifier

Hyperparameter tuning:

There are no major changes that we seek to make in the AdaBoostClassifier with respect to default parameter values.

Adaboosting each classifier:

We will run the AdaBoostClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.
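
A minimal sketch of that idea is below; the shallow decision tree is just a placeholder for whichever tuned classifier we end up boosting (the base estimator must support sample weights), and the parameter values are illustrative:


In [ ]:
# AdaBoost around a placeholder base estimator:
ada = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=3), n_estimators=50)
# ada.fit(mini_train_data, mini_train_labels)
# ada_probs = ada.predict_proba(mini_dev_data)
# print("AdaBoost Multi-class Log Loss:",
#       log_loss(mini_dev_labels, ada_probs, labels=ada.classes_))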

Bagging Classifier

Hyperparameter tuning:

For the Bagging meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of base estimators in the ensemble), max_samples, max_features, bootstrap (whether or not bootstrap samples are used when building estimators), bootstrap_features (whether features are drawn with replacement), and oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy).

Bagging each classifier:

We will run the BaggingClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.
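
A minimal sketch of that idea is below; logistic regression is just a placeholder base estimator, and the sampling fractions are illustrative (note that oob_score=True requires bootstrap=True):


In [ ]:
# Bagging around a placeholder base estimator:
bagged_lr = BaggingClassifier(base_estimator=LogisticRegression(),
                              n_estimators=10, max_samples=0.5, max_features=0.8,
                              bootstrap=True, bootstrap_features=False, oob_score=True)
# bagged_lr.fit(mini_train_data, mini_train_labels)
# bag_probs = bagged_lr.predict_proba(mini_dev_data)
# print("Bagging Multi-class Log Loss:",
#       log_loss(mini_dev_labels, bag_probs, labels=bagged_lr.classes_))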

Gradient Boosting Classifier

Hyperparameter tuning:

For the Gradient Boosting meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of boosting stages), max_depth, min_samples_leaf, and max_features.

Gradient Boosting each classifier:

We will run the GradientBoostingClassifier with loss = 'deviance' (as loss = 'exponential' uses the AdaBoost algorithm) on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.
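
A minimal sketch with illustrative (untuned) values is below; note that, unlike AdaBoost and Bagging, GradientBoostingClassifier builds its own ensemble of regression trees rather than wrapping an arbitrary base classifier:


In [ ]:
# Gradient boosting with the deviance (log loss) objective and illustrative settings:
gbc = GradientBoostingClassifier(loss='deviance', n_estimators=100,
                                 max_depth=3, min_samples_leaf=5, max_features='sqrt')
# gbc.fit(mini_train_data, mini_train_labels)
# gbc_probs = gbc.predict_proba(mini_dev_data)
# print("Gradient boosting Multi-class Log Loss:",
#       log_loss(mini_dev_labels, gbc_probs, labels=gbc.classes_))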

Final evaluation on test data


In [ ]:
# Here we will likely use Pipeline and GridSearchCV in order to find the overall classifier with optimized Multi-class Log Loss.
# This will be the last step after all attempts at feature addition, hyperparameter tuning, and calibration are completed
# and the corresponding performance metrics are gathered.
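# A possible shape for that final step (a sketch only): the scaler, the classifier, and the
# parameter grid below are placeholders for whichever model and settings win the earlier comparisons.
from sklearn.pipeline import Pipeline
from sklearn.metrics import make_scorer

final_pipeline = Pipeline([('scale', MinMaxScaler()),
                           ('clf', LogisticRegression())])
final_grid = GridSearchCV(final_pipeline,
                          param_grid={'clf__C': [0.1, 1.0, 10.0]},
                          scoring=make_scorer(log_loss, greater_is_better=False, needs_proba=True),
                          cv=3)
# final_grid.fit(train_data, train_labels)
# best_clf = final_grid.best_estimator_.named_steps['clf']
# print("Test-set Multi-class Log Loss:",
#       log_loss(test_labels, final_grid.predict_proba(test_data), labels=best_clf.classes_))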

References

1) Hsiang, Solomon M., Burke, Marshall, and Miguel, Edward. "Quantifying the Influence of Climate on Human Conflict". Science, Vol. 341, Issue 6151, 2013.

2) Huang, Cheng-Lung and Wang, Chieh-Jen. "A GA-based feature selection and parameters optimization for support vector machines". Expert Systems with Applications, Vol. 31, 2006, pp. 231-240.

3) More to come


In [ ]: