Random forests are an ensemble machine learning technique in which many decision trees are built and their predictions are combined, by averaging or by majority vote, into a final prediction. Each tree is trained with some stochasticity (a bootstrap sample of the rows and a random subset of features at each split), which decreases the variance of the ensemble at the cost of a slight increase in bias.
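The voting idea is easy to sketch by hand. The snippet below is a minimal illustration of the mechanism, not scikit-learn's actual implementation; the function name `majority_vote_forest` is made up here, and `X_train`, `y_train`, `X_test` are assumed to be NumPy arrays:
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def majority_vote_forest(X_train, y_train, X_test, n_trees=25, seed=0):
    #toy bagged ensemble: each tree sees a bootstrap sample, the forest takes a majority vote
    rng = np.random.RandomState(seed)
    votes = []
    for _ in range(n_trees):
        idx = rng.randint(0, len(X_train), len(X_train))    #bootstrap sample: rows drawn with replacement
        tree = DecisionTreeClassifier(max_features='sqrt')  #random feature subset considered at each split
        tree.fit(X_train[idx], y_train[idx])
        votes.append(tree.predict(X_test))
    votes = np.array(votes)                                 #shape (n_trees, n_test_points)
    def majority(column):
        labels, counts = np.unique(column, return_counts=True)
        return labels[np.argmax(counts)]
    return np.array([majority(col) for col in votes.T])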
We do basic feature extraction in which we keep only 10 of the 39 features and convert the categorical ones to a 1-of-k (one-hot) encoding. Even with this basic feature extraction we get a prediction accuracy of 79%.
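To make the 1-of-k encoding concrete, here is a toy example of what pd.get_dummies produces (the column values below are made up for illustration):
import pandas as pd

toy = pd.DataFrame({'quantity': ['enough', 'dry', 'enough', 'insufficient']})
pd.get_dummies(toy['quantity'])   #three 0/1 indicator columns: 'dry', 'enough', 'insufficient'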
By including more features and performing some more advanced feature engineering, we can reach prediction accuracies up to 83% (not shown in this notebook).
In [1]:
from sklearn.tree import DecisionTreeClassifier as Tree
from sklearn.ensemble import RandomForestClassifier as Forest
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
%matplotlib qt
In [4]:
train_data = pd.read_csv("WaterPump-training-values.csv")
train_labels = pd.read_csv("WaterPump-training-labels.csv")
N = train_data.shape[0]
In [23]:
#picking features that we want to keep
features = ['longitude','latitude','gps_height','population','construction_year','water_quality','quantity','region_code',
'source','waterpoint_type']
train = train_data[features]
#converting categorical features to a 1-of-k representation
train1 = pd.concat([train, pd.get_dummies(train['water_quality']), pd.get_dummies(train['quantity']),
pd.get_dummies(train['source']), pd.get_dummies(train['waterpoint_type'])], axis=1)
#dropping the original categorical columns now that they are encoded (region_code is dropped here too)
train1 = train1.drop(['water_quality','quantity','region_code', 'source', 'waterpoint_type'], axis=1)
In [24]:
#holding out ~10% of the rows to estimate accuracy on unseen data
mask = np.random.uniform(0, 1, len(train1)) <= 0.9
train = train1[mask]
trainLabels = train_labels[mask]
test = train1[~mask]
testLabels = train_labels[~mask]
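For reference, scikit-learn ships a helper that does an equivalent split in one call; a sketch only (in older releases train_test_split lives in sklearn.cross_validation rather than sklearn.model_selection):
from sklearn.model_selection import train_test_split

train, test, trainLabels, testLabels = train_test_split(train1, train_labels, test_size=0.1, random_state=0)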
In [26]:
#training the random forest
forest = Forest(n_estimators=100,criterion='gini')
forest.fit(train,trainLabels['status_group'])
Out[26]:
In [29]:
#making predictions on the withheld data
preds = forest.predict(test)
accuracy = (preds == testLabels['status_group'].values).mean()
print(accuracy)
Above is the prediction accuracy of 79%!
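The same number can also be obtained directly from scikit-learn, using the forest, test, testLabels, and preds objects defined above:
from sklearn.metrics import accuracy_score

print(forest.score(test, testLabels['status_group']))    #mean accuracy on the held-out rows
print(accuracy_score(testLabels['status_group'], preds)) #equivalent, computed from the stored predictions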
Below we can look at which features have the most predictive power. These tend to sit nearer to the roots of the decision trees. We can also reuse this ranking when choosing and engineering features for later models!
In [33]:
#importance of each data feature that we kept
importances = list(zip(forest.feature_importances_, train.columns))
In [35]:
sorted(importances, key=lambda x: x[0], reverse=True)   #most important features first
Out[35]:
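The same ranking can be visualised with the matplotlib import from the top of the notebook; a quick bar-chart sketch:
ranked = sorted(importances, key=lambda x: x[0], reverse=True)
scores = [s for s, _ in ranked]
names = [n for _, n in ranked]
plt.figure(figsize=(8, 10))
plt.barh(range(len(scores)), scores)
plt.yticks(range(len(scores)), names)
plt.gca().invert_yaxis()   #most important feature at the top
plt.xlabel('feature importance')
plt.tight_layout()
plt.show()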