Exercise 08

Feature selection exercise with Titanic data

We'll be working with a dataset from Kaggle's Titanic competition (data, data dictionary).

Goal: Predict survival based on passenger characteristics

The sinking of the RMS Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.

One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper class.

In this challenge, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.

Read the data into Pandas


In [28]:
import pandas as pd
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/titanic.csv'
titanic = pd.read_csv(url, index_col='PassengerId')

# Fill missing values: median age, most common port of embarkation
titanic['Age'] = titanic['Age'].fillna(titanic['Age'].median())
titanic['Embarked'] = titanic['Embarked'].fillna(titanic['Embarked'].mode()[0])

# Encode categorical variables: binary indicator for sex, dummy columns for Embarked
titanic['Sex_Female'] = titanic.Sex.map({'male': 0, 'female': 1})
embarked_dummies = pd.get_dummies(titanic.Embarked, prefix='Embarked')
titanic = pd.concat([titanic, embarked_dummies], axis=1)

# Polynomial age terms
titanic['Age2'] = titanic['Age'] ** 2
titanic['Age3'] = titanic['Age'] ** 3

features = ['Pclass', 'Age', 'Age2', 'Age3', 'Parch', 'SibSp', 'Sex_Female', 'Embarked_C', 'Embarked_Q', 'Embarked_S']
X = titanic[features]
y = titanic['Survived']
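
Before selecting any subset, it helps to have a baseline. A minimal sketch that scores logistic regression on all 10 features with the same 10-fold setup used in the exercises below; its result should match the k=10 row of the loop in Exercise 8.1.

In [ ]:
# Baseline: 10-fold cross-validated accuracy of logistic regression on all 10 features
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

logreg = LogisticRegression(C=1e9)
print(cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean())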

Exercise 8.1 (2 points)

Using the univariate selection method SelectKBest, what is the value of k that maximizes the accuracy of the model?


In [41]:
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

# For each k, keep the k highest-scoring features (ANOVA F-test, the SelectKBest default)
# and score logistic regression with 10-fold cross-validated accuracy
for i in range(1, 11):
    logreg = LogisticRegression(C=1e9)
    selection_method = SelectKBest(k=i)
    selection_method.fit(X, y)
    X_selection_method = selection_method.transform(X)
    print(i, cross_val_score(logreg, X_selection_method, y, cv=10, scoring='accuracy').mean())


1 0.7866981613891728
2 0.7866981613891728
3 0.7710552150720689
4 0.7710552150720689
5 0.7777970718420157
6 0.7935413119963682
7 0.8003206219498356
8 0.8036789240721826
9 0.7298334468278288
10 0.7107323232323232
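
Rather than reading the best k off the printout, the scores can be collected and the maximum taken directly. A small variant sketch, assuming X, y and the imports from the cell above:

In [ ]:
# Collect (k, mean CV accuracy) pairs and report the k with the highest accuracy
scores = {}
for i in range(1, 11):
    X_k = SelectKBest(k=i).fit_transform(X, y)
    scores[i] = cross_val_score(LogisticRegression(C=1e9), X_k, y, cv=10, scoring='accuracy').mean()
best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])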

In [42]:
from sklearn.feature_selection import SelectKBest

# Refit with the best value, k=8, and inspect which features were kept
selection_method8 = SelectKBest(k=8)
selection_method8.fit(X, y)
selection_method8.get_support()


Out[42]:
array([ True,  True,  True, False,  True,  True,  True,  True, False,  True], dtype=bool)
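
The boolean mask above follows the column order of X, so it can be mapped back to feature names. A small sketch, assuming selection_method8 and X from the cells above:

In [ ]:
# Map the SelectKBest support mask back to column names to see which 8 features were kept
selected = pd.Series(selection_method8.get_support(), index=X.columns)
print(selected[selected].index.tolist())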

In [43]:
# Cross-validated accuracy using only the 8 selected features
X_selection_method8 = selection_method8.transform(X)
cross_val_score(logreg, X_selection_method8, y, cv=10, scoring='accuracy').mean()


Out[43]:
0.8036789240721826

Exercise 8.2 (2 points)

Using the univariate selection method SelectPercentile, what is the value of percentile that maximizes the accuracy of the model?


In [44]:
from sklearn.feature_selection import SelectPercentile, f_classif

# For each percentile (1, 11, ..., 91), keep the top-scoring features (ANOVA F-test)
# and score logistic regression with 10-fold cross-validated accuracy
for i in range(1, 100, 10):
    logreg = LogisticRegression(C=1e9)
    selection_method = SelectPercentile(f_classif, percentile=i)
    selection_method.fit(X, y)
    X_selection_method = selection_method.transform(X)
    print(i, cross_val_score(logreg, X_selection_method, y, cv=10, scoring='accuracy').mean())


1 0.7866981613891728
11 0.7866981613891728
21 0.7866981613891728
31 0.7710552150720689
41 0.7710552150720689
51 0.7777970718420157
61 0.7935413119963682
71 0.8003206219498356
81 0.8036789240721826
91 0.7298334468278288

In [46]:
# Refit with percentile=80, which keeps the top 8 of the 10 features (the same set as SelectKBest(k=8))
selection_method8 = SelectPercentile(f_classif, percentile=80)
selection_method8.fit(X, y)
X_selection_method8 = selection_method8.transform(X)
cross_val_score(logreg, X_selection_method8, y, cv=10, scoring='accuracy').mean()


Out[46]:
0.8036789240721826
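
With 10 candidate features, percentile=80 keeps the top 80% of them, i.e. 8 columns, which is consistent with the accuracy being identical to the SelectKBest(k=8) result above. A quick check, assuming selection_method8 now holds the fitted SelectPercentile object from the cell above:

In [ ]:
# Confirm how many, and which, columns SelectPercentile(percentile=80) kept
mask = selection_method8.get_support()
print(mask.sum())
print(X.columns[mask].tolist())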

Exercise 8.3 (3 points)

Using the recursive feature selection method RFE, what is the value of n_features_to_select that maximizes the accuracy of the model?


In [45]:
from sklearn.feature_selection import RFE

# For each target size, RFE repeatedly drops the feature with the smallest absolute
# logistic regression coefficient until i features remain; the reduced feature set
# is then scored with 10-fold cross-validated accuracy
for i in range(1, 11):
    logreg = LogisticRegression(C=1e9)
    selection_method = RFE(estimator=logreg, n_features_to_select=i)
    selection_method.fit(X, y)
    X_selection_method = selection_method.transform(X)
    print(i, cross_val_score(logreg, X_selection_method, y, cv=10, scoring='accuracy').mean())


1 0.7866981613891728
2 0.7866981613891728
3 0.7710552150720689
4 0.7710552150720689
5 0.7845264442174554
6 0.7845264442174554
7 0.7811806264896153
8 0.7980484621495857
9 0.8036916922029281
10 0.7107323232323232
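
scikit-learn also provides RFECV, which wraps this search and chooses the number of features by cross-validation internally. A sketch of the equivalent call (its chosen n_features_ may differ slightly from the manual loop, since RFECV scores candidate subsets on its own internal CV splits):

In [ ]:
# Sketch: RFECV picks the number of features automatically via cross-validation
from sklearn.feature_selection import RFECV

rfecv = RFECV(estimator=LogisticRegression(C=1e9), cv=10, scoring='accuracy')
rfecv.fit(X, y)
print(rfecv.n_features_)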

In [47]:
# Refit RFE with the best value, n_features_to_select=9, and score it
selection_method9 = RFE(estimator=logreg, n_features_to_select=9)
selection_method9.fit(X, y)
X_selection_method9 = selection_method9.transform(X)
cross_val_score(logreg, X_selection_method9, y, cv=10, scoring='accuracy').mean()


Out[47]:
0.8036916922029281
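
To see which nine features the refitted RFE kept and how it ranked the eliminated one, a short inspection sketch, assuming selection_method9 from the cell above:

In [ ]:
# RFE exposes support_ (True for kept columns) and ranking_ (1 = kept, larger = dropped earlier)
print(X.columns[selection_method9.support_].tolist())
print(pd.Series(selection_method9.ranking_, index=X.columns).sort_values())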