We'll be working with a dataset from Kaggle's Titanic competition (see the competition's data and data dictionary pages)
Goal: Predict survival based on passenger characteristics
The sinking of the RMS Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.
One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper class.
In this challenge, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.
Read the data into Pandas and prepare the features
In [28]:
import pandas as pd

# read the Titanic training data, indexed by PassengerId
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/titanic.csv'
titanic = pd.read_csv(url, index_col='PassengerId')

# fill missing ages with the median age
titanic['Age'] = titanic['Age'].fillna(titanic['Age'].median())

# fill missing embarkation ports with the most common port
titanic['Embarked'] = titanic['Embarked'].fillna(titanic['Embarked'].mode()[0])

# encode Sex as a binary feature (female=1, male=0)
titanic['Sex_Female'] = titanic['Sex'].map({'male': 0, 'female': 1})

# create dummy variables for the embarkation port
embarked_dummies = pd.get_dummies(titanic['Embarked'], prefix='Embarked')
titanic = pd.concat([titanic, embarked_dummies], axis=1)

# add polynomial terms for Age
titanic['Age2'] = titanic['Age'] ** 2
titanic['Age3'] = titanic['Age'] ** 3

# define X (feature matrix) and y (response)
features = ['Pclass', 'Age', 'Age2', 'Age3', 'Parch', 'SibSp', 'Sex_Female', 'Embarked_C', 'Embarked_Q', 'Embarked_S']
X = titanic[features]
y = titanic['Survived']
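Before selecting anything, it helps to have a baseline to beat. The following check is not in the original notebook; it is a minimal sketch that scores the same logistic regression on all 10 features, using the same 10-fold cross-validation as the cells below (solver='liblinear' is an assumed choice, matching the default in older scikit-learn):

from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

# baseline: cross-validated accuracy with all 10 features
logreg = LogisticRegression(C=1e9, solver='liblinear')
cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean()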
In [41]:
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

# try SelectKBest for every possible number of features (1 through 10)
# and print the cross-validated accuracy of logistic regression for each
for i in range(1, 11):
    # liblinear matches the old default solver and converges with a large C
    logreg = LogisticRegression(C=1e9, solver='liblinear')
    selection_method = SelectKBest(k=i)
    selection_method.fit(X, y)
    X_selection_method = selection_method.transform(X)
    print(i, cross_val_score(logreg, X_selection_method, y, cv=10, scoring='accuracy').mean())
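One caveat about the loop above: SelectKBest is fit on all of X before cross_val_score runs, so the selector has already seen every test fold. A stricter variant (a sketch using sklearn.pipeline.Pipeline, which the original code does not use) refits the selector inside each fold:

from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# the selector is refit on each training fold, so test folds stay unseen
pipe = Pipeline([('select', SelectKBest(k=8)),
                 ('logreg', LogisticRegression(C=1e9, solver='liblinear'))])
cross_val_score(pipe, X, y, cv=10, scoring='accuracy').mean()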
In [42]:
# refit SelectKBest, keeping the 8 best features
selection_method8 = SelectKBest(k=8)
selection_method8.fit(X, y)
# boolean mask (in column order) of which features were kept
selection_method8.get_support()
Out[42]:
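get_support() returns a boolean mask in the same order as the columns of X, so indexing the column names with it shows which two features were dropped (a small sketch, not in the original notebook):

# names of the 8 features kept by SelectKBest
X.columns[selection_method8.get_support()]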
In [43]:
# cross-validated accuracy using only the 8 selected features
X_selection_method8 = selection_method8.transform(X)
cross_val_score(logreg, X_selection_method8, y, cv=10, scoring='accuracy').mean()
Out[43]:
In [44]:
from sklearn.feature_selection import SelectPercentile, f_classif

# try SelectPercentile over a range of percentiles (1, 11, ..., 91),
# using the ANOVA F-test to rank the features
for i in range(1, 100, 10):
    logreg = LogisticRegression(C=1e9, solver='liblinear')
    selection_method = SelectPercentile(f_classif, percentile=i)
    selection_method.fit(X, y)
    X_selection_method = selection_method.transform(X)
    print(i, cross_val_score(logreg, X_selection_method, y, cv=10, scoring='accuracy').mean())
In [46]:
# refit SelectPercentile, keeping the top 80% of features (8 of 10)
selection_method8 = SelectPercentile(f_classif, percentile=80)
selection_method8.fit(X, y)
X_selection_method8 = selection_method8.transform(X)
cross_val_score(logreg, X_selection_method8, y, cv=10, scoring='accuracy').mean()
Out[46]:
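Since SelectPercentile ranks features by the ANOVA F-test (f_classif) here, the fitted selector's scores_ and pvalues_ attributes show the ranking it used. A short sketch to inspect them:

# F-scores and p-values behind the percentile ranking, highest first
pd.DataFrame({'feature': X.columns,
              'f_score': selection_method8.scores_,
              'p_value': selection_method8.pvalues_}).sort_values('f_score', ascending=False)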
In [45]:
from sklearn.feature_selection import RFE

# recursive feature elimination: repeatedly fit the model and drop the
# weakest feature until only i features remain
for i in range(1, 11):
    logreg = LogisticRegression(C=1e9, solver='liblinear')
    selection_method = RFE(estimator=logreg, n_features_to_select=i)
    selection_method.fit(X, y)
    X_selection_method = selection_method.transform(X)
    print(i, cross_val_score(logreg, X_selection_method, y, cv=10, scoring='accuracy').mean())
In [47]:
# refit RFE, keeping 9 features
selection_method9 = RFE(estimator=logreg, n_features_to_select=9)
selection_method9.fit(X, y)
X_selection_method9 = selection_method9.transform(X)
cross_val_score(logreg, X_selection_method9, y, cv=10, scoring='accuracy').mean()
Out[47]:
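A fitted RFE object records the elimination order in its ranking_ attribute: every kept feature gets rank 1, and features dropped earlier get higher numbers. A quick sketch to see which single feature RFE eliminated here:

# rank 1 = kept; rank 2 = the one feature eliminated
pd.DataFrame({'feature': X.columns, 'rank': selection_method9.ranking_}).sort_values('rank')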