The sinking of the RMS Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.
One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although some element of luck was involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper class.
In this challenge, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.
In [59]:
# Import Python Libraries
import pandas as pd
import numpy as np
from sklearn import linear_model
import matplotlib.pyplot as plt
%matplotlib inline
In [10]:
# Read the training dataset into a pandas DataFrame
titanic_train = pd.read_csv("train.csv")
In [9]:
type(titanic_train)
Out[9]:
In [20]:
# Overview of the data
titanic_train
Out[20]:
In [22]:
# Shape of the DataFrame - it contains 891 rows and 12 columns
titanic_train.shape
Out[22]:
PassengerId - Numerical - Unique, auto-incremented id for each passenger
Survived - Categorical - 0 = Didn't Survive | 1 = Survived
Pclass (Passenger Class) - Categorical - 1 = 1st | 2 = 2nd | 3 = 3rd;
Pclass serves as a proxy for socio-economic status - 1st ~ Upper; 2nd ~ Middle; 3rd ~ Lower
Name - Passenger Name
Sex - Categorical - Male | Female
Age - Numerical
SibSp (Number of Siblings/Spouses Aboard)
Sibling: Brother, Sister, Stepbrother, or Stepsister of Passenger Aboard Titanic
Spouse: Husband or Wife of Passenger Aboard Titanic (Mistresses and Fiancés Ignored)
Parch (Number of Parents/Children Aboard)
Parent: Mother or Father of Passenger Aboard Titanic
Child: Son, Daughter, Stepson, or Stepdaughter of Passenger Aboard Titanic
Ticket - Ticket Number
Fare - Passenger Fare
Cabin - Cabin Number
Embarked - Port of Embarkation - C = Cherbourg | Q = Queenstown | S = Southampton
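Several of these columns (notably Age and Cabin) contain missing values in the real dataset, so it helps to count the NAs per column before modeling. A minimal sketch on a tiny synthetic frame standing in for titanic_train (the real notebook would call this on the loaded DataFrame):

```python
import pandas as pd

# Tiny stand-in for titanic_train: Age and Cabin carry NAs,
# mirroring the columns that are sparse in the real dataset.
df = pd.DataFrame({
    "Age": [22.0, None, 30.0],
    "Cabin": [None, "C85", None],
    "Fare": [7.25, 8.05, 13.0],
})

# isnull().sum() gives a per-column count of missing entries
na_counts = df.isnull().sum()
print(na_counts)
```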
In [24]:
# The essence of the problem: predict whether a passenger aboard the Titanic survived,
# based on the various features in this training dataset. Using independent variables
# such as Pclass, Sex, and Age, our goal is to predict the dependent variable, Survived,
# which contains 0 for Didn't Survive and 1 for Survived. After building the model,
# we will check its predictions on the test dataset.
In [31]:
titanic_train["Survived"].value_counts()
Out[31]:
In [35]:
# There are certain columns in our dataset that might not be very helpful as features
# and can therefore be dropped.
titanic_train_reduced = titanic_train.drop(["Name", "Ticket", "Cabin"], axis=1)
In [68]:
# To clean the dataset, we could use methods such as multiple imputation to fill in the NA elements.
# To start, though, we can get an extremely clean dataset by dropping all rows that contain one or more NA elements.
titanic_train_cleaned = titanic_train_reduced.dropna()
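Dropping rows is not the only option: the imputation idea mentioned above can be approximated crudely by filling NAs with a summary statistic. A hedged sketch using the column median on a hypothetical mini-frame (the real notebook would apply this to titanic_train_reduced):

```python
import pandas as pd

# Hypothetical mini-frame standing in for titanic_train_reduced;
# Age has a missing value, as in the real data.
df = pd.DataFrame({"Age": [22.0, None, 30.0], "Fare": [7.25, 8.05, 13.0]})

# Simple single imputation: replace missing ages with the column median
# (26.0 here), keeping every row instead of dropping it.
df["Age"] = df["Age"].fillna(df["Age"].median())
print(df["Age"].tolist())
```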
In [70]:
# Convert the categorical Sex variable into a numerical one (male = 1, female = 0)
titanic_train_cleaned["Sex"] = titanic_train_cleaned["Sex"].apply(lambda sex: 1 if sex == "male" else 0)
titanic_train_cleaned
Out[70]:
In [71]:
# From the 891 rows in our original dataset, we are down to 712 "clean" rows
titanic_train_cleaned.shape
Out[71]:
In [72]:
titanic_train_cleaned.groupby([titanic_train_cleaned.Survived, titanic_train_cleaned.Sex]).size()
Out[72]:
In [73]:
# Percentage of women who survived (195 of the 259 women in the cleaned data)
195*100/(195+64)
Out[73]:
In [74]:
# Percentage of men who survived (93 of the 453 men in the cleaned data)
93*100/(93+360)
Out[74]:
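Rather than retyping the counts from the groupby output, the same percentages can be computed directly: the mean of a 0/1 Survived column within each Sex group is exactly that group's survival rate. A sketch on synthetic data using the same coding (Sex: 1 = male, 0 = female), standing in for titanic_train_cleaned:

```python
import pandas as pd

# Synthetic stand-in for titanic_train_cleaned with the same 0/1 coding.
df = pd.DataFrame({
    "Survived": [1, 1, 0, 0, 1, 0],
    "Sex":      [0, 0, 0, 1, 1, 1],  # 0 = female, 1 = male
})

# The mean of a 0/1 column per group is that group's survival rate.
survival_pct = df.groupby("Sex")["Survived"].mean() * 100
print(survival_pct)
```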
In [90]:
# TESTING DATASET
titanic_test = pd.read_csv("test.csv")
In [92]:
titanic_test_reduced = titanic_test.drop(["Name", "Ticket", "Cabin"], axis=1)
titanic_test_cleaned = titanic_test_reduced.dropna()
titanic_test_cleaned["Sex"] = titanic_test_cleaned["Sex"].apply(lambda sex: 1 if sex == "male" else 0)
titanic_test_cleaned
Out[92]:
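One caution: dropna() on the test set discards passengers entirely, so the model will produce no prediction for them, while a Kaggle submission needs a row for every PassengerId. A hedged sketch of median-filling instead, on a hypothetical mini-frame standing in for titanic_test_reduced:

```python
import pandas as pd

# Mini stand-in for titanic_test_reduced: one missing Age, one missing Fare.
test_df = pd.DataFrame({
    "Age":  [34.5, None, 62.0],
    "Fare": [7.83, 9.69, None],
})

# Filling with per-column medians keeps every test row,
# unlike dropna(), which would leave only one of the three.
filled = test_df.fillna(test_df.median())
print(len(filled), int(filled.isnull().sum().sum()))
```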
In [75]:
# LOGISTIC REGRESSION
In [76]:
model_1 = linear_model.LogisticRegression()
In [80]:
# NOTE: despite the name, these are the independent variables (features) used to predict Survived
model_1_dependent_vars = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]
In [84]:
titanic_train_cleaned[model_1_dependent_vars].values
Out[84]:
In [85]:
model_1.fit(titanic_train_cleaned[model_1_dependent_vars].values, titanic_train_cleaned.Survived.values)
Out[85]:
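After fitting, a quick sanity check is the model's accuracy on the data it was trained on, via score(); this is an optimistic estimate, not a substitute for the test-set evaluation. A minimal sketch on toy, linearly separable data rather than the actual cleaned Titanic features:

```python
import numpy as np
from sklearn import linear_model

# Toy 1-D, linearly separable data standing in for the cleaned Titanic features.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

model = linear_model.LogisticRegression()
model.fit(X, y)

# score() reports mean accuracy on the data passed in -- here the training
# set itself, so it overestimates performance on unseen passengers.
train_accuracy = model.score(X, y)
print(train_accuracy)
```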
In [97]:
model_1_result = model_1.predict(titanic_test_cleaned[model_1_dependent_vars])
model_1_result
Out[97]:
In [98]:
len(model_1_result)
Out[98]:
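To turn the predictions into a Kaggle submission, each prediction is paired with its PassengerId. In the notebook these would come from titanic_test_cleaned and model_1_result; the values below are hypothetical stand-ins, and because of the earlier dropna() predictions exist only for the retained test rows:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for titanic_test_cleaned.PassengerId and model_1_result.
passenger_ids = pd.Series([892, 893, 894])
predictions = np.array([0, 1, 0])

submission = pd.DataFrame({"PassengerId": passenger_ids, "Survived": predictions})
# submission.to_csv("submission.csv", index=False)  # Kaggle expects exactly these two columns
print(submission.shape)
```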