Kaggle Titanic Competition

This Jupyter Notebook examines how to use Python's scikit-learn module to create and train Decision Tree machine learning models. It does so specifically within the context of the Kaggle Titanic Competition.

kaggle.com is a website which hosts machine learning competitions where experts can win cash prizes and those new to machine learning can learn the ropes in a practical manner by exploring using various techniques to solve the same problem.

Kaggle's Titanic: Machine Learning from Disaster competition is their most popular competition for beginners and those new to data science and machine learning. It contains numerous tutorials and examples.

The particular solution presented here is based heavily on the excellent free Kaggle Python tutorial from DataCamp.

When the Titanic sank, 1502 of the 2224 passengers and crew were killed. One of the main reasons for this high level of casualties was the lack of lifeboats on this self-proclaimed "unsinkable" ship.

Those that have seen the movie know that some individuals were more likely to survive the sinking (lucky Rose) than others (poor Jack). In this example, you will learn how to apply machine learning techniques to predict a passenger's chance of surviving using Python.

Downloading the Datasets

First we can use Python's builtin os package to determine if the training and test set CSV files already exist locally.

If they do not exist, then we can use Python's urllib package to download them from DataCamp.


In [1]:
# import os and urllib
import os

# For Python 3.x the import should be urllib.request, but for Python 2.x it should just be urllib
try:
    from urllib.request import urlretrieve
except ImportError:
    from urllib import urlretrieve

In [2]:
# Make sure data directory exists
data_dir = 'data/kaggle/titanic'
if not os.path.isdir(data_dir):
    os.makedirs(data_dir)

In [3]:
# Filenames for the train and test datasets
train_csv = os.path.join(data_dir, 'train.csv')
test_csv = os.path.join(data_dir, 'test.csv')

# If the data doesn't already exist locally, then download it using urlretrieve
if not os.path.isfile(train_csv):
    train_url = "http://s3.amazonaws.com/assets.datacamp.com/course/Kaggle/train.csv"
    urlretrieve (train_url, train_csv)
if not os.path.isfile(test_csv):
    test_url = "http://s3.amazonaws.com/assets.datacamp.com/course/Kaggle/test.csv"
    urlretrieve (test_url, test_csv)

Pandas

pandas is an open source library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.

One things pandas excels at is reading data from and writing data to pretty much any common format including CSV files, SQL databases, JSON, Excel files, MATLAB, etc.

The primary data structure provided by pandas is the DataFrame, which is sort of like a hybrid between an Excel spreadsheet and a SQL database table. It is VERY powerful in terms of the capabilities provided by this class. But there is a bit of a learning curve for newcomers.

Get the data with Pandas

Let's start with loading in the training and testing set into your Python environment. You will use the training set to build your model, and the test set to validate it. The data is stored on the web as csv files; their URLs are already available as character strings in the sample code. You can load this data with the read_csv() method from the Pandas library.


In [4]:
# Import the Pandas library
import pandas as pd

In [5]:
# Load the train and test datasets from the local CSV files to create two DataFrames
train = pd.read_csv(train_csv)
test = pd.read_csv(test_csv)

In [6]:
# Inspect the first few rows of the training dataset
train.head()


Out[6]:
PassengerId Survived Pclass Name Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris male 22.0 1 0 A/5 21171 7.2500 NaN S
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th... female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 Heikkinen, Miss. Laina female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry male 35.0 0 0 373450 8.0500 NaN S

In [7]:
# Inspect the first few rows of the test dataset
test.head()


Out[7]:
PassengerId Pclass Name Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 892 3 Kelly, Mr. James male 34.5 0 0 330911 7.8292 NaN Q
1 893 3 Wilkes, Mrs. James (Ellen Needs) female 47.0 1 0 363272 7.0000 NaN S
2 894 2 Myles, Mr. Thomas Francis male 62.0 0 0 240276 9.6875 NaN Q
3 895 3 Wirz, Mr. Albert male 27.0 0 0 315154 8.6625 NaN S
4 896 3 Hirvonen, Mrs. Alexander (Helga E Lindqvist) female 22.0 1 1 3101298 12.2875 NaN S

Understanding Your Data

Before starting with the actual analysis, it's important to understand the structure of your data. Both test and train are DataFrame objects, the way pandas represent datasets. You can easily explore a DataFrame using the .describe() method. .describe() summarizes the columns/features of the DataFrame, including the count of observations, mean, max and so on. Another useful trick is to look at the dimensions of the DataFrame. This is done by requesting the .shape attribute of your DataFrame object. (ex. your_data.shape)

The training and test set are already available in the workspace, as train and test. Apply .describe() method and print the .shape attribute of the training set.


In [8]:
# DataFrame.describe() generates various various summary statistics, automatically excluding NaN or missing values
train.describe()


Out[8]:
PassengerId Survived Pclass Age SibSp Parch Fare
count 891.000000 891.000000 891.000000 714.000000 891.000000 891.000000 891.000000
mean 446.000000 0.383838 2.308642 29.699118 0.523008 0.381594 32.204208
std 257.353842 0.486592 0.836071 14.526497 1.102743 0.806057 49.693429
min 1.000000 0.000000 1.000000 0.420000 0.000000 0.000000 0.000000
25% 223.500000 0.000000 2.000000 20.125000 0.000000 0.000000 7.910400
50% 446.000000 0.000000 3.000000 28.000000 0.000000 0.000000 14.454200
75% 668.500000 1.000000 3.000000 38.000000 1.000000 0.000000 31.000000
max 891.000000 1.000000 3.000000 80.000000 8.000000 6.000000 512.329200

In [9]:
# This means the training set has 891 observations with 12 variables each
train.shape


Out[9]:
(891, 12)

Female vs Male

How many people in your training set survived the disaster with the Titanic? To see this, you can use the value_counts() method in combination with standard bracket notation to select a single column of a DataFrame:

# absolute numbers
train["Survived"].value_counts()

# percentages
train["Survived"].value_counts(normalize = True)

If you run these commands in the console, you'll see that 549 individuals died (62%) and 342 survived (38%). A simple way to predict heuristically could be: "majority wins". This would mean that you will predict every unseen observation to not survive.

To dive in a little deeper we can perform similar counts and percentage calculations on subsets of the Survived column. For example, maybe gender could play a role as well? You can explore this using the .value_counts() method for a two-way comparison on the number of males and females that survived, with this syntax:

train["Survived"][train["Sex"] == 'male'].value_counts()
train["Survived"][train["Sex"] == 'female'].value_counts()

To get proportions, you can again pass in the argument normalize = True to the .value_counts() method.

The results below show that 81% of the men died, but only 26% of the women died. So gender matters tremendously.


In [10]:
# Passengers that survived vs passengers that passed away
print(train["Survived"].value_counts())

# As proportions
print(train["Survived"].value_counts(normalize=True))

# Males that survived vs males that passed away
print(train["Survived"][train["Sex"] == 'male'].value_counts())

# Females that survived vs Females that passed away
print(train["Survived"][train["Sex"] == 'female'].value_counts())

# Normalized male survival
print("\nMale survival rates:\n{}".format(train["Survived"][train["Sex"] == 'male'].value_counts(normalize=True)))

# Normalized female survival
print("\nFemale survival rates:\n{}".format(train["Survived"][train["Sex"] == 'female'].value_counts(normalize=True)))


0    549
1    342
Name: Survived, dtype: int64
0    0.616162
1    0.383838
Name: Survived, dtype: float64
0    468
1    109
Name: Survived, dtype: int64
1    233
0     81
Name: Survived, dtype: int64

Male survival rates:
0    0.811092
1    0.188908
Name: Survived, dtype: float64

Female survival rates:
1    0.742038
0    0.257962
Name: Survived, dtype: float64

Does age play a role?

Another variable that could influence survival is age; since it's probable that children were saved first. You can test this by creating a new column with a categorical variable Child. Child will take the value 1 in cases where age is less than 18, and a value of 0 in cases where age is greater than or equal to 18.

To add this new variable you need to do two things (i) create a new column, and (ii) provide the values for each observation (i.e., row) based on the age of the passenger.

Adding a new column with Pandas in Python is easy and can be done via the following syntax:

your_data["new_var"] = 0

This code would create a new column in the train DataFrame titled new_var with 0 for each observation.

To set the values based on the age of the passenger, you make use of a boolean test inside the square bracket operator. With the []-operator you create a subset of rows and assign a value to a certain variable of that subset of observations. For example,

train["new_var"][train["Fare"] > 10] = 1

would give a value of 1 to the variable new_var for the subset of passengers whose fares greater than 10. Remember that new_var has a value of 0 for all other values (including missing values).

The data below shows that 54% of children survived, but only 38% of adults survived. So yes, age does play a role.


In [11]:
# Added a new column called Child in the train data frame that initially takes the value 0 for all observations
train["Child"] = float(0)

# Assign 1 to passengers under 18
train.loc[train["Age"] < 18, "Child"] = 1

# Print normalized Survival Rates for passengers under 18
print("Survival rates for children:\n{}".format(train["Survived"][train["Child"] == 1].value_counts(normalize = True)))

# Print normalized Survival Rates for passengers 18 or older
print("\nSurvival rates for adults:\n{}".format(train["Survived"][train["Child"] == 0].value_counts(normalize = True)))


Survival rates for children:
1    0.539823
0    0.460177
Name: Survived, dtype: float64

Survival rates for adults:
0    0.638817
1    0.361183
Name: Survived, dtype: float64

Intro to Decision Trees

In the previous sections, you did all the slicing and dicing yourself to find subsets that have a higher chance of surviving. A decision tree automates this process for you and outputs a classification model or classifier.

Conceptually, the decision tree algorithm starts with all the data at the root node and scans all the variables for the best one to split on. Once a variable is chosen, you do the split and go down one level (or one node) and repeat. The final nodes at the bottom of the decision tree are known as terminal nodes, and the majority vote of the observations in that node determine how to predict for new observations that end up in that terminal node.

First, let's import the necessary libraries ...


In [12]:
# Import the Numpy library
import numpy as np

# Import 'tree' from scikit-learn library
from sklearn import tree

Cleaning and Formatting Your Data

Before you can begin constructing your trees you need to get your hands dirty and clean the data so that you can use all the features available to you. In the first chapter, we saw that the Age variable had some missing value. Missingness is a whole subject with and in itself, but we will use a simple imputation technique where we substitute each missing value with the median of the all present values.

train["Age"] = train["Age"].fillna(train["Age"].median())

Another problem is that the Sex and Embarked variables are categorical but in a non-numeric format. Thus, we will need to assign each class a unique integer so that Python can handle the information. Embarked also has some missing values which you should impute witht the most common class of embarkation, which is "S".


In [13]:
# Fill missing Age values with median age from training set
train["Age"] = train["Age"].fillna(train["Age"].median())
test["Age"] = test["Age"].fillna(train["Age"].median())

# Convert the male and female groups to integer form by replacing "male" with 0 and "female" with 1
train.loc[train["Sex"] == "male", "Sex"] = 0
train.loc[train["Sex"] == "female", "Sex"] = 1
test.loc[test["Sex"] == "male", "Sex"] = 0
test.loc[test["Sex"] == "female", "Sex"] = 1

# Print value counts for the Sex and Embarked columns
print(train["Sex"].value_counts())


0    577
1    314
Name: Sex, dtype: int64

In [14]:
# Impute the Embarked variable
train["Embarked"] = train["Embarked"].fillna("S")
test["Embarked"] = test["Embarked"].fillna("S")

# Convert the Embarked classes to integer form
train.loc[train["Embarked"] == "S", "Embarked"] = 0
train.loc[train["Embarked"] == "C", "Embarked"] = 1
train.loc[train["Embarked"] == "Q", "Embarked"] = 2
test.loc[test["Embarked"] == "S", "Embarked"] = 0
test.loc[test["Embarked"] == "C", "Embarked"] = 1
test.loc[test["Embarked"] == "Q", "Embarked"] = 2

print(train["Embarked"].value_counts())


0    646
1    168
2     77
Name: Embarked, dtype: int64

In [15]:
# Impute any missing values in Fare
train["Fare"] = train["Fare"].fillna(train["Fare"].median())
test["Fare"] = test["Fare"].fillna(train["Fare"].median())

Creating your first decision tree

You will use the scikit-learn and numpy libraries to build your first decision tree. scikit-learn can be used to create tree objects from the DecisionTreeClassifier class. The methods that we will use take numpy arrays as inputs and therefore we will need to create those from the DataFrame that we already have. We will need the following to build a decision tree

  • target: A one-dimensional numpy array containing the target/response from the train data. (Survival in your case)
  • features: A multidimensional numpy array containing the features/predictors from the train data. (ex. Sex, Age) Take a look at the sample code below to see what this would look like:

    target = train["Survived"].values

    features = train[["Sex", "Age"]].values

    my_tree = tree.DecisionTreeClassifier()

    my_tree = my_tree.fit(features, target)

One way to quickly see the result of your decision tree is to see the importance of the features that are included. This is done by requesting the .featureimportances attribute of your tree object. Another quick metric is the mean accuracy that you can compute using the .score() function with features_one and target as arguments.

Ok, time for you to build your first decision tree in Python!


In [16]:
# Create the target and features numpy arrays: target, features_one
target = train["Survived"].values
columns_one = ["Pclass", "Sex", "Age", "Fare"]
features_one = train[columns_one].values
features_one


Out[16]:
array([[3, 0, 22.0, 7.25],
       [1, 1, 38.0, 71.2833],
       [3, 1, 26.0, 7.925],
       ..., 
       [3, 1, 28.0, 23.45],
       [1, 0, 26.0, 30.0],
       [3, 0, 32.0, 7.75]], dtype=object)

In [17]:
# Fit your first decision tree: my_tree_one
my_tree_one = tree.DecisionTreeClassifier()
my_tree_one = my_tree_one.fit(features_one, target)

# Look at the importance and score of the included features
print(my_tree_one.feature_importances_)
print(my_tree_one.score(features_one, target))


[ 0.12629524  0.31274009  0.23147427  0.32949039]
0.977553310887

Interpreting your decision tree

The featureimportances attribute make it simple to interpret the significance of the predictors you include. Based on your decision tree, what variable plays the most important role in determining whether or not a passenger survived?

Based on this decision tree, the Fare variable plays the most important role, but it is nearly tied with Age in importance.

Based on the score, we can see that our decision tree fit based on the training set predicts approximately 98% of the values in the training set correctly.

Predict and submit to Kaggle

To send a submission to Kaggle you need to predict the survival rates for the observations in the test set. Luckily, with our decision tree, we can make use of some simple functions to "generate" our answer without having to manually perform subsetting.

First, you make use of the .predict() method. You provide it the model (my_tree_one), the values of features from the dataset for which predictions need to be made (test). To extract the features we will need to create a numpy array in the same way as we did when training the model. However, we need to take care of a small but important problem first. There is a missing value in the Fare feature that needs to be imputed.

Next, you need to make sure your output is in line with the submission requirements of Kaggle: a csv file with exactly 418 entries and two columns: PassengerId and Survived. Then use the code provided to make a new data frame using DataFrame(), and create a csv file using to_csv() method from Pandas.


In [18]:
# Make sure solution directory exists
solution_dir = 'solutions/kaggle/titanic'
if not os.path.isdir(solution_dir):
    os.makedirs(solution_dir)

In [19]:
# Impute the missing value with the median
test.Fare.fillna(test.Fare.median())

# Extract the features from the test set: Pclass, Sex, Age, and Fare.
test_features = test[columns_one].values

# Make your prediction using the test set
my_prediction = my_tree_one.predict(test_features)

# Create a data frame with two columns: PassengerId & Survived. Survived contains your predictions
PassengerId = np.array(test["PassengerId"]).astype(int)
my_solution = pd.DataFrame(my_prediction, PassengerId, columns = ["Survived"])

# Check that your data frame has 418 entries
print(my_solution.shape)

# Write your solution to a csv file
my_solution.to_csv(os.path.join(solution_dir, "decision_tree_one.csv"), index_label = ["PassengerId"])


(418, 1)

How well does that first decision tree do on the test set?

This basic solution achieves a score of 0.75120 on the test set. So it predicts approximately 75% of the values in the test set correctly. But from earlier we saw that this decision tree predicted approximately 98% of the values in the training set correctly?

So why did we do so much better predicting values in the training set as compared to the test set? Overfitting

Overfitting and how to control it

When you created your first decision tree the default arguments for max_depth and min_samples_split were set to None. This means that no limit on the depth of your tree was set. That's a good thing right? Not so fast. We are likely overfitting. This means that while your model describes the training data extremely well, it doesn't generalize to new data, which is frankly the point of prediction. Just look at the Kaggle submission results for the simple model based on Gender and the complex decision tree. Which one does better?

  • A gender-only based model achieves a score of 0.765, which is better than our first decision tree, ouch!

Maybe we can improve the overfit model by making a less complex model? In DecisionTreeRegressor, the depth of our model is defined by two parameters: - the max_depth parameter determines when the splitting up of the decision tree stops. - the min_samples_split parameter monitors the amount of observations in a bucket. If a certain threshold is not reached (e.g minimum 10 passengers) no further splitting can be done.

By limiting the complexity of your decision tree you will increase its generality and thus its usefulness for prediction!

It may also help to add additional features.


In [20]:
# Create a new array with the added features: features_two
columns_two = ["Pclass","Age","Sex","Fare", "SibSp", "Parch", "Embarked"]
features_two = train[columns_two].values

#Control overfitting by setting "max_depth" to 10 and "min_samples_split" to 5 : my_tree_two
max_depth = 10
min_samples_split = 5
my_tree_two = tree.DecisionTreeClassifier(max_depth = max_depth, min_samples_split = min_samples_split, random_state = 1)
my_tree_two.fit(features_two, target)

#Print the score of the new decison tree
print(my_tree_two.score(features_two, target))


0.905723905724

In [21]:
# Make your prediction using the test set
test_features_two = test[columns_two].values
predition_two = my_tree_two.predict(test_features_two)
predition_two.shape


Out[21]:
(418,)

In [22]:
# Create a data frame with two columns: PassengerId & Survived. Survived contains your predictions
solution_two = pd.DataFrame(predition_two, PassengerId, columns = ["Survived"])

# Check that your data frame has 418 entries
print(solution_two.shape)

# Write your solution to a csv file with the name my_solution.csv
solution_two.to_csv(os.path.join(solution_dir,"decision_tree_two.csv"), index_label = ["PassengerId"])


(418, 1)

How well did our attempts at preventing overfitting help?

This second solution achieves a higher score of 0.76555 on the test set, even though it had a lower score of 0.906 on the training set. So it predicts approximately 76.6% of the values in the test set correctly.

This is a little bit better than before we attempted to prevent overfitting. But it is still only par with a pure gender-based model. So there is still a lot of room for improvement.

How can we do better? One way is to spend a little bit of time on feature engineering ...

Feature-engineering for our Titanic data set

Data Science is an art that benefits from a human element. Enter feature engineering: creatively engineering your own features by combining the different existing variables.

While feature engineering is a discipline in itself, too broad to be covered here in detail, you will have a look at a simple example by creating your own new predictive attribute: family_size.

A valid assumption is that larger families need more time to get together on a sinking ship, and hence have lower probability of surviving. Family size is determined by the variables SibSp and Parch, which indicate the number of family members a certain passenger is traveling with. So when doing feature engineering, you add a new variable family_size, which is the sum of SibSp and Parch plus one (the observation itself), to the test and train set.


In [23]:
# Create train_two with the newly defined feature
train_two = train.copy()
train_two["family_size"] = train_two["SibSp"] +  train_two["Parch"] + 1

cols_three = ["Pclass", "Sex", "Age", "Fare", "SibSp", "Parch", "Embarked", "family_size", "Child"]
# Create a new feature set and add the new feature
features_three = train_two[cols_three].values


#Control overfitting by setting "max_depth" to 9 and "min_samples_split" to 6
max_depth = 9
min_samples_split = 6

# Define the tree classifier, then fit the model
my_tree_three = tree.DecisionTreeClassifier(max_depth = max_depth, min_samples_split = min_samples_split, random_state = 1)
my_tree_three.fit(features_three, target)

# Print the score of this decision tree
print(my_tree_three.score(features_three, target))


0.890011223345

In [24]:
# Make your prediction using the test set
test_two = test.copy()

test_two["Child"] = float(0)
test_two.loc[test_two["Age"] < 18, "Child"] = 1

test_two["family_size"] = test_two["SibSp"] +  test_two["Parch"] + 1
test_features_three = test_two[cols_three].values
predition_three = my_tree_three.predict(test_features_three)
predition_three.shape


Out[24]:
(418,)

In [25]:
# Create a data frame with two columns: PassengerId & Survived. Survived contains your predictions
solution_three = pd.DataFrame(predition_three, PassengerId, columns = ["Survived"])

# Check that your data frame has 418 entries
print(solution_three.shape)

# Write your solution to a csv file with the name my_solution.csv
solution_three.to_csv(os.path.join(solution_dir, "decision_tree_three.csv"), index_label = ["PassengerId"])


(418, 1)

This new model has a score of 0.7703, so it predicts about 77% of test set data correctly.

How do we do better?

The Random Forest technique handles the overfitting problem faced with decision trees. It grows multiple (very deep) classification trees using the training set. At the time of prediction, each tree is used to come up with a prediction and every outcome is counted as a vote. For example, if you have trained 3 trees with 2 saying a passenger in the test set will survive and 1 says he will not, the passenger will be classified as a survivor. This approach of overtraining trees, but having the majority's vote count as the actual classification decision, avoids overfitting.

Building a random forest in Python looks almost the same as building a decision tree; so we can jump right to it. There are two key differences, however. Firstly, a different class is used. And second, a new argument is necessary. Also, we need to import the necessary library from scikit-learn.


In [ ]: