For those new to Pandas, you'll learn a number of tips that will help with your data engineering and analysis tasks. You may find these buried in the documentation or StackOverflow posts, but I'm consolidating them here for you.
Here's what's covered:
Here's the link to the original dataset we're using:
This and much more is covered in my upcoming book: Python Business Intelligence Cookbook, now available for pre-order from Packt Publishing.
The first thing we need to do is import the data into a DataFrame. I suggest using the read_csv() method from Pandas for this.
In [1]:
# Import the Python libraries we need
import pandas as pd
In [2]:
# Define a variable for the accidents data file
f = './data/accidents1k.csv'
# Use read_csv() to import the data
accidents = pd.read_csv(f,
                        sep=',',
                        header=0,
                        index_col=False,
                        parse_dates=True,
                        on_bad_lines='warn',
                        skip_blank_lines=True,
                        low_memory=False)
# Run the head() command to see the top 5 rows of the data
accidents.head()
Out[2]:
There are many ways to fill in missing (NaN) values in a DataFrame; some people use the mean of the column, others enter 0. You can do whatever you want. However, just because you tell Pandas to fill in the missing values doesn't mean the change will stick.
Let's use the fillna() method of the DataFrame and see what happens.
In [3]:
# Fill in the NaN values and check the DataFrame
accidents.fillna(value=0).head()
Out[3]:
Hrm, it looks like the DataFrame is updated, but is it? I think not!
In [4]:
accidents.head()
Out[4]:
What the heck?! The missing values haven't actually been updated. So how do we make the change stick? Using the inplace=True argument like so...
In [5]:
# Fill the NaN values and ensure the DataFrame is indeed updated.
accidents.fillna(value=0,
                 inplace=True)
accidents.head()
Out[5]:
Success! The DataFrame has now been updated.
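A quick aside: in newer versions of Pandas, `inplace=True` is discouraged (and slated for removal in some methods), so you may prefer the equivalent idiom of assigning the result back to the variable. Here's a minimal sketch with a made-up one-column DataFrame, since the accident data isn't needed to show the point:

```python
import numpy as np
import pandas as pd

# A tiny hypothetical DataFrame with one missing value
df = pd.DataFrame({'casualties': [1.0, np.nan, 2.0]})

# Assign the result back instead of mutating in place --
# this achieves the same "sticky" update on any Pandas version
df = df.fillna(value=0)

print(df['casualties'].tolist())  # [1.0, 0.0, 2.0]
```

Either way works today; the assignment style just reads more explicitly about where the updated DataFrame ends up.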
One of the reasons Pandas rocks is that you can apply a function to either a single column of a DataFrame or an entire DataFrame, using the apply() function. You'll be using this often, so here's how.
In [6]:
# Let's take a look at the Date column
accidents['Date'].head()
Out[6]:
According to Pandas, the Date column's dtype is object, meaning it doesn't actually see it as a date. Let's change that.
In [7]:
# Define a function to convert a string to a date.
def convert_string_to_date(s):
    """
    Given a string, use the to_datetime function of Pandas to convert
    it to a datetime, and then return it.
    """
    return pd.to_datetime(s)
In [8]:
# Apply the function to the Date column using the apply() function.
# Note: we do not have to explicitly pass in the value in the row being processed.
accidents['Date'] = accidents['Date'].apply(convert_string_to_date)
# Let's check it out.
accidents['Date'].head()
Out[8]:
Voila! Our Date column is now a datetime.
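For what it's worth, `apply()` isn't strictly required here: `pd.to_datetime()` accepts an entire Series, so the same conversion can be done in one vectorized call (and typically faster). A small sketch with sample date strings, since the assumption is only that your column holds parseable dates:

```python
import pandas as pd

# Hypothetical sample of date strings standing in for the Date column
dates = pd.Series(['2005-01-04', '2005-01-05', '2005-01-06'])

# to_datetime converts the whole Series at once -- no helper function needed
converted = pd.to_datetime(dates)

print(converted.dtype)  # datetime64[ns]
```

On the real data that would be `accidents['Date'] = pd.to_datetime(accidents['Date'])`. The `apply()` version above is still worth knowing, though, because it generalizes to functions `to_datetime` can't replace.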
Along with applying a function to a single column, another common task is to create an additional column based on the values in two or more columns. In order to do that, we need to create a function that takes multiple parameters, and then apply it to the DataFrame.
We'll be using the same apply() function we used in the previous tip, plus a little lambda magic.
In [9]:
# Create a few dicts and a DataFrame to hold the mappings for the accident data
# Accident severity
severity = {
    1: 'fatal',
    2: 'serious',
    3: 'fairly serious'
}
# Day of Week
days_of_week = {
    1: 'Sunday',
    2: 'Monday',
    3: 'Tuesday',
    4: 'Wednesday',
    5: 'Thursday',
    6: 'Friday',
    7: 'Saturday',
    0: 'Earlier this week'
}
# Road surfaces, updated to fit the sensationalism of a news broadcast
road_surfaces = {
    1: 'dry',
    2: 'wet',
    3: 'snow-covered',
    4: 'frosty',
    5: 'flooded',
    6: 'oily',
    7: 'muddy',
    -1: 'Data missing or out of range',
}
# Local Authority (District) - create a DataFrame from the CSV file
# Use read_csv() to create a DataFrame from the local_authority_district mapping tab of the data dictionary.
# There are almost 1,000 districts, hence I put them into a CSV file.
districts = pd.read_csv('./data/local_authority_district.csv',
                        sep=',',
                        header=0,
                        index_col=0,
                        parse_dates=False,
                        on_bad_lines='warn',
                        skip_blank_lines=True,
                        low_memory=False)
In [10]:
# Define a function to create a one-sentence summary of the record.
def create_summary(day_of_week, accident_severity, road_surface, local_authority_district):
    """
    Create a one-sentence summary of the record.
    Parameters: integer values for the Day_of_Week, Accident_Severity,
    Road_Surface_Conditions and Local_Authority_(District) columns
    """
    # Perform the value lookups in the dicts and DataFrame
    dow = days_of_week[day_of_week]
    sev = severity[accident_severity]
    road = road_surfaces[road_surface]
    lad = districts.loc[local_authority_district].label
    # If the day of week was specified, use the first sentence variation; otherwise use the second.
    # Yes, this is redundant and we could optimize it. I leave that to you!
    if day_of_week != 0:
        return "On {} a {} accident occurred on a {} road in {}".format(dow, sev, road, lad)
    else:
        return "{} a {} accident occurred on a {} road in {}".format(dow, sev, road, lad)
In [11]:
# Create a new column in the DataFrame and fill it with the summary produced by the create_summary function
# Pass in the parameters needed to create the summary
accidents['summary'] = accidents.apply(lambda x: create_summary(x['Day_of_Week'],
                                                                x['Accident_Severity'],
                                                                x['Road_Surface_Conditions'],
                                                                x['Local_Authority_(District)']),
                                       axis=1)
# Let's see some results!
accidents['summary'].head()
Out[11]:
In [12]:
# Let's view an entire summary
accidents['summary'].iloc[0]
Out[12]:
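One more note before we talk about lambdas: when a new column depends on just a single column and a dict lookup (as with `days_of_week` or `severity` on their own), `Series.map()` does the same translation vectorized, no `apply()` needed. The `apply()` + lambda approach above is the right tool once several columns are involved, but `map()` is worth having in your pocket. A small sketch using the severity mapping with made-up codes:

```python
import pandas as pd

# The same severity mapping from above
severity = {1: 'fatal', 2: 'serious', 3: 'fairly serious'}

# Hypothetical sample of severity codes standing in for a column
codes = pd.Series([3, 1, 2])

# map() looks each value up in the dict -- a vectorized alternative
# to apply() for single-column translations
labels = codes.map(severity)

print(labels.tolist())  # ['fairly serious', 'fatal', 'serious']
```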
The lambda function can easily throw you a curve. For more information on what lambdas are and how to use them, check out the Python Tutorial: Lambda, Filter, Reduce and Map.
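If the syntax is tripping you up, it may help to see that a lambda is just an unnamed function written inline. These two definitions are equivalent:

```python
# A regular named function...
def double(x):
    return x * 2

# ...and its lambda equivalent, assigned to a name only so we can call it
double_lambda = lambda x: x * 2

print(double(21), double_lambda(21))  # 42 42
```

In the `apply()` call above, the lambda's only job is to receive each row as `x` and forward the four columns we care about to `create_summary()`.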
With these three tips you're well on your way to data engineering your day away.