Three Pandas Tips For Noobs

If you're new to Pandas, these three tips will help with your data engineering and analysis tasks. You may find them buried in the documentation or in StackOverflow posts, but I'm consolidating them here for you.

Here's what's covered:

  1. Ensuring changes you make to DataFrames stick
  2. Applying a function with no arguments to a DataFrame
  3. Applying a function with arguments to a DataFrame

Here's the link to the original dataset we're using:

Additional Resources

This and much more is covered in my upcoming book: Python Business Intelligence Cookbook, now available for pre-order from Packt Publishing.

Import The Data

The first thing we need to do is import the data into a DataFrame. I suggest using Pandas' read_csv() function for this.


In [1]:
# Import the Python libraries we need
import pandas as pd

In [2]:
# Define a variable for the accidents data file
f = './data/accidents1k.csv'

# Use read_csv() to import the data
accidents = pd.read_csv(f,
                        sep=',',
                        header=0,
                        index_col=False,
                        parse_dates=True,
                        on_bad_lines='warn',
                        skip_blank_lines=True,
                        low_memory=False
                        )
# Run the head() command to see the top 5 rows of the data
accidents.head()


Out[2]:
Unnamed: 0 Accident_Index Location_Easting_OSGR Location_Northing_OSGR Longitude Latitude Police_Force Accident_Severity Number_of_Vehicles Number_of_Casualties ... Pedestrian_Crossing-Human_Control Pedestrian_Crossing-Physical_Facilities Light_Conditions Weather_Conditions Road_Surface_Conditions Special_Conditions_at_Site Carriageway_Hazards Urban_or_Rural_Area Did_Police_Officer_Attend_Scene_of_Accident LSOA_of_Accident_Location
0 0 197901A11AD14 NaN NaN NaN NaN 1 3 2 1 ... -1 -1 1 8 1 -1 0 -1 -1 NaN
1 1 197901A1BAW34 198460 894000 NaN NaN 1 3 1 1 ... -1 -1 4 8 3 -1 0 -1 -1 NaN
2 2 197901A1BFD77 406380 307000 NaN NaN 1 3 2 3 ... -1 -1 4 8 3 -1 0 -1 -1 NaN
3 3 197901A1BGC20 281680 440000 NaN NaN 1 3 2 2 ... -1 -1 4 8 3 -1 0 -1 -1 NaN
4 4 197901A1BGF95 153960 795000 NaN NaN 1 2 2 1 ... -1 -1 4 3 3 -1 0 -1 -1 NaN

5 rows × 33 columns

1. Ensuring Your Changes Stick

There are many ways to fill in missing (NaN) values in a DataFrame; some people use the mean of the column, others enter 0. You can do whatever you want. However, just because you tell Pandas to fill in the missing values doesn't mean the change will stick.
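As a minimal sketch of those two choices, here's a constant fill next to a mean fill on a toy DataFrame (the column name is made up for illustration):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'speed': [30.0, np.nan, 50.0]})

# Fill missing values with a constant
filled_zero = df['speed'].fillna(0)

# Fill missing values with the column mean: (30 + 50) / 2 = 40
filled_mean = df['speed'].fillna(df['speed'].mean())
```
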

Let's use the fillna() method of the DataFrame and see what happens.


In [3]:
# Fill in the NaN values and check the DataFrame
accidents.fillna(value=0).head()


Out[3]:
Unnamed: 0 Accident_Index Location_Easting_OSGR Location_Northing_OSGR Longitude Latitude Police_Force Accident_Severity Number_of_Vehicles Number_of_Casualties ... Pedestrian_Crossing-Human_Control Pedestrian_Crossing-Physical_Facilities Light_Conditions Weather_Conditions Road_Surface_Conditions Special_Conditions_at_Site Carriageway_Hazards Urban_or_Rural_Area Did_Police_Officer_Attend_Scene_of_Accident LSOA_of_Accident_Location
0 0 197901A11AD14 0 0 0 0 1 3 2 1 ... -1 -1 1 8 1 -1 0 -1 -1 0
1 1 197901A1BAW34 198460 894000 0 0 1 3 1 1 ... -1 -1 4 8 3 -1 0 -1 -1 0
2 2 197901A1BFD77 406380 307000 0 0 1 3 2 3 ... -1 -1 4 8 3 -1 0 -1 -1 0
3 3 197901A1BGC20 281680 440000 0 0 1 3 2 2 ... -1 -1 4 8 3 -1 0 -1 -1 0
4 4 197901A1BGF95 153960 795000 0 0 1 2 2 1 ... -1 -1 4 3 3 -1 0 -1 -1 0

5 rows × 33 columns

Hrm, it looks like the DataFrame is updated, but is it? I think not!


In [4]:
accidents.head()


Out[4]:
Unnamed: 0 Accident_Index Location_Easting_OSGR Location_Northing_OSGR Longitude Latitude Police_Force Accident_Severity Number_of_Vehicles Number_of_Casualties ... Pedestrian_Crossing-Human_Control Pedestrian_Crossing-Physical_Facilities Light_Conditions Weather_Conditions Road_Surface_Conditions Special_Conditions_at_Site Carriageway_Hazards Urban_or_Rural_Area Did_Police_Officer_Attend_Scene_of_Accident LSOA_of_Accident_Location
0 0 197901A11AD14 NaN NaN NaN NaN 1 3 2 1 ... -1 -1 1 8 1 -1 0 -1 -1 NaN
1 1 197901A1BAW34 198460 894000 NaN NaN 1 3 1 1 ... -1 -1 4 8 3 -1 0 -1 -1 NaN
2 2 197901A1BFD77 406380 307000 NaN NaN 1 3 2 3 ... -1 -1 4 8 3 -1 0 -1 -1 NaN
3 3 197901A1BGC20 281680 440000 NaN NaN 1 3 2 2 ... -1 -1 4 8 3 -1 0 -1 -1 NaN
4 4 197901A1BGF95 153960 795000 NaN NaN 1 2 2 1 ... -1 -1 4 3 3 -1 0 -1 -1 NaN

5 rows × 33 columns

What the heck?! The missing values haven't actually been updated. That's because fillna() returns a new, modified copy by default and leaves the original DataFrame untouched. So how do we make the change stick? Using the inplace=True argument, like so...


In [5]:
# Fill the NaN values and ensure the DataFrame is indeed updated.
accidents.fillna(value=0,
                 inplace=True)

accidents.head()


Out[5]:
Unnamed: 0 Accident_Index Location_Easting_OSGR Location_Northing_OSGR Longitude Latitude Police_Force Accident_Severity Number_of_Vehicles Number_of_Casualties ... Pedestrian_Crossing-Human_Control Pedestrian_Crossing-Physical_Facilities Light_Conditions Weather_Conditions Road_Surface_Conditions Special_Conditions_at_Site Carriageway_Hazards Urban_or_Rural_Area Did_Police_Officer_Attend_Scene_of_Accident LSOA_of_Accident_Location
0 0 197901A11AD14 0 0 0 0 1 3 2 1 ... -1 -1 1 8 1 -1 0 -1 -1 0
1 1 197901A1BAW34 198460 894000 0 0 1 3 1 1 ... -1 -1 4 8 3 -1 0 -1 -1 0
2 2 197901A1BFD77 406380 307000 0 0 1 3 2 3 ... -1 -1 4 8 3 -1 0 -1 -1 0
3 3 197901A1BGC20 281680 440000 0 0 1 3 2 2 ... -1 -1 4 8 3 -1 0 -1 -1 0
4 4 197901A1BGF95 153960 795000 0 0 1 2 2 1 ... -1 -1 4 3 3 -1 0 -1 -1 0

5 rows × 33 columns

Success! The DataFrame has now been updated.
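An equally valid pattern, and one the Pandas developers generally recommend over inplace=True, is to assign the result back to the variable. A minimal sketch on a toy DataFrame:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'a': [1.0, np.nan]})

# Assign the result back instead of mutating in place;
# this also makes the change stick
df = df.fillna(value=0)
```
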

2. Applying a Function With No Arguments to a DataFrame

One of the reasons Pandas rocks is that you can apply a function to either a single column of a DataFrame or an entire DataFrame, using the apply() function. You'll be using this often, so here's how.
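As a minimal illustration of the pattern before we apply it to the accidents data (toy values, not the accidents file):

```python
import pandas as pd

s = pd.Series([1, 2, 3])

def square(x):
    """Return the square of a single value."""
    return x * x

# apply() calls square() once per element;
# we never pass the value in explicitly
squared = s.apply(square)
```
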


In [6]:
# Let's take a look at the Date column
accidents['Date'].head()


Out[6]:
0    18/01/1979
1    01/01/1979
2    01/01/1979
3    01/01/1979
4    01/01/1979
Name: Date, dtype: object

According to Pandas, Date is an object column, meaning Pandas doesn't actually see the values as dates. Let's change that.


In [7]:
# Define a function to convert a string to a date.

def convert_string_to_date(s):
    """
    Given a string, use the to_datetime function of Pandas to convert
    it to a datetime, and then return it.
    """
    return pd.to_datetime(s)

In [8]:
# Apply the function to the Date column using the apply() function.
# Note: we do not have to explicitly pass in the value in the row being processed.
accidents['Date'] = accidents['Date'].apply(convert_string_to_date)

# Let's check it out.
accidents['Date'].head()


Out[8]:
0   1979-01-18
1   1979-01-01
2   1979-01-01
3   1979-01-01
4   1979-01-01
Name: Date, dtype: datetime64[ns]

Voila! Our Date column is now a datetime.
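Incidentally, calling pd.to_datetime once on the whole column is typically much faster than apply(), since it avoids a Python-level function call per row. A sketch (dayfirst=True is assumed here because the file stores dates as DD/MM/YYYY):

```python
import pandas as pd

dates = pd.Series(['18/01/1979', '01/01/1979'])

# One vectorized call instead of one apply() call per row
parsed = pd.to_datetime(dates, dayfirst=True)
```
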

3. Applying a Function With Arguments to a DataFrame

Along with applying a function to a single column, another common task is to create an additional column based on the values in two or more columns. To do that, we need to create a function that takes multiple parameters and then apply it to the DataFrame.

We'll be using the same apply() function we used in the previous tip, plus a little lambda magic.


In [9]:
# Create a few dicts and a DataFrame to hold the mappings for the accident data

# Accident severity
severity = {
    1: 'fatal',
    2: 'serious',
    3: 'fairly serious'
}

# Day of Week
days_of_week = {
    1: 'Sunday',
    2: 'Monday',
    3: 'Tuesday',
    4: 'Wednesday',
    5: 'Thursday',
    6: 'Friday',
    7: 'Saturday',
    0: 'Earlier this week'
}

# Road surfaces, updated to fit the sensationalism of a news broadcast
road_surfaces = {
    1: 'dry',
    2: 'wet',
    3: 'snow-covered',
    4: 'frosty',
    5: 'flooded',
    6: 'oily',
    7: 'muddy',
    -1: 'Data missing or out of range',
}

# Local Authority (District) - create a DataFrame from the CSV file

# Use read_csv() to create a DataFrame from the local_authority_district mapping tab of the data dictionary.
# There are almost 1000 districts, hence I put them into a CSV file.
districts = pd.read_csv('./data/local_authority_district.csv',
                        sep=',',
                        header=0,
                        index_col=0,
                        parse_dates=False,
                        on_bad_lines='warn',
                        skip_blank_lines=True,
                        low_memory=False
                        )
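A quick aside: if all you need is to translate a single column's codes into labels, Series.map() with a dict is a vectorized alternative to a row-wise function. A sketch using a copy of the severity dict above:

```python
import pandas as pd

severity = {1: 'fatal', 2: 'serious', 3: 'fairly serious'}
codes = pd.Series([3, 3, 2])

# map() looks each value up in the dict; values with no
# matching key become NaN
labels = codes.map(severity)
```
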

In [10]:
# Define a function to create a one-sentence summary of the record.
def create_summary(day_of_week, accident_severity, road_surface, local_authority_district):
    """
    Create a one-sentence summary of the record.
    Parameters: integer values for the Day_of_Week, Accident_Severity,
                Road_Surface_Conditions and Local_Authority_(District) columns
    """
    
    # Perform the value lookups in the dicts and DataFrame
    dow = days_of_week[day_of_week]
    sev = severity[accident_severity]
    road = road_surfaces[road_surface]
    lad = districts.loc[local_authority_district].label
    
    # If the day of week was specified use the first sentence variation, otherwise use the second
    # Yes, this is redundant and we could optimize it. I leave that to you!
    if day_of_week != 0:
        return "On {} a {} accident occurred on a {} road in {}".format(dow, sev, road, lad)
    else:
        return "{} a {} accident occurred on a {} road in {}".format(dow, sev, road, lad)

In [11]:
# Create a new column in the DataFrame and fill it with the summary produced by the create_summary function
# Pass in the parameters needed to create the summary
accidents['summary'] = accidents.apply(lambda x: create_summary(x['Day_of_Week'],
                                                                x['Accident_Severity'],
                                                                x['Road_Surface_Conditions'],
                                                                x['Local_Authority_(District)']), axis=1)

# Let's see some results!
accidents['summary'].head()


Out[11]:
0    On Thursday a fairly serious accident occurred...
1    On Monday a fairly serious accident occurred o...
2    On Monday a fairly serious accident occurred o...
3    On Monday a fairly serious accident occurred o...
4    On Monday a serious accident occurred on a sno...
Name: summary, dtype: object

In [12]:
# Let's view an entire summary
accidents['summary'][0]


Out[12]:
'On Thursday a fairly serious accident occurred on a dry road in Hammersmith and Fulham'

The lambda function can easily throw you a curve. For more information on what lambdas are and how to use them, check out the Python Tutorial: Lambda, Filter, Reduce and Map.
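For reference, the lambda above is just shorthand for a one-off function; these two row-wise calls are equivalent (toy data, hypothetical column names):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [10, 20]})

def add_cols(a, b):
    """Add two values; stands in for create_summary above."""
    return a + b

# A lambda unpacks each row and forwards the values to the named function
with_def = df.apply(lambda x: add_cols(x['a'], x['b']), axis=1)

# The same computation written inline in the lambda itself
with_lambda = df.apply(lambda x: x['a'] + x['b'], axis=1)
```
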

And Go!

With these three tips you're well on your way to data engineering your day away.