Introduction to data analytics with pandas

Quentin Caudron



@QuentinCaudron

You can ignore this next cell; it's only for presentations !


In [ ]:
%%HTML
<style>

.rendered_html {
  font-size: 0.7em;
}

.CodeMirror-scroll {
    font-size: 1.2em;
}

.rendered_html table, .rendered_html th, .rendered_html tr, .rendered_html td, .rendered_html h2, .rendered_html h4 {
     font-size: 100%;
}

</style>

Systems check

Do you have a working Python installation, with the pandas package ?


In [1]:
import pandas as pd

Note : This cell should run without raising a traceback. Assuming it runs, you can also try printing the value of pd.__version__ to see what version of pandas you have installed.

A little about me

  • Lapsed computational physicist
  • PhD computational neuroscience, postdoc statistical epidemiology
  • Data Scientist at CBRE - www.cbredev.com
  • ATOM in Seattle

A little about the hero of this story

We'll be analysing a real-world dataset together. It's about my favourite thing in the world : coffee. This dataset was collected at the Mathematics Institute at the University of Warwick. It's a time-series dataset, describing the total number of coffees made by our espresso machine by a certain date.

A little about this workshop

We'll be running through an analysis of this dataset as a way to expose you to the pandas API. The aim is to develop a little familiarity with how to work with pandas.

Slides are available at https://github.com/QCaudron/pydata_pandas. One notebook contains solutions; beware of spoilers.

The notebooks contain notes about what we're doing. I'll skip most of them during the workshop, but they should help explain things along the way.

The pandas API is enormous. The documentation is excellent, don't hesitate to look things up.

Key questions

  1. Who are the main contributors to this dataset, and when are contributions generally made ?
  2. What are the department's weekday coffee habits ?
  3. How much coffee are people drinking ?

Let's begin


In [2]:
import pandas as pd
%matplotlib inline

Note : The second line here tells matplotlib to plot directly under the cell where any plotting code is called. pandas uses matplotlib to generate graphs, and without this, the graphs would appear outside the Jupyter notebook when you called plt.show() - but we just want them to appear without having to do this.

http://ipython.readthedocs.io/en/stable/interactive/plotting.html#id1

Importing the data

Let's import the coffee data from CSV.


In [ ]:
# Read data from data/coffees.csv
data =

Note : pandas can read from many data formats : CSV, JSON, Excel, HDF5, SQL, and more.

http://pandas.pydata.org/pandas-docs/version/0.20/io.html
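
Note : a minimal sketch of how this cell might be filled in, assuming the file sits at data/coffees.csv relative to the notebook :

data = pd.read_csv("data/coffees.csv")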

What does this data look like ?


In [ ]:

Let's just look at the first few rows.


In [ ]:
# .head()

We have an index, and three columns : timestamp, coffees, and contributor.

Uh-oh. Why is there a string of text, testing, in our coffee numbers ? What's going on in the coffees column in the row after that ?

Note : df.head(n=10) would show the first ten rows. The default is n=5.

https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.head.html

Let's look at that string in the third row.


In [ ]:
# .loc or .iloc

Definitely a string. We'll note this as something to fix after we finish looking around.

Note : .loc uses a label-based lookup, which means that the value you pass into the square brackets must be in the index. Another method, .iloc, is integer-location-based, so .iloc[2] would return the third row. In this case, they're the same, but had we changed our index, as we'll see later, things would work differently.

Indexing a dataframe with [] directly returns a pd.Series or pd.DataFrame by searching over columns, not rows. Indexing a pd.Series with [] is like indexing a dataframe with .iloc.

https://pandas.pydata.org/pandas-docs/stable/indexing.html
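
Note : as a sketch, while the index is still the default integer index, both of these would pull out that row :

data.loc[2]   # label-based : the row whose index label is 2
data.iloc[2]  # position-based : the third row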

We should also take a look at that NaN. In fact, let's look at the first five values in coffees.


In [ ]:
# [] indexing on a series

Note : here, we're indexing a series ( a pd.Series object ). From a pd.DataFrame ( here, data ), when you access a single column ( data.coffees or data["coffees"] ), the object returned is a pd.Series. From that, indexing directly with [] works in an integer-location-based manner, and like with numpy arrays, you can take slices ( [:5] ).

https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html
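
Note : one possible way to look at those first five values, using a plain [] slice on the series :

data.coffees[:5]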

How long is the dataset ?


In [ ]:
print("Dataset length :")

# len()
print()

What else can we find out ?


In [ ]:
# .describe()

Looks like we also have some missing data - we have 671 rows, but the coffees column only has 658 entries.

Note : .describe() returns different things based on what's in the dataframe, as we'll see later. For numerical columns, it will return things like the mean, standard deviation, and percentiles. For object columns ( like strings ), it will return the most frequent entry, and for datetimes, the first and last items too. For all columns, .describe() returns the count of entries in that column ( not counting NaNs ), and for non-numerical columns, the number of unique entries. You can control what's returned using .describe()'s keyword arguments.

https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html

Let's look at the dataframe where coffees is null.


In [ ]:
# .isnull() and boolean indexing with []

Note : .isnull() returns a boolean array ( an array of Trues and Falses ), that you can then use to index the dataframe directly. Here, our boolean array tells us which entries in the coffees column are null, and we use that to index against the full dataframe - so we get back every column in the dataframe, but only those rows where coffees is null.

https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isnull.html
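
Note : a sketch of this pattern, filtering the dataframe down to the rows where coffees is null :

data[data.coffees.isnull()]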

What type of Python objects are the columns ?


In [ ]:
# .dtypes

The contributor column makes sense as object, because we expect strings there; but surely the timestamp should be a timestamp-type, and coffees should be numerical ?

Let's inspect what's in the timestamp column.


In [ ]:
# print the first element of the series with [] indexing
print()

# print its type()
print()

It looks like the timestamp field was read from CSV as a string. That makes sense - CSV files are very basic. We'll have pandas interpret these strings as datetimes for us automatically.

Note : here's an example of using direct [] indexing on a pd.Series. We're accessing the first entry, just to see what type of object we have there.

On our first pass, what problems did we find ?

  • The timestamp column contains strings; these need to be datetimes
  • The coffees column contains some null values and at least one string

Cleaning the data

The coffees column should only contain numerical data.


In [ ]:
# cast the coffees column using pd.to_numeric, and coerce errors
data.coffees = 

data.head()
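
Note : one way to do this cast, coercing anything unparseable ( like the "testing" string ) to NaN :

data.coffees = pd.to_numeric(data.coffees, errors="coerce")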

The coffees column contains NaNs.


In [ ]:
# Use .dropna() using a subset, and pass inplace


data.head()
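
Note : a sketch of the dropna call, using subset so only rows missing a coffees value are removed, and doing it in place :

data.dropna(subset=["coffees"], inplace=True)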

The coffees column is of type float.


In [ ]:
# Cast to int using .astype()
data.coffees = 

data.head()
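
Note : one way to perform the cast, now that the NaNs are gone :

data.coffees = data.coffees.astype(int)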

Let's have pandas parse the timestamp strings to datetime objects.


In [ ]:
# pd.to_datetime()
data.timestamp = 

# Confirm dtypes
data.dtypes
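
Note : a minimal sketch of the parsing step :

data.timestamp = pd.to_datetime(data.timestamp)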

So where do we stand ?


In [ ]:
# .describe(), passing the include kwarg to see all information

In [ ]:
# What do the first few rows look like ?

Note : .describe(include="all") is describing all attributes of all columns, but some don't make sense based on the column's dtype. For example, the contributor column has no first and last attributes, because those describe the first and last entries in an ordered series. That makes sense for the timestamp - those have an intuitive definition of sorting - but not so much for strings ( alphabetical order doesn't really matter when they're arbitrary strings ). Similary, the timestamp column has no mean or other numerical traits. What does it mean to calculate the mean timestamp ?

The time-series at a glance

Let's begin by visualising the coffee counts.


In [ ]:
# .plot() on the coffees series

pandas is plotting the coffees against the index, which is just a series of integers.

Note : .plot() on a pd.Series will plot the data against the index. On a pd.DataFrame, the .plot() method allows plotting of one column against another.

By default, .plot() renders a line graph, but you can specify which type of plot you'd like - bar, line, histogram, area, scatter, etc.

https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.html

https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html

Let's use the dataframe's plot() method rather than that of the series.


In [ ]:
# .plot() on the dataframe, 
# pass x kwarg to plot against the timestamp
# use a dot-dash style
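
Note : a sketch of what this call might look like, where style=".-" gives dots joined by lines :

data.plot(x="timestamp", style=".-")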

We have some very uneven spacing in places. We might start by cutting off the last few points of this time-series, where a lot of data seems to be missing.

We'll inspect the last few points of this time-series.


In [ ]:
# .tail() with ten rows

After mid-March, things start getting spaced rather erratically.

Let's cut off the tail of the time-series, anything after 2013-03-01.


In [ ]:
# Use conditional indexing against the timestamp
data = 
data.tail()

Note : this is another example of boolean indexing. data.timestamp < "2013-03-01" is a boolean array, and can be passed into the dataframe immediately in [], much like with a np.ndarray.
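
Note : one way this truncation might look :

data = data[data.timestamp < "2013-03-01"]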

One final look.


In [ ]:
# Once again, plot the data against the timestamp

1. Contributions to the time-series

Who are our main contributors ?


In [ ]:
# .value_counts()

Note : .value_counts() counts the unique values in a series. It's similar to doing a .groupby() followed by a .count(), as we'll see soon.

https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html
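
Note : a sketch of the counting step and the bar chart that follows it :

data.contributor.value_counts()                    # who contributed, and how often
data.contributor.value_counts().plot(kind="bar")   # the same counts, as a bar chart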

Let's plot this.


In [ ]:
# .plot() a bar chart of the value counts

On which weekdays were contributions made ?


In [ ]:
# Create a series of the weekdays
# for each entry using .dt.weekday
weekdays = 

# .assign() it to our dataframe
data = 

data.head()
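
Note : one possible approach, using the .dt accessor on the datetime column :

weekdays = data.timestamp.dt.weekday
data = data.assign(weekdays=weekdays)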

Can we replace these integers with actual weekdays ?


In [ ]:
weekday_names = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
weekday_dict = {key: weekday_names[key] for key in range(7)}

def day_of_week(idx):
    return weekday_dict[idx]

# Use .apply() to apply a custom function to the weekdays column
data.weekdays = 

data.head()
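
Note : a sketch of the mapping step, reusing the day_of_week function defined above :

data.weekdays = data.weekdays.apply(day_of_week)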

Let's group by these weekdays.


In [ ]:
# .groupby() the weekdays and then .count() rows in each group
weekday_counts = 

# We can reorder this dataframe by our weekday_names 
# list using .loc, indexing with the names
weekday_counts = 

weekday_counts

Note : this first line could be replaced by weekday_counts = data.weekdays.value_counts(), the only difference being that that call would return a series, whereas here we get back a dataframe.
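
Note : a sketch of the two steps, assuming the weekday_names list from earlier is still in scope :

weekday_counts = data.groupby("weekdays").count()
weekday_counts = weekday_counts.loc[weekday_names]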

We can now visualise these weekday counts.


In [ ]:
# Plot a bar chart of the coffees data in weekday_counts
# Title : "Datapoints added on each weekday"

2. Weekday coffee habits

First, we'll set our timestamps to the dataframe's index.


In [ ]:
# Set the dataframe's .index property
data.index = 

# Let's drop the timestamp column, as we no longer need it


data.head()
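
Note : one way to fill in this cell :

data.index = data.timestamp
data = data.drop("timestamp", axis=1)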

Let's add some rows at midnight on every day.


In [ ]:
# pd.date_range, with daily frequency, and normalisation
midnights = 

midnights

Note : pd.date_range creates a fixed-frequency DatetimeIndex. normalize=True ensures these datetimes are at midnight, and not at whatever time the starting point is.

https://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html
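
Note : a sketch of the call, spanning the first to the last timestamp in our index at daily frequency :

midnights = pd.date_range(data.index[0], data.index[-1], freq="D", normalize=True)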

Let's take the union of this index and our dataset's index.


In [ ]:
# Take the union of the existing and new indices
new_index = 

new_index

Note : the union of these indices is just a new index where entries from both indices are present. It's sorted by time.
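
Note : as a sketch :

new_index = midnights.union(data.index)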

Now we can reindex our dataframe with this new index.


In [ ]:
# .reindex() the dataframe to get an upsampled dataframe
upsampled_data = 

upsampled_data.head(10)

Note : .reindex() keeps any values that conform to the new index, and inserts NaNs where we have no values.

https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html
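
Note : one possible call :

upsampled_data = data.reindex(new_index)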

We can fill in these NaNs using interpolation.


In [ ]:
# .interpolate the upsampled_data using the time method
upsampled_data = 

upsampled_data.head(10)
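
Note : a sketch of the interpolation, using the "time" method so values are weighted by how far apart the timestamps are :

upsampled_data = upsampled_data.interpolate(method="time")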

We're now ready to resample the time-series at a daily frequency.


In [ ]:
# .resample() the upsampled dataframe,
# using .asfreq() to get only exactly daily values
daily_data = 

# Drop the contributor column, we no longer need it
daily_data = 

# Generate a column of weekday_names
daily_data["weekdays"] = 

daily_data.head()
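
Note : one possible way to fill in this cell; the last line reuses the day_of_week helper from earlier rather than assuming any particular pandas weekday-name accessor :

daily_data = upsampled_data.resample("D").asfreq()
daily_data = daily_data.drop("contributor", axis=1)
daily_data["weekdays"] = [day_of_week(day) for day in daily_data.index.weekday]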

In [ ]:
# Let's plot the data once more, to see how we're doing

Let's begin by figuring out how many coffees are made on any given day.


In [ ]:
# Use .diff() on the coffees column; follow up with .shift()
coffees_made = 

# Add this as a column to the dataframe
daily_data["coffees_made_today"] = 

daily_data.head(n=10)

Note : we use .shift() here because if we look at the .diff() between a Monday and a Tuesday, those coffees are attributed to the Tuesday. However, what we want to say is "this many coffees were made at some point on the Monday", so we shift the entire series up one.
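
Note : a sketch of this, with .shift(-1) moving the differences up one row so each count lands on the day the coffees were actually made :

coffees_made = daily_data.coffees.diff().shift(-1)
daily_data["coffees_made_today"] = coffees_made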

Now we can group this by weekday.


In [ ]:
# .groupby weekdays, take the mean, and
# grab the coffees_made_today column
coffees_by_day = 

coffees_by_day
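
Note : one possible way to compute this :

coffees_by_day = daily_data.groupby("weekdays").mean().coffees_made_today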

Let's order this series and then plot it.


In [ ]:
# Sort coffees_by_day by our list of weekday names
coffees_by_day = 

# Plot a bar chart

Wednesday was seminar day...

3. Coffee per person

We can now pull in data on how many people were in the department.


In [ ]:
# Bring in data/department_members.csv;
# have the first column be the index, and parse the dates
people = 

people.head()
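
Note : a sketch of the read, treating the first column as a DatetimeIndex :

people = pd.read_csv("data/department_members.csv", index_col=0, parse_dates=True)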

Let's join the datasets.


In [ ]:
# Use an outer join, then interpolate over 
# missing values using nearest values
daily_data = 

daily_data.head(n=15)

Note : by default, DataFrame.join performs a left join ( and pd.merge an inner join ), so rows whose index doesn't appear in both datasets may be dropped. Asking for an outer join keeps every index from either dataset. You can also specify left, right, or inner joins, as well as plenty of other useful options; the pandas API for joining and merging datasets is very well developed.

https://pandas.pydata.org/pandas-docs/stable/merging.html
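
Note : one possible approach - the "nearest" interpolation method relies on scipy being installed, and depending on your pandas version you may prefer to interpolate just the numeric column from people rather than the whole dataframe - so treat this as a sketch :

daily_data = daily_data.join(people, how="outer")
daily_data = daily_data.interpolate(method="nearest")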

Let's create a column for the number of coffees consumed per person.


In [ ]:
# New column is the ratio of coffees made on a
# given day to number of members in the department
daily_data["coffees_per_person"] = 

# Let's drop those remaining NaNs while we're at it


daily_data.head(n=10)

We can now plot this column.


In [ ]:
# Plot the coffees_per_person column

Those are strange plateaus. We'll pull in another dataset, telling us when the machine was broken.


In [ ]:
# read data/coffee_status.csv
# parse_dates as kwarg; also pass index_col
machine_status = 

machine_status.head()

Note : the parse_dates keyword argument takes several values. By passing in a list of strings, we're telling pandas to attempt to parse the dates in columns with those names.

https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
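
Note : a sketch, assuming the date column in this file is literally named "date" ( check the CSV header if not ) :

machine_status = pd.read_csv("data/coffee_status.csv", parse_dates=["date"], index_col="date")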

What values are in the status column ?


In [ ]:
# .value_counts()

A quick trick to plot this as a time-series...


In [ ]:
# Make a pd.Series from the status series where things are OK
numerical_status = 

numerical_status.plot()

Note : the first line here creates a boolean pd.Series, holding the value True when machine_status.status is "OK", and False otherwise. Because it's a pd.Series, its index stays the same as that of machine_status, which was a DatetimeIndex. Then, we can plot the boolean series ( True appearing as 1, and False appearing as 0 ) and quickly scan to see that there are long stretches where the coffee machine was operational, with short bouts ( thankfully ! ) of the machine being broken.
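
Note : the boolean series described above might be built like this :

numerical_status = machine_status.status == "OK"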

Let's join the datasets on the date field !


In [ ]:
# .join() daily_data with machine_status
daily_data = 

daily_data.head()

We'll bring this numerical representation of the status column into our dataframe too.


In [ ]:
# Column depicting when the status was "OK"
# Cast the series to ints as you create a new column in the dataframe
daily_data["numerical_status"] = 

daily_data.head()
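
Note : one way to write this, casting the booleans to 0s and 1s as the column is created :

daily_data["numerical_status"] = (daily_data.status == "OK").astype(int)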

Let's plot both the coffees per person and the numerical status.


In [ ]:
# Plot both columns on the same graph, using default args

We see a strong weekday-weekend effect. Resampling weekly will smooth that out.


In [ ]:
# Resample weekly, taking the mean
# of each week to get a weekly value
weekly_data = 

# Plot the coffees per person and the machine's status
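
Note : a sketch of the resampling and the final plot, selecting the two numeric columns first so the weekly mean is well defined :

weekly_data = daily_data[["coffees_per_person", "numerical_status"]].resample("W").mean()
weekly_data.plot()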

What have we achieved ?

Cleaning

  • Cast columns to the correct dtypes
  • Dropped rows with no data
  • Truncated the time-series when things got sparse

Exploring the contributions

  • Discovered who contributed to the dataset and how much
  • Established how contributions varied by day of the week

Exploring the time-series

  • Resampled the uneven time-series to regular daily intervals
  • Interpolated over missing data using our time index
  • Discovered on what days of the week coffee was especially popular
  • Joined the dataset with others to enrich the information available
  • Derived new columns from existing ones
  • Smoothed the time-series by weekly downsampling

Insights

  • A small number of people contributed most of the data
  • Contributions were lacking at the weekends, and Mondays had the most contributions
  • Seminar Wednesdays is a strong driver of coffee consumption
  • Periods of low coffee drinking correlated strongly with periods where the machine was broken
  • A significant dip in consumption occurred in the summer months