Notebooks : https://github.com/QCaudron/pydata_pandas
You can ignore this next cell; it's only for presentations !
In [ ]:
%%HTML
<style>
.rendered_html {
font-size: 0.7em;
}
.CodeMirror-scroll {
font-size: 1.2em;
}
.rendered_html table, .rendered_html th, .rendered_html tr, .rendered_html td, .rendered_html h2, .rendered_html h4 {
font-size: 100%;
}
</style>
In [1]:
import pandas as pd
Note : This cell should run without raising a traceback. Assuming it runs, you can also try printing the value of pd.__version__ to see what version of pandas you have installed.
We'll be analysing a real-world dataset together. It's about my favourite thing in the world : coffee. This dataset was collected at the Mathematics Institute at the University of Warwick. It's a time-series dataset, describing the total number of coffees made by our espresso machine by a certain date.
Slides are available at https://github.com/QCaudron/pydata_pandas. One notebook contains solutions; beware of spoilers.
The notebooks contain notes about what we're doing; I'll skip over these during the workshop, but will try to explain things along the way.
The pandas API is enormous. The documentation is excellent; don't hesitate to look things up.
In [2]:
import pandas as pd
%matplotlib inline
Note : The second line here tells matplotlib to plot directly under the cell where any plotting code is called. pandas uses matplotlib to generate graphs, and without this, the graphs would appear outside the Jupyter notebook when you called plt.show() - but we just want them to appear without having to do this.
http://ipython.readthedocs.io/en/stable/interactive/plotting.html#id1
Let's import the coffee data from CSV.
In [ ]:
# Read data from data/coffees.csv
data = pd.read_csv("data/coffees.csv")
Note : pandas can read from many data formats : CSV, JSON, Excel, HDF5, SQL, and more.
In [ ]:
# .head()
data.head()
We have an index, and three columns : timestamp, coffees, and contributor.
Uh-oh. Why is there a string of text, testing, in our coffee numbers ? What's going on in the coffees column in the row after that ?
Note : df.head(n=10) would show the first ten rows. The default is n=5.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.head.html
In [ ]:
# .loc or .iloc
data.loc[2]
Definitely a string. We'll note this as something to fix after we finish looking around.
Note : .loc uses a label-based lookup, which means that the value you pass into the square brackets must be in the index. Another method, .iloc, is integer-location-based, so .iloc[2] would return the third row. In this case, they're the same, but had we changed our index, as we'll see later, things would work differently.
Indexing a dataframe with [] directly returns a pd.Series or pd.DataFrame by searching over columns, not rows. Indexing a pd.Series with [] is like indexing a dataframe with .iloc.
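As a quick sketch of the difference, here's a toy dataframe ( not part of the coffee dataset ) with a string index :
In [ ]:
# A small dataframe with a non-default index, purely for illustration
toy = pd.DataFrame({"x": [10, 20, 30]}, index=["a", "b", "c"])
toy.loc["b"]    # label-based : looks "b" up in the index
toy.iloc[1]     # integer-location-based : the second row
toy["x"]        # [] on a dataframe selects the *column* "x"
toy["x"][:2]    # [] on the resulting series works positionally, like .iloc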
In [ ]:
# [] indexing on a series
data.coffees[:5]
Note : here, we're indexing a series ( a pd.Series object ). From a pd.DataFrame ( here, data ), when you access a single column ( data.coffees or data["coffees"] ), the object returned is a pd.Series. From that, indexing directly with [] works in an integer-location-based manner, and like with numpy arrays, you can take slices ( [:5] ).
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html
In [ ]:
print("Dataset length :")
# len()
print()
In [ ]:
# .describe()
data.describe()
Looks like we also have some missing data - we have 671 rows, but the coffees column only has 658 entries.
Note : .describe() returns different things based on what's in the dataframe, as we'll see later. For numerical columns, it will return things like the mean, standard deviation, and percentiles. For object columns ( strings or datetimes ), it will return the most frequent entry and the first and last items. For all columns, .describe() will return the count of objects in that column ( not counting NaNs ) and the number of unique entries. You can determine what's returned using .describe()'s keyword arguments.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html
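For instance, a quick sketch of those keyword arguments ( our columns are still all strings at this point, so the percentiles example uses a toy series ) :
In [ ]:
# Choose which percentiles are reported for numerical data
pd.Series([1, 2, 3, 4]).describe(percentiles=[0.1, 0.9])
# Restrict the description to object ( string ) columns
data.describe(include=["object"])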
In [ ]:
# .isnull() and boolean indexing with []
data[data.coffees.isnull()]
Note : .isnull() returns a boolean array ( an array of Trues and Falses ) that you can then use to index the dataframe directly. Here, our boolean array tells us which entries in the coffees column are null, and we use that to index against the full dataframe - so we get back every column in the dataframe, but only those rows where coffees is null.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isnull.html
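A handy trick : summing the boolean array counts the missing entries, since True counts as 1.
In [ ]:
# The boolean array itself, and a count of the missing entries
data.coffees.isnull().head()
data.coffees.isnull().sum()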
In [ ]:
# .dtypes
data.dtypes
The contributor column makes sense as object, because we expect strings there; but surely the timestamp should be a timestamp-type, and coffees should be numerical ?
In [ ]:
# print the first element of the series with [] indexing
print(data.timestamp[0])
# print its type()
print(type(data.timestamp[0]))
It looks like the timestamp field was read from CSV as a string. That makes sense - CSV files are very basic. We'll have pandas interpret these strings as datetimes for us automatically.
Note : here's an example of using direct [] indexing on a pd.Series. We're accessing the first entry, just to see what type of object we have there.
In [ ]:
# cast the coffees column using pd.to_numeric, and coerce errors
data.coffees = pd.to_numeric(data.coffees, errors="coerce")
data.head()
In [ ]:
# Use .dropna() using a subset, and pass inplace
data.dropna(subset=["coffees"], inplace=True)
data.head()
In [ ]:
# Cast to int using .astype()
data.coffees = data.coffees.astype(int)
data.head()
In [ ]:
# pd.to_datetime()
data.timestamp = pd.to_datetime(data.timestamp)
# Confirm dtypes
data.dtypes
In [ ]:
# .describe(), passing the include kwarg to see all information
data.describe(include="all")
In [ ]:
# What do the first few rows look like ?
data.head()
Note : .describe(include="all") describes all attributes of all columns, but some don't make sense based on the column's dtype. For example, the contributor column has no first and last attributes, because those describe the first and last entries in an ordered series. That makes sense for the timestamp - those have an intuitive definition of sorting - but not so much for strings ( alphabetical order doesn't really matter when they're arbitrary strings ). Similarly, the timestamp column has no mean or other numerical traits. What would it mean to calculate the mean timestamp ?
In [ ]:
# .plot() on the coffees series
data.coffees.plot()
pandas is plotting the coffees against the index, which is just a series of integers.
Note : .plot() on a pd.Series will plot the data against the index. On a pd.DataFrame, the .plot() method allows plotting of one column against another.
By default, .plot() renders a line graph, but you can specify which type of plot you'd like - bar, line, histogram, area, scatter, etc.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.html
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html
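For example, the kind keyword argument switches the plot type; here's an illustrative sketch on our data :
In [ ]:
# A histogram of the coffee counts
data.coffees.plot(kind="hist")
# A line plot of one column against another
data.plot(x="timestamp", y="coffees", kind="line")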
In [ ]:
# .plot() on the dataframe,
# pass x kwarg to plot against the timestamp
# use a dot-dash style
data.plot(x="timestamp", style=".-")
We have some very uneven spacing in places. We might start by cutting off the last few points of this time-series, where a lot of data is missing.
In [ ]:
# .tail() with ten rows
data.tail(n=10)
After mid-March, things start getting spaced rather erratically.
In [ ]:
# Use conditional indexing against the timestamp
data = data[data.timestamp < "2013-03-01"]
data.tail()
Note : this is another example of boolean indexing. data.timestamp < "2013-03-01" is a boolean array, and can be passed into the dataframe immediately in [], much like with a np.ndarray.
In [ ]:
# Once again, plot the data against the timestamp
data.plot(x="timestamp", style=".-")
In [ ]:
# .value_counts()
data.contributor.value_counts()
Note : .value_counts() counts the unique values in a series. It's similar to doing a .groupby() followed by a .count(), as we'll see soon.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html
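As a toy illustration of what .value_counts() returns :
In [ ]:
# Tally the unique entries in a small series
pd.Series(["a", "b", "a", "a"]).value_counts()   # "a" appears three times, "b" once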
In [ ]:
# .plot() a bar chart of the value counts
data.contributor.value_counts().plot(kind="bar")
In [ ]:
# Create a series of the weekdays
# for each entry using .dt.weekday
weekdays = data.timestamp.dt.weekday
# .assign() it to our dataframe
data = data.assign(weekdays=weekdays)
data.head()
In [ ]:
weekday_names = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
weekday_dict = {key: weekday_names[key] for key in range(7)}

def day_of_week(idx):
    return weekday_dict[idx]

# Use .apply() to apply a custom function to the weekdays column
data.weekdays = data.weekdays.apply(day_of_week)
data.head()
In [ ]:
# .groupby() the weekdays and then .count() rows in each group
weekday_counts = data.groupby("weekdays").count()
# We can reorder this dataframe by our weekday_names
# list using .loc, indexing with the names
weekday_counts = weekday_counts.loc[weekday_names]
weekday_counts
Note : the first line could be replaced by weekday_counts = data.weekdays.value_counts(), with the only difference being that it would return a series to us, whereas here, we got back a dataframe.
In [ ]:
# Plot a bar chart of the coffees data in weekday_counts
# Title : "Datapoints added on each weekday"
weekday_counts.coffees.plot(kind="bar", title="Datapoints added on each weekday")
In [ ]:
# Set the dataframe's .index property
data.index = data.timestamp
# Let's drop the timestamp column, as we no longer need it
data = data.drop("timestamp", axis=1)
data.head()
In [ ]:
# pd.date_range, with daily frequency, and normalisation
# ( spanning from the first to the last entry of our index )
midnights = pd.date_range(data.index[0], data.index[-1], freq="D", normalize=True)
midnights
Note : pd.date_range creates a fixed-frequency DatetimeIndex. normalize=True ensures these datetimes are at midnight, and not at whatever time the starting point is.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html
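A quick sketch of what normalize does, on an arbitrary starting point :
In [ ]:
# Without normalize, entries fall at the given time of day
pd.date_range("2013-01-01 14:30", periods=3, freq="D")
# With normalize=True, each entry falls on midnight
pd.date_range("2013-01-01 14:30", periods=3, freq="D", normalize=True)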
In [ ]:
# Take the union of the existing and new indices
new_index = midnights.union(data.index)
new_index
Note : the union of these indices is just a new index where entries from both indices are present. It's sorted by time.
In [ ]:
# .reindex() the dataframe to get an upsampled dataframe
upsampled_data = data.reindex(new_index)
upsampled_data.head(10)
Note : .reindex() keeps any values that conform to the new index, and inserts NaNs where we have no values.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html
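A toy illustration of reindexing :
In [ ]:
# Values at labels present in both indices are kept; new labels hold NaN
s = pd.Series([1, 2], index=["a", "b"])
s.reindex(["a", "b", "c"])   # "c" appears, holding NaN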
In [ ]:
# .interpolate the upsampled_data using the time method
upsampled_data = upsampled_data.interpolate(method="time")
upsampled_data.head(10)
In [ ]:
# .resample() the upsampled dataframe,
# using .asfreq() to get only exactly daily values
daily_data = upsampled_data.resample("D").asfreq()
# Drop the contributor column, we no longer need it
daily_data = daily_data.drop("contributor", axis=1)
# Generate a column of weekday_names
# ( one way to do this, reusing our day_of_week function on the index )
daily_data["weekdays"] = [day_of_week(day) for day in daily_data.index.weekday]
daily_data.head()
In [ ]:
# Let's plot the data once more, to see how we're doing
daily_data.coffees.plot()
In [ ]:
# Use .diff() on the coffees column; follow up with .shift()
coffees_made = daily_data.coffees.diff().shift(-1)
# Add this as a column to the dataframe
daily_data["coffees_made_today"] = coffees_made
daily_data.head(n=10)
Note : we use .shift() here because if we look at the .diff() between a Monday and a Tuesday, those coffees are attributed to the Tuesday. However, what we want to say is "this many coffees were made at some point on the Monday", so we shift the entire series up one.
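A toy sketch of the pattern :
In [ ]:
# A cumulative count over Mon / Tue / Wed ( 2013-01-07 was a Monday )
s = pd.Series([0, 3, 5], index=pd.date_range("2013-01-07", periods=3))
s.diff()            # NaN, 3, 2 : increments land on the following day
s.diff().shift(-1)  # 3, 2, NaN : shifted up one, onto the day the coffees were made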
In [ ]:
# .groupby weekdays, take the mean, and
# grab the coffees_made_today column
coffees_by_day = daily_data.groupby("weekdays").mean().coffees_made_today
coffees_by_day
In [ ]:
# Sort coffees_by_day by our list of weekday names
coffees_by_day = coffees_by_day[weekday_names]
# Plot a bar chart
coffees_by_day.plot(kind="bar")
Wednesday was seminar day...
In [ ]:
# Bring in data/department_members.csv;
# have the first column be the index, and parse the dates
people = pd.read_csv("data/department_members.csv", index_col=0, parse_dates=True)
people.head()
In [ ]:
# Use an outer join, then interpolate over
# missing values using nearest values
daily_data = daily_data.join(people, how="outer").interpolate(method="nearest")
daily_data.head(n=15)
Note : by default, .join() performs a left join, and pd.merge() performs an inner join. With an inner join, if a row from one of the datasets has an index that isn't in the other dataset, that row is dropped. You can specify whether you want outer, left, or right joins, as well as plenty of other useful options. The pandas API for joining or merging datasets is very well developed.
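A toy sketch of the join types ( toy frames, not our data ) :
In [ ]:
left = pd.DataFrame({"x": [1, 2]}, index=["a", "b"])
right = pd.DataFrame({"y": [3, 4]}, index=["b", "c"])
left.join(right)               # left join by default : keeps "a" and "b"
left.join(right, how="outer")  # keeps "a", "b", and "c", with NaNs where data is missing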
In [ ]:
# New column is the ratio of coffees made on a
# given day to number of members in the department
# ( assuming the department CSV gives us a column named "members" )
daily_data["coffees_per_person"] = daily_data.coffees_made_today / daily_data.members
# Let's drop those remaining NaNs while we're at it
daily_data.dropna(inplace=True)
daily_data.head(n=10)
In [ ]:
# Plot the coffees_per_person column
daily_data.coffees_per_person.plot()
In [ ]:
# read data/coffee_status.csv
# parse_dates as kwarg; also pass index_col
# ( assuming the CSV's date column is named "date" )
machine_status = pd.read_csv("data/coffee_status.csv", parse_dates=["date"], index_col="date")
machine_status.head()
Note : the parse_dates keyword argument takes several values. By passing in a list of strings, we're telling pandas to attempt to parse the dates in columns with those names.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
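Two common forms, as a sketch ( the "date" column name here is an assumption about this CSV ) :
In [ ]:
# Parse the named columns as dates
pd.read_csv("data/coffee_status.csv", parse_dates=["date"])
# Or make the first column the index, and parse it as dates
pd.read_csv("data/coffee_status.csv", index_col=0, parse_dates=True)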
In [ ]:
# .value_counts()
machine_status.status.value_counts()
In [ ]:
# Make a pd.Series from the status series where things are OK
numerical_status = machine_status.status == "OK"
numerical_status.plot()
Note : the first line here creates a boolean pd.Series, holding the value True when machine_status.status is "OK", and False otherwise. Because it's a pd.Series, its index stays the same as that of machine_status, which was a DatetimeIndex. Then, we can plot the boolean series ( True appearing as 1, and False appearing as 0 ), and just quickly scan to see that there are long stretches where the coffee machine was operational, with short bouts ( thankfully ! ) of the machine being broken.
In [ ]:
# .join() daily_data with machine_status
daily_data = daily_data.join(machine_status)
daily_data.head()
In [ ]:
# Column depicting when the status was "OK"
# Cast the series to ints as you create a new column in the dataframe
daily_data["numerical_status"] = (daily_data.status == "OK").astype(int)
daily_data.head()
In [ ]:
# Plot both columns on the same graph, using default args
daily_data[["coffees_per_person", "numerical_status"]].plot()
In [ ]:
# Resample weekly, taking the mean
# of each week to get a weekly value
weekly_data = daily_data.resample("W").mean()
# Plot the coffees per person and the machine's status
weekly_data[["coffees_per_person", "numerical_status"]].plot()