Q: How do I make my pandas code faster with parallelism?
A: You don’t need parallelism, you can use Pandas better.
-- Matthew Rocklin
Now that we have been exposed to the basic functionality of pandas, let's explore some more advanced features that will be useful when addressing more complex data management tasks.
As most statisticians/data analysts will admit, often the lion's share of the time spent implementing an analysis is devoted to preparing the data itself, rather than to coding or running a particular model that uses the data. This is where Pandas and Python's standard library are beneficial, providing high-level, flexible, and efficient tools for manipulating your data as needed.
As you may already have noticed, there are often multiple ways to achieve the same goal in pandas. Importantly, some approaches are better than others in terms of performance, readability, and ease of use. We will cover some important ways of maximizing your pandas efficiency.
In [ ]:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
This dataset is from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia (spasmodic torticollis) from nine U.S. sites.
In [ ]:
cdystonia = pd.read_csv("../data/cdystonia.csv", index_col=None)
cdystonia.head()
This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways: with each repeated measurement in its own row, or with multiple columns representing the multiple measurements.
The stack method rotates the data frame so that columns are represented in rows:
In [ ]:
stacked = cdystonia.stack()
stacked
Have a peek at the structure of the index of the stacked data (and the data itself).
To complement this, unstack pivots from rows back to columns.
In [ ]:
stacked.unstack().head()
In [ ]:
# Write your answer here
If we want to transform this data so that repeated measurements are in columns, we can first set a hierarchical index on patient and obs (creating cdystonia2, the hierarchically-indexed version of the dataset), then unstack the twstrs measurements according to obs:
In [ ]:
cdystonia2 = cdystonia.set_index(['patient', 'obs'])
twstrs_wide = cdystonia2['twstrs'].unstack('obs')
twstrs_wide.head()
We can now merge these reshaped outcomes data with the other variables to create a wide format DataFrame that consists of one row for each patient.
In [ ]:
cdystonia_wide = (cdystonia[['patient','site','id','treat','age','sex']]
.drop_duplicates()
.merge(twstrs_wide, right_index=True, left_on='patient', how='inner'))
cdystonia_wide.head()
A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking:
In [ ]:
(cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs']
.unstack('week').head())
To convert our "wide" format back to long, we can use the melt function, appropriately parameterized. This function is useful for DataFrames where one or more columns are identifier variables (id_vars), with the remaining columns being measured variables (value_vars). The measured variables are "unpivoted" to the row axis, leaving just two non-identifier columns: a variable and its corresponding value, both of which can be renamed using optional arguments.
In [ ]:
pd.melt(cdystonia_wide, id_vars=['patient','site','id','treat','age','sex'],
        var_name='obs', value_name='twstrs').head()
This illustrates the two formats for longitudinal data: long and wide. It's typically better to store data in long format, because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected.
The preferable format for analysis depends entirely on what is planned for the data, so it is important to be able to move easily between them.
In [ ]:
(cdystonia[['patient','site','id','treat','age','sex']]
.drop_duplicates()
.merge(twstrs_wide, right_index=True, left_on='patient', how='inner')
.head())
This approach of sequentially calling methods is called method chaining. Although it creates long statements that must be formatted carefully, it allows for the writing of rather concise and readable code. Method chaining is possible because of the pandas convention of returning copies of the results of operations, rather than operating in place. This allows methods from the returned object to be called immediately, as needed, rather than assigning the output to a variable that might not otherwise be used. For example, without method chaining we would have done the following:
In [ ]:
cdystonia_subset = cdystonia[['patient','site','id','treat','age','sex']]
cdystonia_complete = cdystonia_subset.drop_duplicates()
cdystonia_merged = cdystonia_complete.merge(twstrs_wide, right_index=True, left_on='patient', how='inner')
cdystonia_merged.head()
This necessitates the creation of a slew of intermediate variables that we really don't need.
Let's transform another dataset using method chaining. The measles.csv file contains de-identified cases of measles from an outbreak in São Paulo, Brazil in 1997. The file contains rows of individual records:
In [ ]:
measles = pd.read_csv("../data/measles.csv", index_col=0, encoding='latin-1', parse_dates=['ONSET'])
measles.head()
The goal is to summarize this data by age groups and bi-weekly period, so that we can see how the outbreak affected different ages over the course of the outbreak.
The best approach is to build up the chain incrementally. We can begin by generating the age groups (using cut) and grouping by age group and the date of onset (ONSET):
In [ ]:
(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))
.groupby(['ONSET', 'AGE_GROUP']))
What we then want is the number of occurrences of each combination, which we can obtain by checking the size of each grouping:
In [ ]:
(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))
.groupby(['ONSET', 'AGE_GROUP'])
.size()).head(10)
This results in a hierarchically-indexed Series, which we can pivot into a DataFrame by simply unstacking:
In [ ]:
(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))
.groupby(['ONSET', 'AGE_GROUP'])
.size()
.unstack()).head(5)
Now, replace the missing values with zeros:
In [ ]:
(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))
.groupby(['ONSET', 'AGE_GROUP'])
.size()
.unstack()
.fillna(0)).head(5)
Finally, we want the counts in 2-week intervals, rather than as irregularly-reported days, which yields the table of interest:
In [ ]:
case_counts_2w = (measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE, [0,5,10,15,20,25,30,35,40,100], right=False))
.groupby(['ONSET', 'AGE_GROUP'])
.size()
.unstack()
.fillna(0)
.resample('2W')
.sum())
case_counts_2w
From this, it is easy to create meaningful plots and conduct analyses:
In [ ]:
case_counts_2w.plot(cmap='hot')
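For instance, summing across the age-group columns recovers the overall epidemic curve (a quick sketch using only pandas' built-in plotting):
In [ ]:
# Total cases per 2-week period, summed across all age groups
case_counts_2w.sum(axis=1).plot()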
The pivot method allows a DataFrame to be transformed easily between long and wide formats, in the same way as a pivot table is created in a spreadsheet. It takes three arguments: index, columns, and values, corresponding to the DataFrame index (the row headers), columns, and cell values, respectively.
For example, we may want the twstrs variable (the response variable) in wide format according to patient, as we saw with the unstacking method above:
In [ ]:
cdystonia.pivot(index='patient', columns='obs', values='twstrs').head()
In [ ]:
# Write your answer here
A related method, pivot_table, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function.
In [ ]:
cdystonia.pivot_table(index=['site', 'treat'], columns='week', values='twstrs',
aggfunc=max).head(20)
For a simple cross-tabulation of group frequencies, the crosstab function (a function, not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired.
In [ ]:
pd.crosstab(cdystonia.sex, cdystonia.site)
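For example, passing a list of factors for the rows yields a hierarchical cross-tabulation:
In [ ]:
# Counts broken down by sex and treatment (rows) across sites (columns)
pd.crosstab([cdystonia.sex, cdystonia.treat], cdystonia.site)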
Next, consider the problem of duplicate records. Let's load the vessel information dataset:
In [ ]:
vessels = pd.read_csv('../data/AIS/vessel_information.csv')
vessels.tail(10)
We can identify duplicated records (here, by vessel name) using the duplicated method, which returns a boolean Series:
In [ ]:
vessels.duplicated(subset='names').tail(10)
These rows can be removed using drop_duplicates:
In [ ]:
vessels.drop_duplicates(['names']).tail(10)
Now consider a value replacement task. The treat column encodes the treatment as a string, with the following values:
In [ ]:
cdystonia.treat.value_counts()
A logical way to specify these numerically is to change them to integer values, perhaps using "Placebo" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes.
In [ ]:
treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}
In [ ]:
cdystonia['treatment'] = cdystonia.treat.map(treatment_map)
cdystonia.treatment
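Note that map returns NaN for any value that does not appear in the dict. A minimal sketch, using a hypothetical unmapped dose of '15000U':
In [ ]:
# Values absent from the mapping dict become NaN ('15000U' is hypothetical)
pd.Series(['Placebo', '5000U', '15000U']).map(treatment_map)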
Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method.
An example where replacement is useful is replacing sentinel values with an appropriate numeric value prior to analysis. A large negative number is sometimes used in otherwise positive-valued data to denote missing values.
In [ ]:
scores = pd.Series([99, 76, 85, -999, 84, 95])
In such situations, we can use replace to substitute nan where the sentinel values occur.
In [ ]:
scores.replace(-999, np.nan)
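replace also accepts a list of values (or a dict), which is convenient when there are several sentinels. A minimal sketch, assuming a second hypothetical sentinel of -1000:
In [ ]:
# Replace multiple sentinel values in one call (-1000 is hypothetical)
pd.Series([99, 76, 85, -999, 84, -1000]).replace([-999, -1000], np.nan)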
We can also perform the same replacement that we used map for with replace:
In [ ]:
cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2})
For some statistical analyses (e.g. regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called design matrix. The pandas function get_dummies (indicator variables are also known as dummy variables) makes this transformation straightforward.
Let's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard. The type variable denotes the class of vessel; we can create a matrix of indicators for this. For simplicity, let's restrict the data to the 5 most common types of ships.
Create a subset of the vessels DataFrame called vessels5 that contains only the 5 most common types of vessels, based on their prevalence in the dataset.
In [ ]:
# Write your answer here
We can now apply get_dummies to the vessel type to create 5 indicator variables.
In [ ]:
pd.get_dummies(vessels5.type).head(10)
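Since a design matrix for regression typically requires only k-1 indicators for k categories, one option (a sketch, assuming a pandas version recent enough to support the drop_first argument) is to drop the baseline category:
In [ ]:
# Drop the first category to avoid perfect collinearity in a design matrix
pd.get_dummies(vessels5.type, drop_first=True).head(10)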
Pandas' cut function can be used to group continuous or countable data into bins. Discretization is generally a very bad idea for statistical analysis, so use this function responsibly!
Let's say we want to bin the ages of the cervical dystonia patients into a smaller number of groups:
In [ ]:
cdystonia.age.describe()
Let's transform these data into decades, beginning with individuals in their 20s and ending with those in their 80s:
In [ ]:
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90])[:30]
A parenthesis indicates an open endpoint, which is excluded from the interval, whereas a square bracket indicates a closed endpoint, which is included. By default the intervals are closed on the right; we can switch the closure to the left side by setting the right flag to False:
In [ ]:
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90], right=False)[:30]
Since the data are now ordinal, rather than numeric, we can give them labels:
In [ ]:
pd.cut(cdystonia.age, [20,40,60,80,90], labels=['young','middle-aged','old','really old'])[:30]
A related function, qcut, uses empirical quantiles to divide the data. If, for example, we want the quartiles -- (0-25%], (25-50%], (50-75%], (75-100%] -- we can just specify 4 intervals, which will contain approximately equal numbers of observations by default:
In [ ]:
pd.qcut(cdystonia.age, 4)[:30]
Alternatively, one can specify custom quantiles to act as cut points:
In [ ]:
quantiles = pd.qcut(vessels.max_loa, [0, 0.01, 0.05, 0.95, 0.99, 1])
quantiles[:30]
In [ ]:
# Write your answer here
One of the keys to maximizing performance in pandas is to use appropriate types for your data wherever possible. In the case of categorical data--either ordered categories, as we have just created, or unordered categories like race, gender, or country--using the categorical dtype to encode string variables as numeric quantities can dramatically improve performance and simplify subsequent analyses.
When text data are imported into a DataFrame, they are endowed with an object dtype. This results in relatively slow computation, because this dtype runs at Python speed, rather than through the Cython code that gives much of pandas its speed. We can ameliorate this by employing the categorical dtype on such data.
In [ ]:
cdystonia_cat = cdystonia.assign(treatment=cdystonia.treat.astype('category')).drop('treat', axis=1)
cdystonia_cat.dtypes
In [ ]:
cdystonia_cat.treatment.head()
In [ ]:
cdystonia_cat.treatment.cat.codes
This creates an unordered categorical variable. To create an ordinal variable, we can specify ordered=True as an argument to astype:
In [ ]:
cdystonia.treat.astype('category', ordered=True).head()
However, this is not the correct order; by default, the categories will be sorted alphabetically, which here gives exactly the reverse order that we need.
To specify an arbitrary order, we can use the set_categories method, as follows:
In [ ]:
cdystonia.treat.astype('category').cat.set_categories(['Placebo', '5000U', '10000U'], ordered=True).head()
Notice that we obtained set_categories from the cat attribute of the categorical variable. This is known as the category accessor, and it is a device for gaining access to Categorical variables' categories, analogous to the string accessor that we have seen previously on text variables.
In [ ]:
cdystonia_cat.treatment.cat
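For instance, the accessor exposes the category levels and whether they are ordered:
In [ ]:
# Inspect the levels and ordering of the categorical variable
cdystonia_cat.treatment.cat.categories, cdystonia_cat.treatment.cat.ordered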
Additional categories can be added, even if they do not currently occur in the DataFrame, as long as they are part of the set of possible categories:
In [ ]:
cdystonia_cat['treatment'] = (cdystonia.treat.astype('category').cat
.set_categories(['Placebo', '5000U', '10000U', '20000U'], ordered=True))
To complement this, we can remove categories that we do not wish to retain:
In [ ]:
cdystonia_cat.treatment.cat.remove_categories('20000U').head()
Or, even more simply:
In [ ]:
cdystonia_cat.treatment.cat.remove_unused_categories().head()
For larger datasets, there is an appreciable gain in performance, both in terms of speed and memory usage.
In [ ]:
vessels_merged = (pd.read_csv('../data/AIS/vessel_information.csv', index_col=0)
.merge(pd.read_csv('../data/AIS/transit_segments.csv'), left_index=True, right_on='mmsi'))
In [ ]:
vessels_merged['registered'] = vessels_merged.flag.astype('category')
In [ ]:
%timeit vessels_merged.groupby('flag').avg_sog.mean().sort_values()
In [ ]:
%timeit vessels_merged.groupby('registered').avg_sog.mean().sort_values()
In [ ]:
vessels_merged[['flag','registered']].memory_usage()
One of the most powerful features of pandas is its GroupBy functionality. On occasion we may want to perform operations on groups of observations within a dataset. For example:
In [ ]:
cdystonia_grouped = cdystonia.groupby(cdystonia.patient)
This grouped dataset is hard to visualize:
In [ ]:
cdystonia_grouped
However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups:
In [ ]:
for patient, group in cdystonia_grouped:
    print('patient', patient)
    print('group', group)
A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.
For example, we may want to aggregate our data with some function.
We can aggregate in pandas using the aggregate (or agg, for short) method:
In [ ]:
cdystonia_grouped.agg(np.mean).head()
Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to compute the mean of string variables, these columns are simply ignored by the method.
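We are not limited to a single function; agg also accepts a list of functions, producing one column per aggregation (a quick sketch on the response variable):
In [ ]:
# Several aggregations of twstrs at once, one column per function
cdystonia_grouped['twstrs'].agg(['mean', 'std', 'min', 'max']).head()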
Some aggregation functions are so common that pandas has a convenience method for them, such as mean:
In [ ]:
cdystonia_grouped.mean().head()
The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation:
In [ ]:
cdystonia_grouped.mean().add_suffix('_mean').head()
In [ ]:
# Write your answer here
If we wish, we can easily aggregate according to multiple keys:
In [ ]:
cdystonia.groupby(['week','site']).mean().head()
Alternately, we can transform the data, using a function of our choice, with the transform method:
In [ ]:
normalize = lambda x: (x - x.mean())/x.std()
cdystonia_grouped.transform(normalize).head()
It is easy to do column selection within groupby operations, if we are only interested in split-apply-combine operations on a subset of columns:
In [ ]:
cdystonia_grouped['twstrs'].mean().head()
Or, as a DataFrame:
In [ ]:
cdystonia_grouped[['twstrs']].mean().head()
If you simply want to divide your DataFrame into chunks for later use, it's easy to convert them into a dict so that they can be indexed out as needed:
In [ ]:
chunks = dict(list(cdystonia_grouped))
In [ ]:
chunks[4]
By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by dtype this way:
In [ ]:
dict(list(cdystonia.groupby(cdystonia.dtypes, axis=1)))
It's also possible to group by one or more levels of a hierarchical index. Recall cdystonia2, which we created above with a hierarchical index:
In [ ]:
cdystonia2.head(10)
The level argument specifies which level of the index to use for grouping.
In [ ]:
cdystonia2.groupby(level='obs', axis=0)['twstrs'].mean()
The function below takes a DataFrame and a column name, sorts by that column, and takes the n largest values of the column. We can use this with apply to return the largest values from every group in a DataFrame in a single call.
In [ ]:
def top(df, column, n=5):
    # Return the n rows with the largest values in the given column
    return df.sort_values(by=column, ascending=False)[:n]
To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information above to yield vessels_merged). Say we wanted to return the 3 longest segments travelled by each ship:
In [ ]:
top3segments = vessels_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]
top3segments.head(15)
Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument.
As a final exercise, consider the Titanic passenger dataset, described below:
In [ ]:
from IPython.core.display import HTML
HTML(filename='../data/titanic.html')
Women and children first?
Use the groupby method to calculate the proportion of passengers that survived, by sex.
In [ ]:
# Write your answer here
References:
Python for Data Analysis, Wes McKinney (O'Reilly).