Python is a terrific platform for statistical data analysis, partly because of the features of the language itself, but also because of a rich suite of third-party packages that provide robust and flexible data structures, efficient implementations of mathematical and statistical functions, and facilities for generating publication-quality graphics. Pandas sits at the top of this "scientific stack", because it allows data to be imported, manipulated and exported so easily. In contrast, NumPy supports the bottom of the stack with fundamental infrastructure for array operations, mathematical calculations, and random number generation.
We will cover both of these in some detail before getting down to the business of analyzing data.
In [1]:
%matplotlib inline
import pandas as pd
import numpy as np
# Set some Pandas options
pd.set_option('html', False)
pd.set_option('max_columns', 30)
pd.set_option('max_rows', 20)
The most fundamental third-party package for scientific computing in Python is NumPy, which provides multidimensional array data types, along with associated functions and methods to manipulate them. While Python comes with several container types (list, tuple, dict), NumPy's arrays are implemented closer to the hardware, and are therefore more efficient than the built-in types.
The main object provided by numpy is a powerful array. We'll start by exploring how the numpy array differs from Python lists. We start by creating a simple list and an array with the same contents of the list:
In [2]:
a_list = range(1000)
an_array = np.arange(1000)
This is what the array looks like:
In [3]:
an_array[:10]
Out[3]:
In [4]:
type(an_array)
Out[4]:
In [5]:
timeit [i**2 for i in a_list]
In [6]:
timeit an_array**2
Elements of a one-dimensional array are indexed with square brackets, as with lists:
In [7]:
an_array[5:10]
Out[7]:
The first difference to note between lists and arrays is that arrays are homogeneous; i.e. all elements of an array must be of the same type. In contrast, lists can contain elements of arbitrary type. For example, we can change the last element in our list above to be a string:
In [8]:
a_list[0] = 'a string inside a list'
a_list[:10]
Out[8]:
In [9]:
an_array[0] = 'a string inside an array'
The information about the type of an array is contained in its dtype attribute:
In [ ]:
an_array.dtype
Once an array has been created, its dtype is fixed and it can only store elements of the same type. For this example where the dtype is integer, if we store a floating point number it will be automatically converted into an integer:
In [10]:
an_array[0] = 1.234
an_array[:10]
Out[10]:
The linspace and logspace functions create linearly and logarithmically spaced grids, respectively, with a fixed number of points that include both ends of the specified interval:
In [11]:
np.linspace(0, 1, num=5)
Out[11]:
In [12]:
np.logspace(1, 4, num=4)
Out[12]:
It is often useful to create arrays with random numbers that follow a specific distribution. The np.random
module contains a number of functions that can be used to this effect, for example this will produce an array of 5 random samples taken from a standard normal distribution (0 mean and variance 1):
In [13]:
np.random.randn(5)
Out[13]:
whereas the following will give 10 samples, this time drawn from a normal distribution with a mean of 10 and a standard deviation of 3:
In [14]:
norm_10 = np.random.normal(loc=10, scale=3, size=10)
norm_10
Out[14]:
Above we saw how to index arrays with single numbers and slices, just like Python lists. But arrays allow for a more sophisticated kind of indexing which is very powerful: you can index an array with another array, and in particular with an array of boolean values. This is particularly useful for extracting information from an array that matches a certain condition.
Consider for example that in the array norm_10 we want to replace all values above 9 with the value 0. We can do so by first finding the mask that indicates where this condition is True or False:
In [15]:
mask = norm_10 > 9
mask
Out[15]:
Now that we have this mask, we can use it to either read those values or to reset them to 0:
In [16]:
norm_10[mask]
Out[16]:
In [17]:
norm_10[mask] = 0
print norm_10
In [18]:
norm_10[np.nonzero(norm_10)]
Out[18]:
NumPy can create arrays of arbitrary dimensions, and all the methods illustrated in the previous section work with more than one dimension. For example, a list of lists can be used to initialize a two-dimensional array:
In [19]:
array_2d = np.array([[1, 2], [3, 4]])
array_2d.shape
Out[19]:
With two-dimensional arrays we start seeing the power of NumPy: while a nested list can only be indexed by repeatedly applying the [ ] operator, multidimensional arrays support a much more natural indexing syntax, with a single [ ] and a set of indices separated by commas:
In [20]:
array_2d[0,1]
Out[20]:
The shape of an array can be changed at any time, as long as the total number of elements is unchanged. For example, if we want a 2x4 array with numbers increasing from 0, the easiest way to create it is via the numpy array's reshape
method.
In [21]:
md_array = np.arange(8).reshape(2,4)
print md_array
With multidimensional arrays, you can also use slices, and you can mix and match slices and single indices in the different dimensions (using the same array as above):
In [22]:
md_array[1, 2:4]
Out[22]:
In [23]:
md_array[:, 2]
Out[23]:
If you only provide one index, then you will get the corresponding row.
In [24]:
md_array[1]
Out[24]:
Arrays have a slew of useful attributes and methods:
In [25]:
md_array.dtype
Out[25]:
In [26]:
md_array.shape
Out[26]:
In [27]:
md_array.ndim
Out[27]:
In [28]:
md_array.nbytes
Out[28]:
In [29]:
md_array.min(), md_array.max()
Out[29]:
In [30]:
md_array.sum(), md_array.prod()
Out[30]:
In [31]:
md_array.mean(), md_array.std()
Out[31]:
Arrays may be summarized along specified axes:
In [32]:
md_array.sum(axis=0)
Out[32]:
In [33]:
md_array.sum(axis=1)
Out[33]:
Or, more generally:
In [34]:
random_array = np.random.random((3,2,3,4))
random_array
Out[34]:
In [35]:
random_array.sum(2).shape
Out[35]:
NumPy arrays support all standard arithmetic operations, which are typically applied element-wise.
In [36]:
first_array = np.random.randn(4)
second_array = np.random.randn(4)
first_array, second_array
Out[36]:
In [37]:
first_array * second_array
Out[37]:
When operating on scalars (zero-dimensional objects), broadcasting is used to apply the operation to each element:
In [38]:
first_array * 5
Out[38]:
Broadcasting also works for multidimensional arrays:
In [39]:
md_array
Out[39]:
In [40]:
md_array * first_array
Out[40]:
In the above, NumPy compares the trailing dimensions of each array and prepends dimensions of length 1 as needed before multiplying. Hence, the following will not work:
In [41]:
md_array * np.array([-1, 2.3])
This can be made to work either by "injecting" an additional axis, or by transposing the first array:
In [42]:
md_array * np.array([-1, 2.3])[:, np.newaxis]
Out[42]:
In [43]:
md_array.T * np.array([-1, 2.3])
Out[43]:
Some may have expected the multiply operator to perform matrix multiplication on two array arguments, rather than element-wise multiplication. NumPy includes a linear algebra library, and matrix multiplication can be carried out using the dot (i.e. dot product) function or method:
In [44]:
md_array.dot(first_array)
Out[44]:
In [45]:
np.dot(md_array, first_array)
Out[45]:
pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with relational or labeled data both easy and intuitive. It is a fundamental high-level building block for doing practical, real-world data analysis in Python.
pandas is well suited to tabular data with heterogeneously-typed columns, ordered and unordered time series, and arbitrary matrix or observational data with row and column labels. Key features include easy handling of missing data, automatic label-based alignment, flexible reshaping and pivoting, intuitive merging and joining, and a rich set of input/output tools.
In [46]:
from IPython.core.display import HTML
HTML("<iframe src=http://pandas.pydata.org width=800 height=350></iframe>")
Out[46]:
The Series is the basic one-dimensional pandas data structure: an array of values paired with an index of data labels.
In [47]:
counts = pd.Series([632, 1638, 569, 115])
counts
Out[47]:
If an index is not specified, a default sequence of integers is assigned as the index. A NumPy array comprises the values of the Series, while the index is a pandas Index object.
In [48]:
counts.values
Out[48]:
In [49]:
counts.index
Out[49]:
We can assign meaningful labels to the index, if they are available:
In [50]:
bacteria = pd.Series([632, 1638, 569, 115],
index=['Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes'])
bacteria
Out[50]:
These labels can be used to refer to the values in the Series.
In [51]:
bacteria['Actinobacteria']
Out[51]:
In [52]:
bacteria[[name.endswith('bacteria') for name in bacteria.index]]
Out[52]:
In [53]:
[name.endswith('bacteria') for name in bacteria.index]
Out[53]:
Notice that the indexing operation preserved the association between the values and the corresponding indices.
We can still use positional indexing if we wish.
In [54]:
bacteria[0]
Out[54]:
We can give both the array of values and the index meaningful labels themselves:
In [55]:
bacteria.name = 'counts'
bacteria.index.name = 'phylum'
bacteria
Out[55]:
NumPy's math functions and other operations can be applied to Series without losing the data structure.
In [56]:
np.log(bacteria)
Out[56]:
We can also filter according to the values in the Series:
In [57]:
bacteria[bacteria>1000]
Out[57]:
A Series can be thought of as an ordered key-value store. In fact, we can create one from a dict:
In [58]:
bacteria_dict = {'Firmicutes': 632, 'Proteobacteria': 1638, 'Actinobacteria': 569, 'Bacteroidetes': 115}
pd.Series(bacteria_dict)
Out[58]:
Notice that the Series is created in key-sorted order.
If we pass a custom index to Series, it will select the corresponding values from the dict, and treat indices without corresponding values as missing. Pandas uses the NaN (not a number) value for missing data.
In [59]:
bacteria2 = pd.Series(bacteria_dict, index=['Cyanobacteria','Firmicutes','Proteobacteria','Actinobacteria'])
bacteria2
Out[59]:
In [60]:
bacteria2.isnull()
Out[60]:
Critically, the labels are used to align data when used in operations with other Series objects:
In [61]:
bacteria + bacteria2
Out[61]:
Contrast this with NumPy arrays, where arrays of the same length combine values element-wise by position; adding Series combines values with the same label in the resulting series. Notice also that the missing values were propagated by addition.
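As a minimal sketch of this contrast (using the bacteria Series defined above), NumPy adds purely by position, while pandas first aligns on labels, so even a reversed copy lines up correctly:
# NumPy: element-wise combination by position only
np.array([632, 1638, 569, 115]) + np.array([100, 200, 300, 400])
# pandas: values are matched by index label, so this is simply 2 * bacteria
bacteria + bacteria[::-1]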
Inevitably, we want to be able to store, view and manipulate data that is multivariate, where for every index there are multiple fields or columns of data, often of varying data type.
A DataFrame is a tabular data structure, encapsulating multiple series, like columns in a spreadsheet. Data are stored internally as a 2-dimensional object, but the DataFrame allows us to represent and manipulate higher-dimensional data.
In [62]:
data = pd.DataFrame({'value':[632, 1638, 569, 115, 433, 1130, 754, 555],
'patient':[1, 1, 1, 1, 2, 2, 2, 2],
'phylum':['Firmicutes', 'Proteobacteria', 'Actinobacteria',
'Bacteroidetes', 'Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes']})
data
Out[62]:
Notice the DataFrame
is sorted by column name. We can change the order by indexing them in the order we desire:
In [63]:
data[['phylum','value','patient']]
Out[63]:
A DataFrame
has a second index, representing the columns:
In [64]:
data.columns
Out[64]:
If we wish to access columns, we can do so either by dict-like indexing or by attribute:
In [65]:
data['value']
Out[65]:
In [66]:
data.value
Out[66]:
In [67]:
type(data.value)
Out[67]:
In [68]:
type(data[['value']])
Out[68]:
Notice this is different from Series, where dict-like indexing retrieved a particular element (row). If we want to access a row in a DataFrame, we index its ix attribute.
In [69]:
data.ix[3]
Out[69]:
It is important to note that the Series returned when a DataFrame is indexed is merely a view on the DataFrame, and not a copy of the data itself. So you must be cautious when manipulating this data:
In [70]:
vals = data.value
vals
Out[70]:
In [71]:
vals[5] = 0
vals
Out[71]:
In [72]:
data
Out[72]:
In [73]:
vals = data.value.copy()
vals[5] = 1000
data
Out[73]:
We can create or modify columns by assignment:
In [74]:
data.value[3] = 14
data
Out[74]:
In [75]:
data['year'] = 2013
data
Out[75]:
But note, we cannot use the attribute indexing method to add a new column:
In [76]:
data.treatment = 1
data
Out[76]:
In [77]:
data.treatment
Out[77]:
Specifying a Series as a new column causes its values to be added according to the DataFrame's index:
In [78]:
treatment = pd.Series([0]*4 + [1]*2)
treatment
Out[78]:
In [79]:
data['treatment'] = treatment
data
Out[79]:
Other Python data structures (ones without an index) need to be the same length as the DataFrame:
In [80]:
month = ['Jan', 'Feb', 'Mar', 'Apr']
data['month'] = month
We can extract the underlying data as a simple ndarray
by accessing the values
attribute:
In [81]:
data.values
Out[81]:
Notice that because of the mix of string and integer (and NaN) values, the dtype of the array is object. The dtype will automatically be chosen to be as general as needed to accommodate all the columns.
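As a quick check (a sketch, assuming value and patient remain all-numeric columns as above), selecting just numeric columns yields a numeric dtype rather than object:
# Numeric-only selection avoids the general object dtype
data[['value', 'patient']].values.dtype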
Pandas uses a custom data structure to represent the indices of Series and DataFrames.
In [82]:
data.index
Out[82]:
Index objects are immutable:
In [83]:
data.index[0] = 15
This is so that Index objects can be shared between data structures without fear that they will be changed.
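As a small illustration of this sharing (a sketch), a Series built directly from an existing Index refers to the very same object rather than a copy:
# The new Series reuses data's Index object; the identity check should return True
shared = pd.Series(np.zeros(len(data)), index=data.index)
shared.index is data.index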
In [84]:
bacteria2.index = bacteria.index
In [85]:
bacteria2
Out[85]:
A key, but often under-appreciated, step in data analysis is importing the data that we wish to analyze. Though it is easy to load basic data structures into Python using built-in tools or those provided by packages like NumPy, it is non-trivial to import structured data well, and to easily convert this input into a robust data structure:
genes = np.loadtxt("genes.csv", delimiter=",", dtype=[('gene', '|S10'), ('value', '<f4')])
Pandas provides a convenient set of functions for importing tabular data in a number of formats directly into a DataFrame
object. These functions include a slew of options to perform type inference, indexing, parsing, iterating and cleaning automatically as data are imported.
Let's start with some more bacteria data, stored in csv format.
In [86]:
!cat data/microbiome.csv
This table can be read into a DataFrame using read_csv:
In [87]:
mb = pd.read_csv("data/microbiome.csv")
mb
Out[87]:
Notice that read_csv automatically considered the first row in the file to be a header row.
We can override the default behavior by customizing some of the arguments, such as header, names or index_col.
In [88]:
pd.read_csv("data/microbiome.csv", header=None).head()
Out[88]:
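We could also supply our own column labels with names (a sketch; the lowercase names are just illustrative, and we assume the file's four columns are Taxon, Patient, Tissue and Stool):
# Skip the file's header row and use our own column names
pd.read_csv("data/microbiome.csv", skiprows=1,
            names=['taxon', 'patient', 'tissue', 'stool']).head()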
read_csv is just a convenience function for read_table, since csv is such a common format:
In [89]:
mb = pd.read_table("data/microbiome.csv", sep=',')
The sep argument can be customized as needed to accommodate arbitrary separators. For example, we can use a regular expression to define a variable amount of whitespace, which is unfortunately very common in some data formats:
sep='\s+'
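A hedged sketch of how this might look for a hypothetical whitespace-delimited file (the filename is just a placeholder):
# Columns separated by a variable number of spaces or tabs
pd.read_table("data/some_whitespace_file.txt", sep='\s+')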
For a more useful index, we can specify the first two columns, which together provide a unique index to the data.
In [90]:
mb = pd.read_csv("data/microbiome.csv", index_col=['Taxon','Patient'])
mb.head()
Out[90]:
This is called a hierarchical index, which we will revisit later in the tutorial.
If we have sections of data that we do not wish to import (for example, known bad data), we can populate the skiprows
argument:
In [91]:
pd.read_csv("data/microbiome.csv", skiprows=[3,4,6]).head()
Out[91]:
Conversely, if we only want to import a small number of rows from, say, a very large data file, we can use nrows:
In [92]:
pd.read_csv("data/microbiome.csv", nrows=4)
Out[92]:
Alternately, if we want to process our data in reasonable chunks, the chunksize
argument will return an iterable object that can be employed in a data processing loop. For example, our microbiome data are organized by bacterial phylum, with 15 patients represented in each:
In [93]:
data_chunks = pd.read_csv("data/microbiome.csv", chunksize=15)
mean_tissue = {chunk.Taxon[0]:chunk.Tissue.mean() for chunk in data_chunks}
mean_tissue
Out[93]:
Most real-world data is incomplete, with values missing due to incomplete observation, data entry or transcription error, or other reasons. Pandas will automatically recognize and parse common missing data indicators, including NA and NULL.
In [94]:
!cat data/microbiome_missing.csv
In [95]:
pd.read_csv("data/microbiome_missing.csv").head(20)
Out[95]:
Above, Pandas recognized NA
and an empty field as missing data.
In [96]:
pd.isnull(pd.read_csv("data/microbiome_missing.csv")).head(20)
Out[96]:
Unfortunately, there will sometimes be inconsistency with the conventions for missing data. In this example, there is a question mark "?" and a large negative number where there should have been a positive integer. We can specify additional symbols with the na_values
argument:
In [97]:
pd.read_csv("data/microbiome_missing.csv", na_values=['?', -99999]).head(20)
Out[97]:
These can be specified on a column-wise basis using an appropriate dict as the argument for na_values.
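For example (a sketch, assuming the Tissue and Stool count columns as in the complete dataset), each sentinel can be restricted to the column where it occurs:
# Treat '?' as missing only in Tissue, and -99999 only in Stool
pd.read_csv("data/microbiome_missing.csv",
            na_values={'Tissue': ['?'], 'Stool': [-99999]}).head(20)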
There are several other data formats that can be imported into Python and converted into DataFrames, with the help of built-in or third-party libraries. These include Excel, JSON, XML, HDF5, relational and non-relational databases, and various web APIs. These are beyond the scope of this tutorial, but are covered in Python for Data Analysis.
This section introduces the new user to the key functionality of Pandas that is required to use the software effectively.
For some variety, we will leave our digestive tract bacteria behind and employ some baseball data.
In [98]:
baseball = pd.read_csv("data/baseball.csv", index_col='id')
baseball.head()
Out[98]:
Notice that we specified the id column as the index, since it appears to be a unique identifier. We could try to create a unique index ourselves by combining player and year:
In [99]:
player_id = baseball.player + baseball.year.astype(str)
baseball_newind = baseball.copy()
baseball_newind.index = player_id
baseball_newind.head()
Out[99]:
In [100]:
baseball_newind.index.is_unique
Out[100]:
So, indices need not be unique. Our choice is not unique because some players change teams within years. The most important consequence of a non-unique index is that indexing by label will return multiple values for some labels:
In [101]:
baseball_newind.ix['wickmbo012007']
Out[101]:
We will learn more about indexing below. We can also manipulate the index itself: the reindex method conforms the data to a new index, here simply reversing the row order:
In [102]:
baseball.reindex(baseball.index[::-1]).head()
Out[102]:
Notice that the id index is not sequential. Say we wanted to populate the table with every id value. We could specify an index that is a sequence from the first to the last id number in the database, and Pandas would fill in the missing data with NaN values:
In [103]:
id_range = range(baseball.index.values.min(), baseball.index.values.max())
baseball.reindex(id_range).head()
Out[103]:
Missing values can be filled as desired, either with selected values, or by rule:
In [104]:
baseball.reindex(id_range, method='ffill', columns=['player','year']).head()
Out[104]:
In [105]:
baseball.reindex(id_range, fill_value='mr.nobody', columns=['player']).head()
Out[105]:
Keep in mind that reindex
does not work if we pass a non-unique index series.
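For instance, the following would raise an exception, since baseball_newind contains duplicate index labels (shown commented out as a sketch):
# baseball_newind.reindex(baseball_newind.index[::-1])  # fails: index labels are not unique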
We can remove rows or columns via the drop
method:
In [106]:
baseball.shape
Out[106]:
In [107]:
baseball.drop([89525, 89526])
Out[107]:
In [108]:
baseball.drop(['ibb','hbp'], axis=1)
Out[108]:
In [109]:
# Sample Series object
hits = baseball_newind.h
hits
Out[109]:
In [110]:
# Numpy-style indexing
hits[:3]
Out[110]:
In [111]:
# Indexing by label
hits[['womacto012006','schilcu012006']]
Out[111]:
We can also slice with data labels, since they have an intrinsic order within the Index:
In [112]:
hits.ix['womacto012006':'gonzalu012006']
Out[112]:
In [113]:
hits['womacto012006':'gonzalu012006'] = 5
hits
Out[113]:
In a DataFrame
we can slice along either or both axes:
In [114]:
baseball_newind[['h','ab']]
Out[114]:
The indexing field ix
allows us to select subsets of rows and columns in an intuitive way:
In [115]:
baseball_newind.ix['gonzalu012006', ['h','X2b', 'X3b', 'hr']]
Out[115]:
In [116]:
baseball_newind.ix[['gonzalu012006','finlest012006'], 5:8]
Out[116]:
In [117]:
baseball_newind.ix[:'myersmi012006', 'hr']
Out[117]:
In [118]:
hr2006 = baseball[baseball.year==2006].xs('hr', axis=1)
hr2006.index = baseball.player[baseball.year==2006]
hr2007 = baseball[baseball.year==2007].xs('hr', axis=1)
hr2007.index = baseball.player[baseball.year==2007]
In [119]:
hr2006 = pd.Series(baseball.hr[baseball.year==2006].values, index=baseball.player[baseball.year==2006])
hr2007 = pd.Series(baseball.hr[baseball.year==2007].values, index=baseball.player[baseball.year==2007])
In [120]:
hr_total = hr2006 + hr2007
hr_total
Out[120]:
Pandas' data alignment places NaN
values for labels that do not overlap in the two Series. In fact, there are only 6 players that occur in both years.
In [121]:
hr_total[hr_total.notnull()]
Out[121]:
While we do want the operation to honor the data labels in this way, we probably do not want the missing values to be filled with NaN. We can use the add method to calculate player home run totals, using the fill_value argument to insert a zero for home runs where labels do not overlap:
In [122]:
hr2007.add(hr2006, fill_value=0)
Out[122]:
Operations can also be broadcast between rows or columns.
For example, if we subtract the maximum number of home runs hit from the hr
column, we get how many fewer than the maximum were hit by each player:
In [123]:
baseball.hr - baseball.hr.max()
Out[123]:
Or, looking at things row-wise, we can see how a particular player compares with the rest of the group with respect to important statistics:
In [124]:
baseball.ix[89521]["player"]
Out[124]:
In [125]:
stats = baseball[['h','X2b', 'X3b', 'hr']]
diff = stats - stats.xs(89521)
diff[:10]
Out[125]:
We can also apply functions to each column or row of a DataFrame:
In [126]:
stats.apply(np.median)
Out[126]:
In [127]:
stat_range = lambda x: x.max() - x.min()
stats.apply(stat_range)
Out[127]:
Let's use apply to calculate a meaningful baseball statistic, slugging percentage:
$$SLG = \frac{1B + (2 \times 2B) + (3 \times 3B) + (4 \times HR)}{AB}$$
And just for fun, we will format the resulting estimate.
In [128]:
slg = lambda x: (x['h']-x['X2b']-x['X3b']-x['hr'] + 2*x['X2b'] + 3*x['X3b'] + 4*x['hr'])/(x['ab']+1e-6)
baseball.apply(slg, axis=1).apply(lambda x: '%.3f' % x)
Out[128]:
In [129]:
baseball_newind.sort_index().head()
Out[129]:
In [130]:
baseball_newind.sort_index(ascending=False).head()
Out[130]:
In [131]:
baseball_newind.sort_index(axis=1).head()
Out[131]:
We can also use order
to sort a Series
by value, rather than by label.
In [132]:
baseball.hr.order(ascending=False)
Out[132]:
For a DataFrame, we can sort according to the values of one or more columns using the by argument of sort_index:
In [133]:
baseball[['player','sb','cs']].sort_index(ascending=[False,True], by=['sb', 'cs']).head(10)
Out[133]:
Ranking does not re-arrange data, but instead returns an index that ranks each value relative to others in the Series.
In [134]:
baseball.hr.rank()
Out[134]:
Ties are assigned the mean value of the tied ranks, which may result in decimal values.
In [135]:
pd.Series([100,100]).rank()
Out[135]:
Alternatively, you can break ties via one of several methods, such as by the order in which they occur in the dataset:
In [136]:
baseball.hr.rank(method='first')
Out[136]:
Calling the DataFrame's rank method results in the ranks of all columns:
In [137]:
baseball.rank(ascending=False).head()
Out[137]:
In [138]:
baseball[['r','h','hr']].rank(ascending=False).head()
Out[138]:
The occurrence of missing data is so prevalent that it pays to use tools like Pandas, which seamlessly integrates missing data handling so that it can be dealt with easily, and in the manner required by the analysis at hand.
Missing data are represented in Series and DataFrame objects by the NaN floating point value. However, None is also treated as missing, since it is commonly used as such in other contexts (e.g. NumPy).
In [139]:
foo = pd.Series([np.nan, -3, None, 'foobar'])
foo
Out[139]:
In [140]:
foo.isnull()
Out[140]:
Missing values may be dropped or indexed out:
In [141]:
bacteria2
Out[141]:
In [142]:
bacteria2.dropna()
Out[142]:
In [143]:
bacteria2[bacteria2.notnull()]
Out[143]:
By default, dropna
drops entire rows in which one or more values are missing.
In [144]:
data
Out[144]:
In [145]:
data.dropna()
Out[145]:
This can be overridden by passing the how='all'
argument, which only drops a row when every field is a missing value.
In [146]:
data.dropna(how='all')
Out[146]:
This can be customized further by specifying how many values need to be present before a row is dropped via the thresh
argument.
In [147]:
data.ix[7, 'year'] = float('Nan')
data
In [148]:
data.dropna(thresh=4)
Out[148]:
This is typically used in time series applications, where there are repeated measurements that are incomplete for some subjects.
If we want to drop missing values column-wise instead of row-wise, we use axis=1.
In [149]:
data.dropna(axis=1)
Out[149]:
Rather than omitting missing data from an analysis, in some cases it may be suitable to fill the missing value in, either with a default value (such as zero) or a value that is either imputed or carried forward/backward from similar data points. We can do this programmatically in Pandas with the fillna method.
In [150]:
bacteria2.fillna(0)
Out[150]:
In [151]:
data.fillna({'year': 2013, 'treatment':2})
Out[151]:
Notice that fillna by default returns a new object with the desired filling behavior, rather than changing the Series or DataFrame in place (in general, we like to do this, by the way!).
In [152]:
data
Out[152]:
We can alter values in-place using inplace=True.
In [153]:
_ = data.year.fillna(2013, inplace=True)
data
Out[153]:
Missing values can also be interpolated, using any one of a variety of methods:
In [154]:
bacteria2.fillna(method='bfill')
Out[154]:
In [155]:
bacteria2.fillna(bacteria2.mean())
Out[155]:
In this section, we will manipulate data collected from ocean-going vessels on the eastern seaboard. Vessel operations are monitored using the Automatic Identification System (AIS), a safety at sea navigation technology which vessels are required to maintain and that uses transponders to transmit very high frequency (VHF) radio signals containing static information including ship name, call sign, and country of origin, as well as dynamic information unique to a particular voyage such as vessel location, heading, and speed.
The International Maritime Organization’s (IMO) International Convention for the Safety of Life at Sea requires functioning AIS capabilities on all vessels 300 gross tons or greater and the US Coast Guard requires AIS on nearly all vessels sailing in U.S. waters. The Coast Guard has established a national network of AIS receivers that provides coverage of nearly all U.S. waters. AIS signals are transmitted several times each minute and the network is capable of handling thousands of reports per minute and updates as often as every two seconds. Therefore, a typical voyage in our study might include the transmission of hundreds or thousands of AIS encoded signals. This provides a rich source of spatial data that includes both spatial and temporal information.
For our purposes, we will use summarized data that describes the transit of a given vessel through a particular administrative area. The data includes the start and end time of the transit segment, as well as information about the speed of the vessel, how far it travelled, etc.
In [156]:
segments = pd.read_csv("data/AIS/transit_segments.csv")
segments.head()
Out[156]:
In addition to the behavior of each vessel, we may want a little more information regarding the vessels themselves. In the data/AIS folder there is a second table that contains information about each of the ships that traveled the segments in the segments table.
In [157]:
vessels = pd.read_csv("data/AIS/vessel_information.csv", index_col='mmsi')
vessels.head()
Out[157]:
In [158]:
vessels.type.value_counts()
Out[158]:
The challenge, however, is that several ships have travelled multiple segments, so there is not a one-to-one relationship between the rows of the two tables. The table of vessel information has a one-to-many relationship with the segments.
In Pandas, we can combine tables according to the value of one or more keys that are used to identify rows, much like an index. Using a trivial example:
In [159]:
df1 = pd.DataFrame(dict(id=range(4), age=np.random.randint(18, 31, size=4)))
df2 = pd.DataFrame(dict(id=range(3)+range(3), score=np.random.random(size=6)))
df1, df2
Out[159]:
In [160]:
pd.merge(df1, df2)
Out[160]:
Notice that without any information about which column to use as a key, Pandas did the right thing and used the id column in both tables. Unless specified otherwise, merge will use any common column names as keys for merging the tables.
Notice also that id=3 from df1 was omitted from the merged table. This is because, by default, merge performs an inner join on the tables, meaning that the merged table represents an intersection of the two tables.
In [161]:
pd.merge(df1, df2, how='outer')
Out[161]:
The outer join above yields the union of the two tables, so all rows are represented, with missing values inserted as appropriate. One can also perform left and right joins to include all rows of the left or right table (i.e. the first or second argument to merge), but not necessarily the other.
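Hedged sketches of the other join types on the same toy tables:
# Keep every row of df1 (the left table), filling unmatched scores with NaN
pd.merge(df1, df2, how='left')
# Keep every row of df2 (the right table)
pd.merge(df1, df2, how='right')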
Looking at the two datasets that we wish to merge:
In [162]:
segments.head(1)
Out[162]:
In [163]:
vessels.head(1)
Out[163]:
we see that there is an mmsi value (a vessel identifier) in each table, but it is used as an index for the vessels table. In this case, we have to specify that we are joining on the index for that table, and on the mmsi column for the other.
In [164]:
segments_merged = pd.merge(vessels, segments, left_index=True, right_on='mmsi')
In [165]:
segments_merged.head()
Out[165]:
In this case, the default inner join is suitable; we are not interested in observations from either table that do not have corresponding entries in the other.
Notice that the mmsi field that was an index on the vessels table is no longer an index on the merged table.
Here, we used the merge
function to perform the merge; we could also have used the merge
method for either of the tables:
In [166]:
vessels.merge(segments, left_index=True, right_on='mmsi').head()
Out[166]:
Occasionally, there will be fields with the same name in both tables that we do not wish to use to join the tables; they may contain different information, despite having the same name. In this case, Pandas will by default append the suffixes _x and _y to the columns to uniquely identify them.
In [167]:
segments['type'] = 'foo'
pd.merge(vessels, segments, left_index=True, right_on='mmsi').head()
Out[167]:
This behavior can be overridden by specifying a suffixes argument, containing a list of the suffixes to be used for the columns of the left and right tables, respectively.
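For instance, a sketch using the overlapping type column created above:
# Explicitly label the overlapping columns from the left (vessels) and right (segments) tables
pd.merge(vessels, segments, left_index=True, right_on='mmsi',
         suffixes=('_vessel', '_segment')).head()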
A related operation is concatenation, which combines data structures end-to-end along an axis. In NumPy, this is done with concatenate:
In [168]:
np.concatenate([np.random.random(5), np.random.random(5)])
Out[168]:
In [169]:
np.r_[np.random.random(5), np.random.random(5)]
Out[169]:
In [170]:
np.c_[np.random.random(5), np.random.random(5)]
Out[170]:
This operation is also called binding or stacking.
With Pandas' indexed data structures, there are additional considerations, as the overlap in index values between two data structures affects how they are concatenated.
Let's import two microbiome datasets, each consisting of counts of microorganisms from a particular patient. We will use the first column of each dataset as the index.
In [171]:
mb1 = pd.read_excel('data/microbiome/MID1.xls', 'Sheet 1', index_col=0, header=None)
mb2 = pd.read_excel('data/microbiome/MID2.xls', 'Sheet 1', index_col=0, header=None)
mb1.shape, mb2.shape
Out[171]:
In [172]:
mb1.head()
Out[172]:
Let's give the index and columns meaningful labels:
In [173]:
mb1.columns = mb2.columns = ['Count']
In [174]:
mb1.index.name = mb2.index.name = 'Taxon'
In [175]:
mb1.head()
Out[175]:
The index of these data is the unique biological classification of each organism, beginning with domain, phylum, class, and for some organisms, going all the way down to the genus level.
In [176]:
mb1.index[:3]
Out[176]:
In [177]:
mb1.index.is_unique
Out[177]:
If we concatenate along axis=0 (the default), we will obtain another data frame with the rows concatenated:
In [178]:
pd.concat([mb1, mb2], axis=0).shape
Out[178]:
However, the index is no longer unique, due to overlap between the two DataFrames.
In [179]:
pd.concat([mb1, mb2], axis=0).index.is_unique
Out[179]:
Concatenating along axis=1
will concatenate column-wise, but respecting the indices of the two DataFrames.
In [180]:
pd.concat([mb1, mb2], axis=1).shape
Out[180]:
In [181]:
pd.concat([mb1, mb2], axis=1).head()
Out[181]:
In [182]:
pd.concat([mb1, mb2], axis=1).values[:5]
Out[182]:
If we are only interested in taxa that are included in both DataFrames, we can specify a join='inner' argument.
In [183]:
pd.concat([mb1, mb2], axis=1, join='inner').head()
Out[183]:
If we wanted to use the second table to fill values absent from the first table, we could use combine_first.
In [184]:
mb1.combine_first(mb2).head()
Out[184]:
Alternatively, you can pass keys to the concatenation by supplying the DataFrames (or Series) as a dict.
In [185]:
pd.concat(dict(patient1=mb1, patient2=mb2), axis=1).head()
Out[185]:
If you want concat to work like numpy.concatenate, you may provide the ignore_index=True argument.
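For example (a quick sketch), ignoring the index discards the taxon labels and yields a fresh, unique integer index:
# Row-wise concatenation with a new default integer index
pd.concat([mb1, mb2], ignore_index=True).index.is_unique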
In the context of a single DataFrame, we are often interested in re-arranging the layout of our data.
This dataset is from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.
In [186]:
cdystonia = pd.read_csv("data/cdystonia.csv", index_col=None)
cdystonia.head()
Out[186]:
This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways: showing each repeated measurement in its own row, or in multiple columns representing multiple measurements.
The stack
method rotates the data frame so that columns are represented in rows:
In [187]:
stacked = cdystonia.stack()
stacked
Out[187]:
To complement this, unstack
pivots from rows back to columns.
In [188]:
stacked.unstack().head()
Out[188]:
For this dataset, it makes sense to create a hierarchical index based on the patient and observation:
In [189]:
cdystonia2 = cdystonia.set_index(['patient','obs'])
cdystonia2.head()
Out[189]:
In [190]:
cdystonia2.index.is_unique
Out[190]:
If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.
In [191]:
twstrs_wide = cdystonia2['twstrs'].unstack('obs')
twstrs_wide.head()
Out[191]:
In [192]:
cdystonia_long = cdystonia[['patient','site','id','treat','age','sex']].drop_duplicates().merge(
twstrs_wide, right_index=True, left_on='patient', how='inner').head()
cdystonia_long
Out[192]:
A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking:
In [193]:
cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs'].unstack('week').head()
Out[193]:
To convert our "wide" format back to long, we can use the melt
function, appropriately parameterized:
In [194]:
pd.melt(cdystonia_long, id_vars=['patient','site','id','treat','age','sex'],
var_name='obs', value_name='twsters').head()
Out[194]:
This illustrates the two formats for longitudinal data: long and wide. It's typically better to store data in long format, because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected.
The preferable format for analysis depends entirely on what is planned for the data, so it is important to be able to move easily between them.
There are a slew of additional operations for DataFrames that we would collectively refer to as "transformations" that include tasks such as removing duplicate values, replacing values, and grouping values.
We can easily identify and remove duplicate values from DataFrame objects. For example, say we want to remove ships from our vessels dataset that have the same name:
In [195]:
vessels.duplicated(cols='names')
Out[195]:
In [196]:
vessels.drop_duplicates(['names'])
Out[196]:
Another common transformation is value replacement. Consider the treatment variable in the cervical dystonia dataset, which is recorded as a set of string labels:
In [197]:
cdystonia.treat.value_counts()
Out[197]:
A logical way to specify these numerically is to change them to integer values, perhaps using "Placebo" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map
method to implement the changes.
In [198]:
treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}
In [199]:
cdystonia['treatment'] = cdystonia.treat.map(treatment_map)
cdystonia.treatment
Out[199]:
Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method.
In [200]:
cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2})
Out[200]:
One of the most powerful features of Pandas is its GroupBy functionality. On occasion we may want to perform operations on groups of observations within a dataset. For example:
In [201]:
cdystonia_grouped = cdystonia.groupby(cdystonia.patient)
This grouped dataset is hard to visualize:
In [202]:
cdystonia_grouped
Out[202]:
However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups:
In [203]:
for patient, group in cdystonia_grouped:
print patient
print group
print
A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.
For example, we may want to aggregate our data with some function.
We can aggregate in Pandas using the aggregate (or agg, for short) method:
In [204]:
cdystonia_grouped.agg(np.mean).head()
Out[204]:
Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to calculate the mean of string variables, these columns are simply ignored by the method.
Some aggregation functions are so common that Pandas has a convenience method for them, such as mean:
In [205]:
cdystonia_grouped.mean().head()
Out[205]:
The add_prefix
and add_suffix
methods can be used to give the columns of the resulting table labels that reflect the transformation:
In [206]:
cdystonia_grouped.mean().add_suffix('_mean').head()
Out[206]:
In [207]:
# The median of the `twstrs` variable
cdystonia_grouped['twstrs'].quantile(0.5)
Out[207]:
If we wish, we can easily aggregate according to multiple keys:
In [208]:
cdystonia.groupby(['week','site']).mean().head()
Out[208]:
Alternately, we can transform the data, using a function of our choice with the transform
method:
In [209]:
normalize = lambda x: (x - x.mean())/x.std()
cdystonia_grouped.transform(normalize).head()
Out[209]:
It is easy to do column selection within groupby operations, if we are only interested in split-apply-combine operations on a subset of columns:
In [210]:
cdystonia_grouped['twstrs'].mean().head()
Out[210]:
If you simply want to divide your DataFrame into chunks for later use, it's easy to convert them into a dict so that they can be easily indexed out as needed:
In [211]:
chunks = dict(list(cdystonia_grouped))
chunks[4]
Out[211]:
By default, groupby
groups by row, but we can specify the axis
argument to change this. For example, we can group our columns by type this way:
In [212]:
dict(list(cdystonia.groupby(cdystonia.dtypes, axis=1)))
Out[212]:
We can generalize the split-apply-combine methodology by using the apply function. This allows us to invoke any function we wish on a grouped dataset and recombine the results into a DataFrame.
The function below takes a DataFrame and a column name, sorts by the column, and takes the n
largest values of that column. We can use this with apply
to return the largest values from every group in a DataFrame in a single call.
In [213]:
def top(df, column, n=5):
return df.sort_index(by=column, ascending=False)[:n]
To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship:
In [214]:
top3segments = segments_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]
top3segments
Out[214]:
Notice that additional arguments for the applied function can be passed via apply
after the function name. It assumes that the DataFrame is the first argument.
In [215]:
top3segments.head(20)
Out[215]: