Pandas

pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. It is already well on its way toward this goal.

Source: http://pandas.pydata.org/pandas-docs/stable/

10 Minutes to pandas

This is a short introduction to pandas, geared mainly toward new users. You can see more complex recipes in the Cookbook. The original document is posted at http://pandas.pydata.org/pandas-docs/stable/10min.html

Customarily, we import as follows:


In [1]:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

Object Creation

See the Data Structure Intro section.

When we worked with NumPy arrays, one limitation was that an ndarray can hold only one type of data (there are ways around that, but they give up some of the advantages of NumPy!). More often than not, real-world data sets contain mixed data, with strings, dates, and numerical values in each "record". That is where pandas data frames show their real advantage.
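As a quick standalone sketch of that limitation (the column names here are just illustrative):

```python
import numpy as np
import pandas as pd

# A NumPy array forces a single dtype: mixing a string in upcasts everything.
arr = np.array([1, 2.5, "three"])
print(arr.dtype)  # a string dtype such as '<U32'

# A DataFrame keeps a separate dtype per column.
df = pd.DataFrame({"count": [1, 2], "price": [2.5, 3.0], "label": ["a", "b"]})
print(df.dtypes)
```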

Pandas has Series (1-dimensional labeled arrays) and DataFrames (2-dimensional tables with column names and row labels).

We'll start off by looking at Series.

Creating a Series by passing a list of values, letting pandas create a default integer index:


In [4]:
s = pd.Series([1,3,5,np.nan,6,8], index=[x**2 for x in range(6)])
s


Out[4]:
0     1.0
1     3.0
4     5.0
9     NaN
16    6.0
25    8.0
dtype: float64

Creating a DataFrame by passing a NumPy array, with a datetime index and labeled columns:


In [5]:
dates = pd.date_range('20130101', periods=6, freq='M')
dates


Out[5]:
DatetimeIndex(['2013-01-31', '2013-02-28', '2013-03-31', '2013-04-30',
               '2013-05-31', '2013-06-30'],
              dtype='datetime64[ns]', freq='M')

In [6]:
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns = ['Ann', "Bob", "Charly", "Don"])
                  ## columns=list('ABCD'))
df


Out[6]:
Ann Bob Charly Don
2013-01-31 -0.003502 1.152828 3.717634 0.037692
2013-02-28 0.892275 0.343000 -0.808303 -0.898209
2013-03-31 -0.390379 -0.438595 0.672244 0.081278
2013-04-30 -0.738945 0.605548 -0.581430 0.098590
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732
2013-06-30 2.067142 0.433863 -2.547680 0.424048

You can operate on the columns:


In [7]:
df.Charly+df.Don


Out[7]:
2013-01-31    3.755326
2013-02-28   -1.706512
2013-03-31    0.753523
2013-04-30   -0.482840
2013-05-31    0.119803
2013-06-30   -2.123632
Freq: M, dtype: float64

This example illustrates some objects that can be put into DataFrames (with dtype specification):


In [8]:
df2 = pd.DataFrame({ 'A' : 1.,
                    'B' : pd.Timestamp('20130102'),
                    'C' : pd.Series(1,index=list(['a','b','c','d']),dtype='float32'),
                    'D' : np.array([3] * 4,dtype='int32'),
                    'E' : pd.Categorical(["test","train","test","train"]),
                    'F' : 'foo' })
df2


Out[8]:
A B C D E F
a 1.0 2013-01-02 1.0 3 test foo
b 1.0 2013-01-02 1.0 3 train foo
c 1.0 2013-01-02 1.0 3 test foo
d 1.0 2013-01-02 1.0 3 train foo

In [9]:
df2.dtypes


Out[9]:
A           float64
B    datetime64[ns]
C           float32
D             int32
E          category
F            object
dtype: object

Another simple example:


In [10]:
df3 = pd.DataFrame(np.random.rand(3,4), columns=['ValuesA','ValuesB','C','D'], index=[x**2 for x in range(3)])
df3['CplusD'] = df3.C+df3.D
df3


Out[10]:
ValuesA ValuesB C D CplusD
0 0.056222 0.635249 0.952729 0.850519 1.803248
1 0.052966 0.774945 0.873052 0.488782 1.361834
4 0.897074 0.611696 0.520884 0.565424 1.086307

If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled. Here’s a subset of the attributes that will be completed:

In [13]: df2.<TAB>
df2.A                  df2.boxplot
df2.abs                df2.C
df2.add                df2.clip
df2.add_prefix         df2.clip_lower
df2.add_suffix         df2.clip_upper
df2.align              df2.columns
df2.all                df2.combine
df2.any                df2.combineAdd
df2.append             df2.combine_first
df2.apply              df2.combineMult
df2.applymap           df2.compound
df2.as_blocks          df2.consolidate
df2.asfreq             df2.convert_objects
df2.as_matrix          df2.copy
df2.astype             df2.corr
df2.at                 df2.corrwith
df2.at_time            df2.count
df2.axes               df2.cov
df2.B                  df2.cummax
df2.between_time       df2.cummin
df2.bfill              df2.cumprod
df2.blocks             df2.cumsum
df2.bool               df2.D

As you can see, the columns A, B, C, and D are automatically tab completed; E and F are there as well. The rest of the attributes have been truncated for brevity.

Viewing Data

See the Basics section

See the top & bottom rows of the frame


In [12]:
df.head(3)


Out[12]:
Ann Bob Charly Don
2013-01-31 -0.003502 1.152828 3.717634 0.037692
2013-02-28 0.892275 0.343000 -0.808303 -0.898209
2013-03-31 -0.390379 -0.438595 0.672244 0.081278

In [13]:
df.tail(3)


Out[13]:
Ann Bob Charly Don
2013-04-30 -0.738945 0.605548 -0.581430 0.098590
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732
2013-06-30 2.067142 0.433863 -2.547680 0.424048

In [14]:
df.index


Out[14]:
DatetimeIndex(['2013-01-31', '2013-02-28', '2013-03-31', '2013-04-30',
               '2013-05-31', '2013-06-30'],
              dtype='datetime64[ns]', freq='M')

In [15]:
df.columns


Out[15]:
Index([u'Ann', u'Bob', u'Charly', u'Don'], dtype='object')

In [16]:
df.values


Out[16]:
array([[ -3.50224419e-03,   1.15282819e+00,   3.71763446e+00,
          3.76917202e-02],
       [  8.92274647e-01,   3.43000342e-01,  -8.08302825e-01,
         -8.98209201e-01],
       [ -3.90379242e-01,  -4.38594525e-01,   6.72244191e-01,
          8.12784256e-02],
       [ -7.38945126e-01,   6.05548009e-01,  -5.81430120e-01,
          9.85898049e-02],
       [ -5.11257328e-02,  -1.59286606e+00,   1.80534564e-01,
         -6.07317067e-02],
       [  2.06714181e+00,   4.33862827e-01,  -2.54767969e+00,
          4.24047873e-01]])

In [17]:
df.describe()


Out[17]:
Ann Bob Charly Don
count 6.000000 6.000000 6.000000 6.000000
mean 0.295911 0.083963 0.105500 -0.052889
std 1.024198 0.968388 2.084256 0.445252
min -0.738945 -1.592866 -2.547680 -0.898209
25% -0.305566 -0.243196 -0.751585 -0.036126
50% -0.027314 0.388432 -0.200448 0.059485
75% 0.668330 0.562627 0.549317 0.094262
max 2.067142 1.152828 3.717634 0.424048

In [18]:
df2.columns


Out[18]:
Index([u'A', u'B', u'C', u'D', u'E', u'F'], dtype='object')

In [19]:
df2.dtypes


Out[19]:
A           float64
B    datetime64[ns]
C           float32
D             int32
E          category
F            object
dtype: object

In [20]:
df2.describe()


Out[20]:
A C D
count 4.0 4.0 4.0
mean 1.0 1.0 3.0
std 0.0 0.0 0.0
min 1.0 1.0 3.0
25% 1.0 1.0 3.0
50% 1.0 1.0 3.0
75% 1.0 1.0 3.0
max 1.0 1.0 3.0

Transposing your data


In [21]:
df.T


Out[21]:
2013-01-31 00:00:00 2013-02-28 00:00:00 2013-03-31 00:00:00 2013-04-30 00:00:00 2013-05-31 00:00:00 2013-06-30 00:00:00
Ann -0.003502 0.892275 -0.390379 -0.738945 -0.051126 2.067142
Bob 1.152828 0.343000 -0.438595 0.605548 -1.592866 0.433863
Charly 3.717634 -0.808303 0.672244 -0.581430 0.180535 -2.547680
Don 0.037692 -0.898209 0.081278 0.098590 -0.060732 0.424048

Sorting by an axis


In [23]:
df.sort_index(axis=0, ascending=False) # axis = 0 => sort rows, 1 => sort columns


Out[23]:
Ann Bob Charly Don
2013-06-30 2.067142 0.433863 -2.547680 0.424048
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732
2013-04-30 -0.738945 0.605548 -0.581430 0.098590
2013-03-31 -0.390379 -0.438595 0.672244 0.081278
2013-02-28 0.892275 0.343000 -0.808303 -0.898209
2013-01-31 -0.003502 1.152828 3.717634 0.037692

Sorting by values


In [27]:
df.sort_index(axis=0).sort_values(by=['Bob','Ann'], ascending=True)


Out[27]:
Ann Bob Charly Don
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732
2013-03-31 -0.390379 -0.438595 0.672244 0.081278
2013-02-28 0.892275 0.343000 -0.808303 -0.898209
2013-06-30 2.067142 0.433863 -2.547680 0.424048
2013-04-30 -0.738945 0.605548 -0.581430 0.098590
2013-01-31 -0.003502 1.152828 3.717634 0.037692

Selection

Note: While standard Python / NumPy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code we recommend the optimized pandas data access methods: .at, .iat, .loc and .iloc (and, in this older pandas, .ix, which was later deprecated in 0.20 and removed in 1.0).

See the indexing documentation Indexing and Selecting Data and MultiIndex / Advanced Indexing.

Basically: loc is for label-based indexing; iloc is for indexing by integer position; ix first tries labels and, if that fails, falls back to integer positions (this fallback behavior is why ix was eventually deprecated). These three can all select several elements at once. By contrast, at provides fast label-based access to a single element, and iat provides fast positional access to a single element.
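A minimal standalone sketch of the four recommended accessors, using a small hypothetical frame `t` with string row labels:

```python
import numpy as np
import pandas as pd

t = pd.DataFrame(np.arange(6).reshape(3, 2),
                 index=['r1', 'r2', 'r3'], columns=['x', 'y'])

print(t.loc['r2', 'y'])    # label-based: row 'r2', column 'y' -> 3
print(t.iloc[1, 1])        # position-based: same cell -> 3
print(t.at['r2', 'y'])     # fast scalar access by label -> 3
print(t.iat[1, 1])         # fast scalar access by position -> 3
```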

Getting

Selecting a single column yields a Series; df.Ann is equivalent to df['Ann']:


In [29]:
df.Ann


Out[29]:
2013-01-31   -0.003502
2013-02-28    0.892275
2013-03-31   -0.390379
2013-04-30   -0.738945
2013-05-31   -0.051126
2013-06-30    2.067142
Freq: M, Name: Ann, dtype: float64

In [28]:
df['Ann']


Out[28]:
2013-01-31   -0.003502
2013-02-28    0.892275
2013-03-31   -0.390379
2013-04-30   -0.738945
2013-05-31   -0.051126
2013-06-30    2.067142
Freq: M, Name: Ann, dtype: float64

Selecting via [], which slices the rows.


In [30]:
df[1:3]


Out[30]:
Ann Bob Charly Don
2013-02-28 0.892275 0.343000 -0.808303 -0.898209
2013-03-31 -0.390379 -0.438595 0.672244 0.081278

In [33]:
df['20130228':'20130331']


Out[33]:
Ann Bob Charly Don
2013-02-28 0.892275 0.343000 -0.808303 -0.898209
2013-03-31 -0.390379 -0.438595 0.672244 0.081278

Selection by Label

See more in Selection by Label

For getting a cross section using a label


In [34]:
dates


Out[34]:
DatetimeIndex(['2013-01-31', '2013-02-28', '2013-03-31', '2013-04-30',
               '2013-05-31', '2013-06-30'],
              dtype='datetime64[ns]', freq='M')

In [35]:
dates[0]


Out[35]:
Timestamp('2013-01-31 00:00:00', freq='M')

In [36]:
df


Out[36]:
Ann Bob Charly Don
2013-01-31 -0.003502 1.152828 3.717634 0.037692
2013-02-28 0.892275 0.343000 -0.808303 -0.898209
2013-03-31 -0.390379 -0.438595 0.672244 0.081278
2013-04-30 -0.738945 0.605548 -0.581430 0.098590
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732
2013-06-30 2.067142 0.433863 -2.547680 0.424048

In [38]:
df.loc[dates[0:3]]


Out[38]:
Ann Bob Charly Don
2013-01-31 -0.003502 1.152828 3.717634 0.037692
2013-02-28 0.892275 0.343000 -0.808303 -0.898209
2013-03-31 -0.390379 -0.438595 0.672244 0.081278

Selecting on a multi-axis by label


In [39]:
df.loc[:,['Ann','Bob']]


Out[39]:
Ann Bob
2013-01-31 -0.003502 1.152828
2013-02-28 0.892275 0.343000
2013-03-31 -0.390379 -0.438595
2013-04-30 -0.738945 0.605548
2013-05-31 -0.051126 -1.592866
2013-06-30 2.067142 0.433863

Showing label slicing; note that both endpoints are included:


In [40]:
df.loc['20130131':'20130430',['Ann','Bob']]


Out[40]:
Ann Bob
2013-01-31 -0.003502 1.152828
2013-02-28 0.892275 0.343000
2013-03-31 -0.390379 -0.438595
2013-04-30 -0.738945 0.605548

Reduction in the dimensions of the returned object


In [43]:
type(df.loc['20130131',['Ann','Bob']])


Out[43]:
pandas.core.series.Series

For getting a scalar value


In [44]:
%time df.loc[dates[0],'Ann']


CPU times: user 779 µs, sys: 0 ns, total: 779 µs
Wall time: 602 µs
Out[44]:
-0.0035022441885232273

For getting fast access to a scalar (equiv to the prior method)


In [45]:
%time df.at[dates[0],'Ann']


CPU times: user 537 µs, sys: 0 ns, total: 537 µs
Wall time: 191 µs
Out[45]:
-0.0035022441885232273

Selection by Position

See more in Selection by Position

Select via the position of the passed integers


In [46]:
df


Out[46]:
Ann Bob Charly Don
2013-01-31 -0.003502 1.152828 3.717634 0.037692
2013-02-28 0.892275 0.343000 -0.808303 -0.898209
2013-03-31 -0.390379 -0.438595 0.672244 0.081278
2013-04-30 -0.738945 0.605548 -0.581430 0.098590
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732
2013-06-30 2.067142 0.433863 -2.547680 0.424048

In [47]:
df.iloc[3]


Out[47]:
Ann      -0.738945
Bob       0.605548
Charly   -0.581430
Don       0.098590
Name: 2013-04-30 00:00:00, dtype: float64

By integer slices, acting similar to numpy/python


In [48]:
df.iloc[3:5,0:2]


Out[48]:
Ann Bob
2013-04-30 -0.738945 0.605548
2013-05-31 -0.051126 -1.592866

By lists of integer position locations, similar to the numpy/python style


In [49]:
df.iloc[[1,2,4],[0,2]]


Out[49]:
Ann Charly
2013-02-28 0.892275 -0.808303
2013-03-31 -0.390379 0.672244
2013-05-31 -0.051126 0.180535

For slicing rows explicitly


In [50]:
df.iloc[1:3,:]


Out[50]:
Ann Bob Charly Don
2013-02-28 0.892275 0.343000 -0.808303 -0.898209
2013-03-31 -0.390379 -0.438595 0.672244 0.081278

For slicing columns explicitly


In [51]:
df.iloc[:,1:3]


Out[51]:
Bob Charly
2013-01-31 1.152828 3.717634
2013-02-28 0.343000 -0.808303
2013-03-31 -0.438595 0.672244
2013-04-30 0.605548 -0.581430
2013-05-31 -1.592866 0.180535
2013-06-30 0.433863 -2.547680

For getting a value explicitly


In [52]:
%time print(df.iloc[1,1])

# For getting fast access to a scalar (equiv to the prior method)

%time print(df.iat[1,1])


0.343000341998
CPU times: user 1.75 ms, sys: 0 ns, total: 1.75 ms
Wall time: 1.61 ms
0.343000341998
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 98 µs

In [ ]:
# in-class exercise:
# Show all values of "Bob" and "Don" for March-May 2013.

In [58]:
df


Out[58]:
Ann Bob Charly Don
2013-01-31 -0.003502 1.152828 3.717634 0.037692
2013-02-28 0.892275 0.343000 -0.808303 -0.898209
2013-03-31 -0.390379 -0.438595 0.672244 0.081278
2013-04-30 -0.738945 0.605548 -0.581430 0.098590
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732
2013-06-30 2.067142 0.433863 -2.547680 0.424048

In [60]:
dates[np.logical_and(dates>='20130301', dates<'20130601')]


Out[60]:
DatetimeIndex(['2013-03-31', '2013-04-30', '2013-05-31'], dtype='datetime64[ns]', freq='M')

In [67]:
df.ix[dates[np.logical_and(dates>='20130301', dates<'20130601')], ['Bob','Don']]


Out[67]:
Bob Don
2013-03-31 -0.438595 0.081278
2013-04-30 0.605548 0.098590
2013-05-31 -1.592866 -0.060732

Boolean Indexing

Using a single column’s values to select data.


In [69]:
df.Ann


Out[69]:
2013-01-31   -0.003502
2013-02-28    0.892275
2013-03-31   -0.390379
2013-04-30   -0.738945
2013-05-31   -0.051126
2013-06-30    2.067142
Freq: M, Name: Ann, dtype: float64

In [70]:
flt = (df.Ann >= 0) & (df.Ann < 1.5) 
flt


Out[70]:
2013-01-31    False
2013-02-28     True
2013-03-31    False
2013-04-30    False
2013-05-31    False
2013-06-30    False
Freq: M, Name: Ann, dtype: bool

In [71]:
df[flt]


Out[71]:
Ann Bob Charly Don
2013-02-28 0.892275 0.343 -0.808303 -0.898209

In [72]:
df[(df.Ann >= 0) & (df.Ann < 1.5)]


Out[72]:
Ann Bob Charly Don
2013-02-28 0.892275 0.343 -0.808303 -0.898209

A where operation for getting.


In [75]:
df5=df[df > 0]
df5


Out[75]:
Ann Bob Charly Don
2013-01-31 NaN 1.152828 3.717634 0.037692
2013-02-28 0.892275 0.343000 NaN NaN
2013-03-31 NaN NaN 0.672244 0.081278
2013-04-30 NaN 0.605548 NaN 0.098590
2013-05-31 NaN NaN 0.180535 NaN
2013-06-30 2.067142 0.433863 NaN 0.424048

In [76]:
df5.fillna(99999)


Out[76]:
Ann Bob Charly Don
2013-01-31 99999.000000 1.152828 3.717634 0.037692
2013-02-28 0.892275 0.343000 99999.000000 99999.000000
2013-03-31 99999.000000 99999.000000 0.672244 0.081278
2013-04-30 99999.000000 0.605548 99999.000000 0.098590
2013-05-31 99999.000000 99999.000000 0.180535 99999.000000
2013-06-30 2.067142 0.433863 99999.000000 0.424048

Using the isin() method for filtering:


In [77]:
df2 = df.copy()
df2['E'] = ['one', 'one','two','three','four','three']
df2


Out[77]:
Ann Bob Charly Don E
2013-01-31 -0.003502 1.152828 3.717634 0.037692 one
2013-02-28 0.892275 0.343000 -0.808303 -0.898209 one
2013-03-31 -0.390379 -0.438595 0.672244 0.081278 two
2013-04-30 -0.738945 0.605548 -0.581430 0.098590 three
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732 four
2013-06-30 2.067142 0.433863 -2.547680 0.424048 three

In [78]:
df


Out[78]:
Ann Bob Charly Don
2013-01-31 -0.003502 1.152828 3.717634 0.037692
2013-02-28 0.892275 0.343000 -0.808303 -0.898209
2013-03-31 -0.390379 -0.438595 0.672244 0.081278
2013-04-30 -0.738945 0.605548 -0.581430 0.098590
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732
2013-06-30 2.067142 0.433863 -2.547680 0.424048

In [79]:
df2[df2['E'].isin(['two','three'])]


Out[79]:
Ann Bob Charly Don E
2013-03-31 -0.390379 -0.438595 0.672244 0.081278 two
2013-04-30 -0.738945 0.605548 -0.581430 0.098590 three
2013-06-30 2.067142 0.433863 -2.547680 0.424048 three

In [ ]:
# in-class exercise: 
# Display all rows of df where Ann > Charly.

In [80]:
df[df.Ann>df.Charly]


Out[80]:
Ann Bob Charly Don
2013-02-28 0.892275 0.343000 -0.808303 -0.898209
2013-06-30 2.067142 0.433863 -2.547680 0.424048

Setting

Setting a new column automatically aligns the data by the indexes


In [81]:
s1 = pd.Series([1,2,3,4,5,6], index=pd.date_range('20130102', periods=6))
s1


Out[81]:
2013-01-02    1
2013-01-03    2
2013-01-04    3
2013-01-05    4
2013-01-06    5
2013-01-07    6
Freq: D, dtype: int64

In [82]:
df['F'] = s1
df['G'] = df['Ann']-df['Bob']
df


Out[82]:
Ann Bob Charly Don F G
2013-01-31 -0.003502 1.152828 3.717634 0.037692 NaN -1.156330
2013-02-28 0.892275 0.343000 -0.808303 -0.898209 NaN 0.549274
2013-03-31 -0.390379 -0.438595 0.672244 0.081278 NaN 0.048215
2013-04-30 -0.738945 0.605548 -0.581430 0.098590 NaN -1.344493
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732 NaN 1.541740
2013-06-30 2.067142 0.433863 -2.547680 0.424048 NaN 1.633279

In [ ]:
# in-class exercise: how would you change the index in s1 so that we get values for F in df? 
#(note: that's probably not a good way to manipulate real data!)

In [84]:
s1.index = df.index

In [85]:
df['F'] = s1
df


Out[85]:
Ann Bob Charly Don F G
2013-01-31 -0.003502 1.152828 3.717634 0.037692 1 -1.156330
2013-02-28 0.892275 0.343000 -0.808303 -0.898209 2 0.549274
2013-03-31 -0.390379 -0.438595 0.672244 0.081278 3 0.048215
2013-04-30 -0.738945 0.605548 -0.581430 0.098590 4 -1.344493
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732 5 1.541740
2013-06-30 2.067142 0.433863 -2.547680 0.424048 6 1.633279

Setting values by label


In [86]:
df.at[dates[0],'Ann'] = 17.6
df


Out[86]:
Ann Bob Charly Don F G
2013-01-31 17.600000 1.152828 3.717634 0.037692 1 -1.156330
2013-02-28 0.892275 0.343000 -0.808303 -0.898209 2 0.549274
2013-03-31 -0.390379 -0.438595 0.672244 0.081278 3 0.048215
2013-04-30 -0.738945 0.605548 -0.581430 0.098590 4 -1.344493
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732 5 1.541740
2013-06-30 2.067142 0.433863 -2.547680 0.424048 6 1.633279

Setting values by position


In [87]:
df.iat[5,2] = 349
df


Out[87]:
Ann Bob Charly Don F G
2013-01-31 17.600000 1.152828 3.717634 0.037692 1 -1.156330
2013-02-28 0.892275 0.343000 -0.808303 -0.898209 2 0.549274
2013-03-31 -0.390379 -0.438595 0.672244 0.081278 3 0.048215
2013-04-30 -0.738945 0.605548 -0.581430 0.098590 4 -1.344493
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732 5 1.541740
2013-06-30 2.067142 0.433863 349.000000 0.424048 6 1.633279

Setting by assigning with a numpy array


In [88]:
df.loc[:,'Fives'] = np.array([5] * len(df))
df


Out[88]:
Ann Bob Charly Don F G Fives
2013-01-31 17.600000 1.152828 3.717634 0.037692 1 -1.156330 5
2013-02-28 0.892275 0.343000 -0.808303 -0.898209 2 0.549274 5
2013-03-31 -0.390379 -0.438595 0.672244 0.081278 3 0.048215 5
2013-04-30 -0.738945 0.605548 -0.581430 0.098590 4 -1.344493 5
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732 5 1.541740 5
2013-06-30 2.067142 0.433863 349.000000 0.424048 6 1.633279 5

A where operation with setting.


In [89]:
df2 = df.copy()
df2[df2 < 0] = -df2
df2


Out[89]:
Ann Bob Charly Don F G Fives
2013-01-31 17.600000 1.152828 3.717634 0.037692 1 1.156330 5
2013-02-28 0.892275 0.343000 0.808303 0.898209 2 0.549274 5
2013-03-31 0.390379 0.438595 0.672244 0.081278 3 0.048215 5
2013-04-30 0.738945 0.605548 0.581430 0.098590 4 1.344493 5
2013-05-31 0.051126 1.592866 0.180535 0.060732 5 1.541740 5
2013-06-30 2.067142 0.433863 349.000000 0.424048 6 1.633279 5

Missing Data

pandas primarily uses the value np.nan to represent missing data. By default it is not included in computations. See the Missing Data section.
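For instance, reductions such as mean() and sum() simply skip NaN values (a small standalone sketch):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
print(s.mean())               # 2.0 -- the NaN is skipped, not treated as 0
print(s.sum())                # 4.0
print(s.mean(skipna=False))   # nan, if you explicitly ask for NaN propagation
```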

Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.


In [95]:
df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
df1


Out[95]:
Ann Bob Charly Don F G Fives E
2013-01-31 17.600000 1.152828 3.717634 0.037692 1 -1.156330 5 NaN
2013-02-28 0.892275 0.343000 -0.808303 -0.898209 2 0.549274 5 NaN
2013-03-31 -0.390379 -0.438595 0.672244 0.081278 3 0.048215 5 NaN
2013-04-30 -0.738945 0.605548 -0.581430 0.098590 4 -1.344493 5 NaN

In [96]:
df1.loc[dates[0]:dates[1],'E'] = 1
df1


Out[96]:
Ann Bob Charly Don F G Fives E
2013-01-31 17.600000 1.152828 3.717634 0.037692 1 -1.156330 5 1.0
2013-02-28 0.892275 0.343000 -0.808303 -0.898209 2 0.549274 5 1.0
2013-03-31 -0.390379 -0.438595 0.672244 0.081278 3 0.048215 5 NaN
2013-04-30 -0.738945 0.605548 -0.581430 0.098590 4 -1.344493 5 NaN

To drop any rows that have missing data.


In [97]:
df1.dropna(how='any')


Out[97]:
Ann Bob Charly Don F G Fives E
2013-01-31 17.600000 1.152828 3.717634 0.037692 1 -1.156330 5 1.0
2013-02-28 0.892275 0.343000 -0.808303 -0.898209 2 0.549274 5 1.0

Filling missing data


In [98]:
df1


Out[98]:
Ann Bob Charly Don F G Fives E
2013-01-31 17.600000 1.152828 3.717634 0.037692 1 -1.156330 5 1.0
2013-02-28 0.892275 0.343000 -0.808303 -0.898209 2 0.549274 5 1.0
2013-03-31 -0.390379 -0.438595 0.672244 0.081278 3 0.048215 5 NaN
2013-04-30 -0.738945 0.605548 -0.581430 0.098590 4 -1.344493 5 NaN

In [99]:
df1.fillna(value=5)


Out[99]:
Ann Bob Charly Don F G Fives E
2013-01-31 17.600000 1.152828 3.717634 0.037692 1 -1.156330 5 1.0
2013-02-28 0.892275 0.343000 -0.808303 -0.898209 2 0.549274 5 1.0
2013-03-31 -0.390379 -0.438595 0.672244 0.081278 3 0.048215 5 5.0
2013-04-30 -0.738945 0.605548 -0.581430 0.098590 4 -1.344493 5 5.0

To get the boolean mask where values are NaN:


In [100]:
pd.isnull(df1)


Out[100]:
Ann Bob Charly Don F G Fives E
2013-01-31 False False False False False False False False
2013-02-28 False False False False False False False False
2013-03-31 False False False False False False False True
2013-04-30 False False False False False False False True

In [108]:
df.set_value(dates[:3], 'Ann', [0,1,2])


Out[108]:
Ann Bob Charly Don F G Fives
2013-01-31 0.000000 1.152828 3.717634 0.037692 1 -1.156330 5
2013-02-28 1.000000 0.343000 -0.808303 -0.898209 2 0.549274 5
2013-03-31 2.000000 -0.438595 0.672244 0.081278 3 0.048215 5
2013-04-30 -0.738945 0.605548 -0.581430 0.098590 4 -1.344493 5
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732 5 1.541740 5
2013-06-30 2.067142 0.433863 349.000000 0.424048 6 1.633279 5

Operations

See the Basic section on Binary Ops

Stats

Operations in general exclude missing data.

Performing a descriptive statistic


In [110]:
df.mean(0)


Out[110]:
Ann        0.712845
Bob        0.083963
Charly    58.696780
Don       -0.052889
F          3.500000
G          0.211948
Fives      5.000000
dtype: float64

Same operation on the other axis


In [111]:
df.mean(1)


Out[111]:
2013-01-31     1.393118
2013-02-28     1.026538
2013-03-31     1.480449
2013-04-30     1.005610
2013-05-31     1.431079
2013-06-30    52.079762
Freq: M, dtype: float64

Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.


In [109]:
s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(2)
s


Out[109]:
2013-01-31    NaN
2013-02-28    NaN
2013-03-31    1.0
2013-04-30    3.0
2013-05-31    5.0
2013-06-30    NaN
Freq: M, dtype: float64

In [ ]:
df.sub(s, axis='index')

Apply

Applying functions to the data


In [ ]:
df

In [112]:
df.apply(np.cumsum)


Out[112]:
Ann Bob Charly Don F G Fives
2013-01-31 0.000000 1.152828 3.717634 0.037692 1 -1.156330 5
2013-02-28 1.000000 1.495829 2.909332 -0.860517 3 -0.607056 10
2013-03-31 3.000000 1.057234 3.581576 -0.779239 6 -0.558841 15
2013-04-30 2.261055 1.662782 3.000146 -0.680649 10 -1.903334 20
2013-05-31 2.209929 0.069916 3.180680 -0.741381 15 -0.361594 25
2013-06-30 4.277071 0.503779 352.180680 -0.317333 21 1.271685 30

In [113]:
df


Out[113]:
Ann Bob Charly Don F G Fives
2013-01-31 0.000000 1.152828 3.717634 0.037692 1 -1.156330 5
2013-02-28 1.000000 0.343000 -0.808303 -0.898209 2 0.549274 5
2013-03-31 2.000000 -0.438595 0.672244 0.081278 3 0.048215 5
2013-04-30 -0.738945 0.605548 -0.581430 0.098590 4 -1.344493 5
2013-05-31 -0.051126 -1.592866 0.180535 -0.060732 5 1.541740 5
2013-06-30 2.067142 0.433863 349.000000 0.424048 6 1.633279 5

In [114]:
df.max() ##df.apply(max)


Out[114]:
Ann         2.067142
Bob         1.152828
Charly    349.000000
Don         0.424048
F           6.000000
G           1.633279
Fives       5.000000
dtype: float64

In [115]:
df.Ann.max()-df.Ann.min()


Out[115]:
2.8060869328012688

In [116]:
df.apply(lambda x: x.max() - x.min())


Out[116]:
Ann         2.806087
Bob         2.745694
Charly    349.808303
Don         1.322257
F           5.000000
G           2.977772
Fives       0.000000
dtype: float64

Histogramming

See more at Histogramming and Discretization


In [ ]:
s = pd.Series(np.random.randint(0, 7, size=10))
s

In [ ]:
s.value_counts()

Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.


In [ ]:
s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
s.str.lower()
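Because the str methods accept regular expressions, pattern-based filtering is a one-liner; a sketch using the same series (the pattern here is just an example):

```python
import numpy as np
import pandas as pd

s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])

# str.contains takes a regex; na=False makes the NaN entry count as no-match.
mask = s.str.contains(r'^[AB]', na=False)
print(s[mask].tolist())   # ['A', 'B', 'Aaba', 'Baca']
```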

Merge

Concat

pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.

See the Merging section

Concatenating pandas objects together with concat():


In [ ]:
df = pd.DataFrame(np.random.randn(10, 4))
df

In [ ]:
# break it into pieces
pieces = [df[:3], df[3:7], df[7:]]
print("pieces:\n", pieces)
print("put back together:\n")
pd.concat(pieces)

Join

SQL style merges. See the Database style joining


In [117]:
left = pd.DataFrame({'key': ['foo', 'boo', 'foo'], 'lval': [1, 2, 3]})
#right = pd.DataFrame({'key': ['boo', 'foo', 'foo'], 'rval': [4, 5, 6]})
right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [5, 6]})

In [118]:
left


Out[118]:
key lval
0 foo 1
1 boo 2
2 foo 3

In [119]:
right


Out[119]:
key rval
0 foo 5
1 foo 6

In [121]:
pd.merge(left, right, on='key', how='right')


Out[121]:
key lval rval
0 foo 1 5
1 foo 3 5
2 foo 1 6
3 foo 3 6

Append

Append rows to a dataframe. See the Appending


In [ ]:
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
df

In [ ]:
s = df.iloc[3]
df.append(s, ignore_index=True)

Grouping

By “group by” we are referring to a process involving one or more of the following steps:

  • Splitting the data into groups based on some criteria
  • Applying a function to each group independently
  • Combining the results into a data structure
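Those three steps are what groupby performs internally; here is a hand-rolled sketch of the same idea on a tiny hypothetical frame:

```python
import numpy as np
import pandas as pd

gdf = pd.DataFrame({'key': ['a', 'b', 'a', 'b'], 'val': [1, 2, 3, 4]})

pieces = {k: g for k, g in gdf.groupby('key')}          # split
sums = {k: g['val'].sum() for k, g in pieces.items()}   # apply
combined = pd.Series(sums)                              # combine
print(combined)          # a -> 4, b -> 6

# ...which is equivalent to the one-liner:
print(gdf.groupby('key')['val'].sum())
```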

See the Grouping section


In [122]:
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
                   'B' : ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
                   'C' : np.random.randn(8),
                   'D' : np.random.randn(8)})

df


Out[122]:
A B C D
0 foo one 0.727334 -0.121076
1 bar one -0.376269 -0.551921
2 foo two 0.347377 -1.324145
3 bar three -1.800360 1.362982
4 foo two -1.607225 -0.278323
5 bar two 0.155321 1.141374
6 foo one -0.058291 0.749650
7 foo three 0.285056 0.796032

Grouping and then applying the mean() function to the resulting groups:


In [124]:
df.groupby('A').mean()


Out[124]:
C D
A
bar -0.673769 0.650812
foo -0.061150 -0.035573

Grouping by multiple columns forms a hierarchical index, to which we can then apply a function:


In [125]:
df.groupby(['A','B']).sum()


Out[125]:
C D
A B
bar one -0.376269 -0.551921
three -1.800360 1.362982
two 0.155321 1.141374
foo one 0.669043 0.628573
three 0.285056 0.796032
two -1.259848 -1.602468

Reshaping

See the sections on Hierarchical Indexing and Reshaping.

Stack


In [ ]:
tuples = list(zip(*[['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
                    ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]))

index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
index

In [ ]:
df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])
df

In [ ]:
df2 = df[:4]
df2

The stack() method “compresses” a level in the DataFrame’s columns.


In [ ]:
stacked = df2.stack()
stacked

With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level:


In [ ]:
stacked.unstack()

In [ ]:
stacked.unstack(1)

In [ ]:
stacked.unstack(0)

Pivot Tables

See the section on Pivot Tables.


In [ ]:
df = pd.DataFrame({'ModelNumber' : ['one', 'one', 'two', 'three'] * 3,
                   'Submodel' : ['A', 'B', 'C'] * 4,
                   'Type' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
                   'Xval' : np.random.randn(12),
                   'Yval' : np.random.randn(12)})

df

We can produce pivot tables from this data very easily:


In [ ]:
pd.pivot_table(df, values='Xval', index=['ModelNumber', 'Submodel'], columns=['Type'])

Time Series

pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications. See the Time Series section


In [ ]:
rng = pd.date_range('1/1/2012', periods=100, freq='S')
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
ts.resample('5Min').sum()

Out[105]:
2012-01-01    25083
Freq: 5T, dtype: int64

Time zone representation


In [ ]:
rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')
ts = pd.Series(np.random.randn(len(rng)), rng)
ts

In [ ]:
ts_utc = ts.tz_localize('UTC')
ts_utc

Convert to another time zone


In [ ]:
ts_utc.tz_convert('US/Eastern')

Converting between time span representations


In [ ]:
rng = pd.date_range('1/1/2012', periods=5, freq='M')
ts = pd.Series(np.random.randn(len(rng)), index=rng)
ts

In [ ]:
ps = ts.to_period()
ps

In [ ]:
ps.to_timestamp()

Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:


In [ ]:
prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
ts = pd.Series(np.random.randn(len(prng)), prng)
ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9
ts.head()

Categoricals

Since version 0.15, pandas can include categorical data in a DataFrame. For full docs, see the categorical introduction and the API documentation.


In [ ]:
df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})

Convert the raw grades to a categorical data type.


In [ ]:
df["grade"] = df["raw_grade"].astype("category")
df["grade"]

Out[124]:
0    a
1    b
2    b
3    a
4    a
5    e
Name: grade, dtype: category
Categories (3, object): [a, b, e]

Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace!)


In [ ]:
df["grade"].cat.categories = ["very good", "good", "very bad"]

Reorder the categories and simultaneously add the missing categories (methods under Series.cat return a new Series by default).


In [ ]:
df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
df["grade"]

Sorting is per order in the categories, not lexical order.


In [ ]:
df.sort_values(by="grade")

Out[128]:
   id raw_grade      grade
5   6         e   very bad
1   2         b       good
2   3         b       good
0   1         a  very good
3   4         a  very good
4   5         a  very good

Grouping by a categorical column also shows empty categories.


In [ ]:
df.groupby("grade").size()

Plotting

Plotting docs.


In [ ]:
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()

In [ ]:
%matplotlib inline
ts.plot()

On DataFrame, plot() is a convenience to plot all of the columns with labels:


In [ ]:
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index,
                  columns=['A', 'B', 'C', 'D'])

df = df.cumsum()

plt.figure(); df.plot(); plt.legend(loc='best')

Getting Data In/Out

CSV

Writing to a csv file


In [ ]:
df.to_csv('foo.csv')

In [ ]:
pd.read_csv('foo.csv')

HDF5

Reading and writing to HDFStores

Writing to a HDF5 Store


In [ ]:
## df.to_hdf('foo.h5','df')

Reading from a HDF5 Store


In [ ]:
## pd.read_hdf('foo.h5','df')

Excel

Reading and writing to MS Excel

Writing to an excel file


In [ ]:
df.to_excel('foo.xlsx', sheet_name='Sheet1')

Reading from an excel file


In [ ]:
pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])

Gotchas

If you are trying an operation and you see an exception like:

>>> if pd.Series([False, True, False]):
...     print("I was true")
Traceback
    ...
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().

See Comparisons for an explanation and what to do.

See Gotchas as well.
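In short, the fix is to say which reduction you mean; a minimal sketch:

```python
import pandas as pd

s = pd.Series([False, True, False])

# Instead of `if s:` (ambiguous), pick the reduction you actually want:
if s.any():
    print("at least one True")
if not s.all():
    print("not all True")
if not s.empty:
    print("series is non-empty")
```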

