Pandas

pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. It is already well on its way toward this goal.

Source: http://pandas.pydata.org/pandas-docs/stable/

10 Minutes to pandas

This is a short introduction to pandas, geared mainly for new users. You can see more complex recipes in the Cookbook. The original document is posted at http://pandas.pydata.org/pandas-docs/stable/10min.html

Customarily, we import as follows:


In [1]:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

Object Creation

See the Data Structure Intro section

Creating a Series by passing a list of values, letting pandas create a default integer index:


In [ ]:
s = pd.Series([1, 3, 5, np.nan, 6, 8])
s

Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns:


In [3]:
dates = pd.date_range('20130101', periods=6, freq='M')
dates


Out[3]:
DatetimeIndex(['2013-01-31', '2013-02-28', '2013-03-31', '2013-04-30',
               '2013-05-31', '2013-06-30'],
              dtype='datetime64[ns]', freq='M')

In [4]:
df = pd.DataFrame(np.random.randn(6, 4), index=dates,
                  columns=['Ann', 'Bob', 'Charly', 'Don'])  ## or: columns=list('ABCD')
df


Out[4]:
Ann Bob Charly Don
2013-01-31 0.991022 -0.799177 0.814098 -1.342932
2013-02-28 1.113361 -0.613726 -1.623254 1.634997
2013-03-31 0.996475 0.552436 1.066252 2.104424
2013-04-30 -0.285157 0.903410 -0.288100 0.717859
2013-05-31 0.931741 0.759013 -1.110726 2.011250
2013-06-30 0.396927 -0.090919 0.973643 0.534363

In [ ]:
df.Charly+df.Don

In [5]:
df2 = pd.DataFrame({'A': 1.,
                    'B': pd.Timestamp('20130102'),
                    'C': pd.Series(1, index=list(range(4)), dtype='float32'),
                    'D': np.array([3] * 4, dtype='int32'),
                    'E': pd.Categorical(["test", "train", "test", "train"]),
                    'F': 'foo'})

df2


Out[5]:
A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo

Having specific dtypes


In [ ]:
df2.dtypes

If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled. Here’s a subset of the attributes that will be completed:

In [13]: df2.<TAB>
df2.A                  df2.boxplot
df2.abs                df2.C
df2.add                df2.clip
df2.add_prefix         df2.clip_lower
df2.add_suffix         df2.clip_upper
df2.align              df2.columns
df2.all                df2.combine
df2.any                df2.combineAdd
df2.append             df2.combine_first
df2.apply              df2.combineMult
df2.applymap           df2.compound
df2.as_blocks          df2.consolidate
df2.asfreq             df2.convert_objects
df2.as_matrix          df2.copy
df2.astype             df2.corr
df2.at                 df2.corrwith
df2.at_time            df2.count
df2.axes               df2.cov
df2.B                  df2.cummax
df2.between_time       df2.cummin
df2.bfill              df2.cumprod
df2.blocks             df2.cumsum
df2.bool               df2.D

As you can see, the columns A, B, C, and D are automatically tab completed. E is there as well; the rest of the attributes have been truncated for brevity.

Viewing Data

See the Basics section

See the top & bottom rows of the frame


In [ ]:
df.head()

In [ ]:
df.tail(3)

In [ ]:
df.index

In [ ]:
df.columns

In [ ]:
df.values

In [ ]:
df.describe()

In [ ]:
df2.columns

In [ ]:
df2.dtypes

In [ ]:
df2.describe()

Transposing your data


In [ ]:
df.T

Sorting by an axis


In [ ]:
df.sort_index(axis=1, ascending=False)

Sorting by values


In [ ]:
df.sort_values(by='Bob', ascending=True)

Selection

Note: While standard Python / NumPy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code we recommend the optimized pandas data access methods, .at, .iat, .loc and .iloc (the older .ix accessor is deprecated).

See the indexing documentation Indexing and Selecting Data and MultiIndex / Advanced Indexing

Getting

Selecting a single column, which yields a Series, equivalent to df.Ann


In [ ]:
df['Ann']

Selecting via [], which slices the rows.


In [ ]:
df[1:3]

In [ ]:
df['20130201':'20130430']

Selection by Label

See more in Selection by Label

For getting a cross section using a label


In [ ]:
dates[0]

In [ ]:
df.loc[dates[0]]

Selecting on a multi-axis by label


In [ ]:
df.loc[:,['Ann','Bob']]

Showing label slicing, both endpoints are included


In [ ]:
df.loc['20130201':'20130430', ['Ann', 'Bob']]

Reduction in the dimensions of the returned object


In [ ]:
df.loc['20130228', ['Ann', 'Bob']]

For getting a scalar value


In [ ]:
df.loc[dates[0], 'Ann']

For getting fast access to a scalar (equiv to the prior method)


In [ ]:
df.at[dates[0], 'Ann']

Selection by Position

See more in Selection by Position

Select via the position of the passed integers


In [ ]:
df.iloc[3]

By integer slices, acting similarly to numpy/python


In [ ]:
df.iloc[3:5,0:2]

By lists of integer position locations, similar to the numpy/python style


In [ ]:
df.iloc[[1,2,4],[0,2]]

For slicing rows explicitly


In [ ]:
df.iloc[1:3,:]

For slicing columns explicitly


In [ ]:
df.iloc[:,1:3]

For getting a value explicitly


In [ ]:
df.iloc[1, 1]

For getting fast access to a scalar (equiv to the prior method)


In [ ]:
df.iat[1, 1]

Boolean Indexing

Using a single column’s values to select data.


In [ ]:
flt = (df.Ann >= 0.5) & (df.Ann < 1.5)

In [ ]:
df[flt]

In [ ]:
df[(df.Ann >= 0.5) & (df.Ann < 1.5)]  # the same filter written inline

A where operation for getting.


In [ ]:
df[df > 0]
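
The df[df > 0] selection above is equivalent to calling the where method explicitly; where keeps the frame's shape and masks non-matching entries with NaN (a minimal sketch):


In [ ]:
df.where(df > 0)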

Using the isin() method for filtering:


In [ ]:
df2 = df.copy()
df2['E'] = ['one', 'one', 'two', 'three', 'four', 'three']
df2

In [ ]:
df2[df2['E'].isin(['two','four'])]

Setting

Setting a new column automatically aligns the data by the indexes


In [ ]:
s1 = pd.Series([1, 2, 3, 4, 5, 6], index=pd.date_range('20130228', periods=6, freq='M'))
s1

In [ ]:
df['F'] = s1
df['G'] = df['Ann']-df['Bob']
df

In [ ]:
df.G

Setting values by label


In [ ]:
df.at[dates[0],'Ann'] = 17.6
df

Setting values by position


In [ ]:
df.iat[5,2] = 349
df

Setting by assigning with a numpy array


In [ ]:
df.loc[:, 'Don'] = np.array([5] * len(df))

The result of the prior setting operations


In [ ]:
df

A where operation with setting.


In [ ]:
df2 = df.copy()
df2[df2 > 0] = -df2
df2

Missing Data

pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section

Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.


In [7]:
df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
df1.loc[dates[0]:dates[1],'E'] = 1
df1


Out[7]:
Ann Bob Charly Don E
2013-01-31 0.991022 -0.799177 0.814098 -1.342932 1.0
2013-02-28 1.113361 -0.613726 -1.623254 1.634997 1.0
2013-03-31 0.996475 0.552436 1.066252 2.104424 NaN
2013-04-30 -0.285157 0.903410 -0.288100 0.717859 NaN

To drop any rows that have missing data.


In [8]:
df1.dropna(how='any')


Out[8]:
Ann Bob Charly Don E
2013-01-31 0.991022 -0.799177 0.814098 -1.342932 1.0
2013-02-28 1.113361 -0.613726 -1.623254 1.634997 1.0

Filling missing data


In [9]:
df1


Out[9]:
Ann Bob Charly Don E
2013-01-31 0.991022 -0.799177 0.814098 -1.342932 1.0
2013-02-28 1.113361 -0.613726 -1.623254 1.634997 1.0
2013-03-31 0.996475 0.552436 1.066252 2.104424 NaN
2013-04-30 -0.285157 0.903410 -0.288100 0.717859 NaN

In [10]:
df1.fillna(value=5)


Out[10]:
Ann Bob Charly Don E
2013-01-31 0.991022 -0.799177 0.814098 -1.342932 1.0
2013-02-28 1.113361 -0.613726 -1.623254 1.634997 1.0
2013-03-31 0.996475 0.552436 1.066252 2.104424 5.0
2013-04-30 -0.285157 0.903410 -0.288100 0.717859 5.0
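
Missing values can also be filled forward from the previous row (a small sketch using fillna's method keyword, also available directly as df1.ffill()):


In [ ]:
df1.fillna(method='ffill')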

To get the boolean mask where values are NaN


In [11]:
pd.isnull(df1)


Out[11]:
Ann Bob Charly Don E
2013-01-31 False False False False False
2013-02-28 False False False False False
2013-03-31 False False False False True
2013-04-30 False False False False True

Operations

See the Basics section on Binary Ops

Stats

Operations in general exclude missing data.
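
For instance, a NaN is skipped when averaging unless you pass skipna=False (a minimal sketch; s_nan is a throwaway name):


In [ ]:
s_nan = pd.Series([1.0, np.nan, 3.0])
s_nan.mean()               # 2.0 -- the NaN is excluded
s_nan.mean(skipna=False)   # nan -- NaN propagates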

Performing a descriptive statistic


In [ ]:
df.mean(0)

Same operation on the other axis


In [ ]:
df.mean(1)

Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.


In [ ]:
s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(2)
s

In [ ]:
df.sub(s, axis='index')

Apply

Applying functions to the data


In [ ]:
df

In [ ]:
df.apply(np.cumsum)

In [ ]:
df

In [ ]:
df.max() ##df.apply(max)

In [ ]:
df.Ann.max()-df.Ann.min()

In [ ]:
df.apply(lambda x: x.max() - x.min())

Histogramming

See more at Histogramming and Discretization


In [ ]:
s = pd.Series(np.random.randint(0, 7, size=10))
s

In [ ]:
s.value_counts()
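
For the discretization half of the linked section, pd.cut bins the values into intervals (a small sketch using 3 equal-width bins):


In [ ]:
pd.cut(s, 3).value_counts()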

Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.


In [ ]:
s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
s.str.lower()
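
Since the str methods take regular expressions by default, pattern matching is equally terse (a minimal sketch; na=False treats the missing entry as a non-match):


In [ ]:
s.str.contains('^[AB]', na=False)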

Merge

Concat

pandas provides various facilities for easily combining Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.

See the Merging section

Concatenating pandas objects together with concat():


In [ ]:
df = pd.DataFrame(np.random.randn(10, 4))
df

In [ ]:
# break it into pieces
pieces = [df[:3], df[3:7], df[7:]]
pd.concat(pieces)

Join

SQL-style merges. See the Database-style joining section


In [ ]:
left = pd.DataFrame({'key': ['foo', 'boo', 'foo'], 'lval': [1, 2, 3]})
#right = pd.DataFrame({'key': ['boo', 'foo', 'foo'], 'rval': [4, 5, 6]})
right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [5, 6]})

In [ ]:
left

In [ ]:
right

In [ ]:
pd.merge(left, right, on='key', how='left')

Append

Append rows to a dataframe. See the Appending section


In [ ]:
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
df

In [ ]:
s = df.iloc[3]
df.append(s, ignore_index=True)
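
Note that DataFrame.append was deprecated and later removed in pandas 2.0; the same row append can be written with concat (a sketch, converting the row Series to a one-row frame first):


In [ ]:
pd.concat([df, s.to_frame().T], ignore_index=True)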

Grouping

By “group by” we are referring to a process involving one or more of the following steps:

  • Splitting the data into groups based on some criteria
  • Applying a function to each group independently
  • Combining the results into a data structure

See the Grouping section


In [ ]:
df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
                         'foo', 'bar', 'foo', 'foo'],
                   'B': ['one', 'one', 'two', 'three',
                         'two', 'two', 'one', 'three'],
                   'C': np.random.randn(8),
                   'D': np.random.randn(8)})

df

Grouping and then applying the sum function to the resulting groups.


In [ ]:
df.groupby('A').sum()

Grouping by multiple columns forms a hierarchical index, to which we then apply the function.


In [ ]:
df.groupby(['A','B']).sum()
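
If flat columns are preferred over the hierarchical index, reset_index moves the group keys back into ordinary columns (a small sketch):


In [ ]:
df.groupby(['A', 'B']).sum().reset_index()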

Reshaping

See the sections on Hierarchical Indexing and Reshaping.

Stack


In [ ]:
tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
                     'foo', 'foo', 'qux', 'qux'],
                    ['one', 'two', 'one', 'two',
                     'one', 'two', 'one', 'two']]))

index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])

df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])
df2 = df[:4]
df2

The stack() method “compresses” a level in the DataFrame’s columns.


In [ ]:
stacked = df2.stack()
stacked

With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level:


In [ ]:
stacked.unstack()

In [ ]:
stacked.unstack(1)

In [ ]:
stacked.unstack(0)

Pivot Tables

See the section on Pivot Tables.


In [ ]:
df = pd.DataFrame({'A': ['one', 'one', 'two', 'three'] * 3,
                   'B': ['A', 'B', 'C'] * 4,
                   'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
                   'D': np.random.randn(12),
                   'E': np.random.randn(12)})

df

We can produce pivot tables from this data very easily:


In [ ]:
pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])

Time Series

pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications. See the Time Series section


In [ ]:
rng = pd.date_range('1/1/2012', periods=100, freq='S')
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
ts.resample('5Min').sum()

Sample output (the totals are random):

2012-01-01    25083
Freq: 5T, dtype: int64

Time zone representation


In [ ]:
rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')
ts = pd.Series(np.random.randn(len(rng)), rng)
ts

In [ ]:
ts_utc = ts.tz_localize('UTC')
ts_utc

Convert to another time zone


In [ ]:
ts_utc.tz_convert('US/Eastern')

Converting between time span representations


In [ ]:
rng = pd.date_range('1/1/2012', periods=5, freq='M')
ts = pd.Series(np.random.randn(len(rng)), index=rng)
ts

In [ ]:
ps = ts.to_period()
ps

In [ ]:
ps.to_timestamp()

Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:


In [ ]:
prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
ts = pd.Series(np.random.randn(len(prng)), prng)
ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9
ts.head()

Categoricals

Since version 0.15, pandas can include categorical data in a DataFrame. For full docs, see the categorical introduction and the API documentation.


In [ ]:
df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})

Convert the raw grades to a categorical data type.


In [ ]:
df["grade"] = df["raw_grade"].astype("category")
df["grade"]

Expected output:

0    a
1    b
2    b
3    a
4    a
5    e
Name: grade, dtype: category
Categories (3, object): [a, b, e]

Rename the categories to more meaningful names (assigning to Series.cat.categories operates in place!)


In [ ]:
df["grade"].cat.categories = ["very good", "good", "very bad"]

Reorder the categories and simultaneously add the missing categories (methods under Series.cat return a new Series by default).


In [ ]:
df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
df["grade"]

Sorting is by the order of the categories, not lexical order.


In [ ]:
df.sort_values(by="grade")

Expected output:

   id raw_grade      grade
5   6         e   very bad
1   2         b       good
2   3         b       good
0   1         a  very good
3   4         a  very good
4   5         a  very good

Grouping by a categorical column also shows empty categories.


In [ ]:
df.groupby("grade").size()

Plotting

Plotting docs.


In [ ]:
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()

In [ ]:
%matplotlib inline
ts.plot()

On DataFrame, plot() is a convenience to plot all of the columns with labels:


In [ ]:
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index,
                  columns=['A', 'B', 'C', 'D'])

df = df.cumsum()

%matplotlib inline
plt.figure(); df.plot(); plt.legend(loc='best')

Getting Data In/Out

CSV

Writing to a csv file


In [ ]:
df.to_csv('foo.csv')

In [ ]:
pd.read_csv('foo.csv')
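
Because to_csv also wrote the index, reading the file back as above leaves the dates in an unnamed first column; passing index_col (and parse_dates) restores the original frame (a small sketch):


In [ ]:
pd.read_csv('foo.csv', index_col=0, parse_dates=True)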

HDF5

Reading and writing to HDFStores

Writing to a HDF5 Store


In [ ]:
## df.to_hdf('foo.h5', 'df')   # commented out: requires the PyTables package

Reading from a HDF5 Store


In [ ]:
## pd.read_hdf('foo.h5', 'df')

Excel

Reading and writing to MS Excel

Writing to an excel file


In [ ]:
df.to_excel('foo.xlsx', sheet_name='Sheet1')

Reading from an excel file


In [ ]:
pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])

Gotchas

If you are trying an operation and you see an exception like:

    
>>> if pd.Series([False, True, False]):
...     print("I was true")
Traceback (most recent call last):
    ...
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().

See Comparisons for an explanation and what to do.
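
The usual fix is to reduce the Series to a single boolean explicitly (a minimal sketch):


In [ ]:
s = pd.Series([False, True, False])
s.any()    # True  -- is at least one element True?
s.all()    # False -- are all elements True?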

See Gotchas as well.


In [ ]:


In [ ]:


In [ ]: