pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. It is already well on its way toward this goal.
This is a short introduction to pandas, geared mainly for new users. You can see more complex recipes in the Cookbook. The original document is posted at http://pandas.pydata.org/pandas-docs/stable/10min.html
Customarily, we import as follows:
In [1]:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
See the Data Structure Intro section.
When we worked with NumPy arrays, one limitation was that an ndarray can only hold one type of data (there are ways of getting around that, but they take away some of the nice advantages of NumPy!). More often than not, real-world data sets contain mixed data, with strings, dates, and numerical values in each "element". That is where pandas DataFrames show their real advantage.
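To see the contrast concretely, here is a minimal sketch (the frame and column names are purely illustrative): an ndarray coerces mixed data to one common dtype, while a DataFrame keeps a separate dtype per column.

```python
import numpy as np
import pandas as pd

# An ndarray holds a single dtype; mixing types forces a common denominator:
arr = np.array([1, "2017-01-01", 3.5])
print(arr.dtype)  # NumPy coerces everything to one (string-like) dtype here

# A DataFrame keeps a separate dtype per column:
mixed = pd.DataFrame({"name": ["a", "b"],
                      "when": pd.to_datetime(["2017-01-01", "2017-06-01"]),
                      "value": [3.5, 4.0]})
print(mixed.dtypes)
```

Each column retains its natural type (`object`, `datetime64`, `float64`), which is what makes mixed real-world data pleasant to work with.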
Pandas has Series (1-dimensional labeled arrays) and DataFrames (2-dimensional tables with column names and row labels).
We'll start off by looking at Series.
Creating a Series
by passing a list of values, letting pandas create a default integer index:
In [4]:
s = pd.Series([1,3,5,np.nan,6,8], index=[x**2 for x in range(6)])
s
Out[4]:
Creating a DataFrame
by passing a numpy array, with a datetime index and labeled columns:
In [5]:
dates = pd.date_range('20130101', periods=6, freq='M')
dates
Out[5]:
In [6]:
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=['Ann', 'Bob', 'Charly', 'Don'])
## columns=list('ABCD'))
df
Out[6]:
You can operate on the columns:
In [7]:
df.Charly+df.Don
Out[7]:
This example illustrates some objects that can be put into DataFrames (with dtype specification):
In [8]:
df2 = pd.DataFrame({ 'A' : 1.,
'B' : pd.Timestamp('20130102'),
'C' : pd.Series(1,index=list(['a','b','c','d']),dtype='float32'),
'D' : np.array([3] * 4,dtype='int32'),
'E' : pd.Categorical(["test","train","test","train"]),
'F' : 'foo' })
df2
Out[8]:
In [9]:
df2.dtypes
Out[9]:
Another simple example:
In [10]:
df3 = pd.DataFrame(np.random.rand(3,4), columns=['ValuesA','ValuesB','C','D'], index=[x**2 for x in range(3)])
df3['CplusD'] = df3.C+df3.D
df3
Out[10]:
If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled. Here’s a subset of the attributes that will be completed:
In [13]: df2.<TAB>
df2.A                  df2.boxplot
df2.abs                df2.C
df2.add                df2.clip
df2.add_prefix         df2.clip_lower
df2.add_suffix         df2.clip_upper
df2.align              df2.columns
df2.all                df2.combine
df2.any                df2.combineAdd
df2.append             df2.combine_first
df2.apply              df2.combineMult
df2.applymap           df2.compound
df2.as_blocks          df2.consolidate
df2.asfreq             df2.convert_objects
df2.as_matrix          df2.copy
df2.astype             df2.corr
df2.at                 df2.corrwith
df2.at_time            df2.count
df2.axes               df2.cov
df2.B                  df2.cummax
df2.between_time       df2.cummin
df2.bfill              df2.cumprod
df2.blocks             df2.cumsum
df2.bool               df2.D

As you can see, the columns A, B, C, and D are automatically tab completed. E is there as well; the rest of the attributes have been truncated for brevity.
In [12]:
df.head(3)
Out[12]:
In [13]:
df.tail(3)
Out[13]:
In [14]:
df.index
Out[14]:
In [15]:
df.columns
Out[15]:
In [16]:
df.values
Out[16]:
In [17]:
df.describe()
Out[17]:
In [18]:
df2.columns
Out[18]:
In [19]:
df2.dtypes
Out[19]:
In [20]:
df2.describe()
Out[20]:
Transposing your data
In [21]:
df.T
Out[21]:
Sorting by an axis
In [23]:
df.sort_index(axis=0, ascending=False) # axis = 0 => sort rows, 1 => sort columns
Out[23]:
Sorting by values
In [27]:
df.sort_index(axis=0).sort_values(by=['Bob','Ann'], ascending=True)
Out[27]:
Note: While standard Python / NumPy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code we recommend the optimized pandas data access methods: .at, .iat, .loc, .iloc and .ix (.ix has since been deprecated in favor of .loc and .iloc).
See the indexing documentation Indexing and Selecting Data and MultiIndex / Advanced Indexing.
Basically: loc is for label-based indexing; iloc is for indexing by row number; ix first tries to use labels, and if that fails, it goes to integer row-number indexing. These methods can all be used to select several elements at once. By contrast, at provides fast label-based indexing for a single element, and iat provides fast row-number indexing for a single element.
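A minimal side-by-side sketch of the four accessors (the frame mirrors the tutorial's shape; the `np.arange` values are illustrative so the result is predictable):

```python
import numpy as np
import pandas as pd

dates = pd.date_range('20130101', periods=6, freq='M')
df = pd.DataFrame(np.arange(24).reshape(6, 4), index=dates,
                  columns=['Ann', 'Bob', 'Charly', 'Don'])

a = df.loc[dates[0], 'Ann']    # label-based, can select many elements
b = df.iloc[0, 0]              # position-based, can select many elements
c = df.at[dates[0], 'Ann']     # label-based, single scalar, fastest
d = df.iat[0, 0]               # position-based, single scalar, fastest
print(a, b, c, d)              # all four address the same cell
```

All four return the same value here; the difference is whether you address cells by label or by position, and whether you need one scalar or a slice.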
Selecting a single column yields a Series, equivalent to df.Ann:
In [29]:
df.Ann
Out[29]:
In [28]:
df['Ann']
Out[28]:
Selecting via [], which slices the rows.
In [30]:
df[1:3]
Out[30]:
In [33]:
df['20130228':'20130331']
Out[33]:
In [34]:
dates
Out[34]:
In [35]:
dates[0]
Out[35]:
In [36]:
df
Out[36]:
In [38]:
df.loc[dates[0:3]]
Out[38]:
Selecting on a multi-axis by label
In [39]:
df.loc[:,['Ann','Bob']]
Out[39]:
Showing label slicing, both endpoints are included
In [40]:
df.loc['20130131':'20130430',['Ann','Bob']]
Out[40]:
Reduction in the dimensions of the returned object
In [43]:
type(df.loc['20130131',['Ann','Bob']])
Out[43]:
For getting a scalar value
In [44]:
%time df.loc[dates[0],'Ann']
Out[44]:
For getting fast access to a scalar (equiv to the prior method)
In [45]:
%time df.at[dates[0],'Ann']
Out[45]:
See more in Selection by Position
Select via the position of the passed integers
In [46]:
df
Out[46]:
In [47]:
df.iloc[3]
Out[47]:
By integer slices, acting similar to numpy/python
In [48]:
df.iloc[3:5,0:2]
Out[48]:
By lists of integer position locations, similar to the numpy/python style
In [49]:
df.iloc[[1,2,4],[0,2]]
Out[49]:
For slicing rows explicitly
In [50]:
df.iloc[1:3,:]
Out[50]:
For slicing columns explicitly
In [51]:
df.iloc[:,1:3]
Out[51]:
For getting a value explicitly
In [52]:
%time print(df.iloc[1,1])
#For getting fast access to a scalar (equiv to the prior method)
%time print(df.iat[1,1])
In [ ]:
# in-class exercise:
# Show all values of "Bob" and "Don" for March-May 2013.
In [58]:
df
Out[58]:
In [60]:
dates[np.logical_and(dates>='20130301', dates<'20130601')]
Out[60]:
In [67]:
df.loc[dates[np.logical_and(dates>='20130301', dates<'20130601')], ['Bob','Don']]  # .ix is deprecated; .loc does label-based selection
Out[67]:
In [69]:
df.Ann
Out[69]:
In [70]:
flt = (df.Ann >= 0) & (df.Ann < 1.5)
flt
Out[70]:
In [71]:
df[flt]
Out[71]:
In [72]:
df[(df.Ann >= 0) & (df.Ann < 1.5)]
Out[72]:
A where operation for getting.
In [75]:
df5=df[df > 0]
df5
Out[75]:
In [76]:
df5.fillna(99999)
Out[76]:
Using the isin() method for filtering:
In [77]:
df2 = df.copy()
df2['E'] = ['one', 'one','two','three','four','three']
df2
Out[77]:
In [78]:
df
Out[78]:
In [79]:
df2[df2['E'].isin(['two','three'])]
Out[79]:
In [ ]:
# in-class exercise:
# Display all rows of df where Ann > Charly.
In [80]:
df[df.Ann>df.Charly]
Out[80]:
In [81]:
s1 = pd.Series([1,2,3,4,5,6], index=pd.date_range('20130102', periods=6))
s1
Out[81]:
In [82]:
df['F'] = s1
df['G'] = df['Ann']-df['Bob']
df
Out[82]:
In [ ]:
# in-class exercise: how would you change the index in s1 so that we get values for F in df?
#(note: that's probably not a good way to manipulate real data!)
In [84]:
s1.index = df.index
In [85]:
df['F'] = s1
df
Out[85]:
Setting values by label
In [86]:
df.at[dates[0],'Ann'] = 17.6
df
Out[86]:
Setting values by position
In [87]:
df.iat[5,2] = 349
df
Out[87]:
Setting by assigning with a numpy array
In [88]:
df.loc[:,'Fives'] = np.array([5] * len(df))
df
Out[88]:
A where operation with setting.
In [89]:
df2 = df.copy()
df2[df2 < 0] = -df2
df2
Out[89]:
pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations.
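A quick sketch of what "not included in computations" means in practice:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
print(s.sum())    # the NaN is simply skipped
print(s.mean())   # averaged over the two non-missing values only
print(s.count())  # count also ignores NaN
```

So aggregations give sensible answers over the observed values instead of propagating NaN everywhere, as plain NumPy reductions would.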
See the Missing Data section
Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.
In [95]:
df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
df1
Out[95]:
In [96]:
df1.loc[dates[0]:dates[1],'E'] = 1
df1
Out[96]:
To drop any rows that have missing data.
In [97]:
df1.dropna(how='any')
Out[97]:
Filling missing data
In [98]:
df1
Out[98]:
In [99]:
df1.fillna(value=5)
Out[99]:
To get the boolean mask where values are nan
In [100]:
pd.isnull(df1)
Out[100]:
In [108]:
df.loc[dates[:3], 'Ann'] = [0, 1, 2]  # set_value() has been removed in modern pandas; .loc assignment does the same
df
Out[108]:
See the Basic section on Binary Ops
Operations in general exclude missing data.
Performing a descriptive statistic
In [110]:
df.mean(0)
Out[110]:
Same operation on the other axis
In [111]:
df.mean(1)
Out[111]:
Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.
In [109]:
s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(2)
s
Out[109]:
In [ ]:
df.sub(s, axis='index')
In [ ]:
df
In [112]:
df.apply(np.cumsum)
Out[112]:
In [113]:
df
Out[113]:
In [114]:
df.max() ##df.apply(max)
Out[114]:
In [115]:
df.Ann.max()-df.Ann.min()
Out[115]:
In [116]:
df.apply(lambda x: x.max() - x.min())
Out[116]:
See more at Histogramming and Discretization
In [ ]:
s = pd.Series(np.random.randint(0, 7, size=10))
s
In [ ]:
s.value_counts()
Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.
In [ ]:
s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
s.str.lower()
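Beyond simple transforms like lower(), the str methods accept regular expressions by default. A small sketch on the same Series (the `^[AB]` pattern is illustrative):

```python
import numpy as np
import pandas as pd

s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
# str.contains treats its pattern as a regular expression by default:
mask = s.str.contains('^[AB]', na=False)   # entries starting with A or B
print(s[mask].tolist())
```

Note the `na=False` argument: without it the missing entry would produce NaN in the mask instead of a clean boolean.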
pandas provides various facilities for easily combining Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
See the Merging section
Concatenating pandas objects together with concat()
:
In [ ]:
df = pd.DataFrame(np.random.randn(10, 4))
df
In [ ]:
# break it into pieces
pieces = [df[:3], df[3:7], df[7:]]
print("pieces:\n", pieces)
print("put back together:\n")
pd.concat(pieces)
SQL-style merges. See the Database-style joining section.
In [117]:
left = pd.DataFrame({'key': ['foo', 'boo', 'foo'], 'lval': [1, 2, 3]})
#right = pd.DataFrame({'key': ['boo', 'foo', 'foo'], 'rval': [4, 5, 6]})
right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [5, 6]})
In [118]:
left
Out[118]:
In [119]:
right
Out[119]:
In [121]:
pd.merge(left, right, on='key', how='right')
Out[121]:
Append rows to a DataFrame. See the Appending section.
In [ ]:
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
df
In [ ]:
s = df.iloc[3]
df.append(s, ignore_index=True)  # note: append() was removed in pandas 2.0; use pd.concat([df, s.to_frame().T], ignore_index=True)
By “group by” we are referring to a process involving one or more of the following steps: splitting the data into groups based on some criteria, applying a function to each group independently, and combining the results into a data structure.
See the Grouping section
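The three steps can be sketched by hand to see what groupby is doing under the hood (the toy frame and column names here are illustrative):

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar'],
                   'C': [1.0, 2.0, 3.0, 4.0]})

# split: iterating a groupby yields one sub-frame per distinct key in A
groups = {key: sub for key, sub in df.groupby('A')}

# apply: aggregate each piece independently
sums = {key: sub['C'].sum() for key, sub in groups.items()}
print(sums)

# combine: the one-liner performs all three steps at once
print(df.groupby('A')['C'].sum())
```

The one-liner and the manual dict agree; in real use you let groupby do all three steps.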
In [122]:
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
'B' : ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
'C' : np.random.randn(8),
'D' : np.random.randn(8)})
df
Out[122]:
Grouping and then applying a function (here mean) to the resulting groups.
In [124]:
df.groupby('A').mean()
Out[124]:
Grouping by multiple columns forms a hierarchical index, to which we can again apply a function.
In [125]:
df.groupby(['A','B']).sum()
Out[125]:
In [ ]:
tuples = list(zip(*[['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
index
In [ ]:
df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])
df
In [ ]:
df2 = df[:4]
df2
The stack() method “compresses” a level in the DataFrame’s columns.
In [ ]:
stacked = df2.stack()
stacked
With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level:
In [ ]:
stacked.unstack()
In [ ]:
stacked.unstack(1)
In [ ]:
stacked.unstack(0)
In [ ]:
df = pd.DataFrame({'ModelNumber' : ['one', 'one', 'two', 'three'] * 3,
'Submodel' : ['A', 'B', 'C'] * 4,
'Type' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
'Xval' : np.random.randn(12),
'Yval' : np.random.randn(12)})
df
We can produce pivot tables from this data very easily:
In [ ]:
pd.pivot_table(df, values='Xval', index=['ModelNumber', 'Submodel'], columns=['Type'])
pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications. See the Time Series section
In [ ]:
rng = pd.date_range('1/1/2012', periods=100, freq='S')
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
ts.resample('5Min').sum()
Out[105]:
2012-01-01    25083
Freq: 5T, dtype: int64
Time zone representation
In [ ]:
rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')
ts = pd.Series(np.random.randn(len(rng)), rng)
ts
In [ ]:
ts_utc = ts.tz_localize('UTC')
ts_utc
Convert to another time zone
In [ ]:
ts_utc.tz_convert('US/Eastern')
Converting between time span representations
In [ ]:
rng = pd.date_range('1/1/2012', periods=5, freq='M')
ts = pd.Series(np.random.randn(len(rng)), index=rng)
ts
In [ ]:
ps = ts.to_period()
ps
In [ ]:
ps.to_timestamp()
Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:
In [ ]:
prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
ts = pd.Series(np.random.randn(len(prng)), prng)
ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9
ts.head()
Since version 0.15, pandas can include categorical data in a DataFrame. For full docs, see the categorical introduction and the API documentation.
In [ ]:
df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
Convert the raw grades to a categorical data type.
In [ ]:
df["grade"] = df["raw_grade"].astype("category")
df["grade"]
Out[124]:
0    a
1    b
2    b
3    a
4    a
5    e
Name: grade, dtype: category
Categories (3, object): [a, b, e]
Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace!)
In [ ]:
df["grade"].cat.categories = ["very good", "good", "very bad"]
Reorder the categories and simultaneously add the missing categories (methods under Series.cat return a new Series per default).
In [ ]:
df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
df["grade"]
Sorting is per order in the categories, not lexical order.
In [ ]:
df.sort_values(by="grade")
Out[128]:
   id raw_grade      grade
5   6         e   very bad
1   2         b       good
2   3         b       good
0   1         a  very good
3   4         a  very good
4   5         a  very good
Grouping by a categorical column also shows empty categories.
In [ ]:
df.groupby("grade").size()
In [ ]:
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
In [ ]:
%matplotlib inline
ts.plot()
On DataFrame, plot() is a convenience to plot all of the columns with labels:
In [ ]:
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index,
                  columns=['A', 'B', 'C', 'D'])
df = df.cumsum()
%matplotlib inline
plt.figure(); df.plot(); plt.legend(loc='best')
In [ ]:
df.to_csv('foo.csv')
In [ ]:
pd.read_csv('foo.csv')
In [ ]:
## df.to_hdf('foo.h5','df')
Reading from an HDF5 store
In [ ]:
## pd.read_hdf('foo.h5','df')
In [ ]:
df.to_excel('foo.xlsx', sheet_name='Sheet1')
Reading from an Excel file
In [ ]:
pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
If you are trying an operation and you see an exception like:
>>> if pd.Series([False, True, False]):
...     print("I was true")
Traceback
    ...
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
See Comparisons for an explanation and what to do.
See Gotchas as well.
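A minimal sketch of the fix: catch the ambiguity and then state explicitly which reduction you mean.

```python
import pandas as pd

s = pd.Series([False, True, False])

# `if s:` raises ValueError because truthiness of a Series is ambiguous:
try:
    if s:
        pass
except ValueError as e:
    print("ambiguous:", e)

# Say explicitly what you mean instead:
print(s.any())   # is at least one element True?
print(s.all())   # are all elements True?
print(s.empty)   # does the Series have no elements?
```

Each of the three methods answers a different yes/no question, which is exactly why pandas refuses to guess for you.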