Last updated: June 29th 2016
Welcome to a demo of Python's data analysis package called Pandas. Our goal is to learn about data analysis and transformation using Pandas while exploring datasets used to analyze climate change.
The broader goal of this demo is to provide the tools needed to try and reproduce some of the analysis done in the IPCC global climate reports published in the last decade (see for example https://www.ipcc.ch/pdf/assessment-report/ar5/syr/SYR_AR5_FINAL_full.pdf).
We are first going to load a few public datasets containing information about global temperature, global and local sea levels, and the global concentration of greenhouse gases like CO2, to see if there are correlations and how the trends are likely to evolve, assuming no fundamental change in the system. For all these datasets, we will download them, visualize them, clean them, search through them, merge them, resample them, transform them and summarize them.
In the process, we will learn about:
Part 1:
1. Loading data
2. Pandas datastructures
3. Cleaning and formatting data
4. Basic visualization
Part 2:
5. Accessing data
6. Working with dates and times
7. Transforming datasets
8. Statistical analysis
9. Data aggregation and summarization
10. Correlations and regressions
11. Predictions from auto regression models
In [ ]:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.set_option("display.max_rows", 16)
LARGE_FIGSIZE = (12, 8)
In [ ]:
# Change this cell to the demo location on YOUR machine
%cd ~/Projects/pandas_tutorial/climate_timeseries/
%ls
In [ ]:
with pd.HDFStore("all_data.h5") as store:
    giss_temp = store["/temperatures/giss"]
    full_globe_temp = store["/temperatures/full_globe"]
    mean_sea_level = store["/sea_level/mean_sea_level"]
    local_sea_level_stations = store["/sea_level/stations"]
The general philosophy for accessing values inside a Pandas datastructure is that, unlike a NumPy array, which only allows indexing with integers, a Series allows indexing with the values inside its index. That makes the code more readable.
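As a small illustration of this difference (a sketch using a tiny made-up series, not one of our datasets):
In [ ]:
# Hypothetical example: a series indexed by year labels rather than by position
s = pd.Series([0.1, 0.2, 0.3], index=[1880, 1881, 1882])
print(s[1880])       # [] looks up the label 1880, not position number 1880
print(s.values[0])   # the underlying NumPy array is still indexed by position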
In [ ]:
full_globe_temp
In [ ]:
# This index is non-trivial though (will talk more about these datetime objects further down):
full_globe_temp.index.dtype
In [ ]:
first_date = full_globe_temp.index[0]
first_date == pd.to_datetime('1880')
In [ ]:
# By default [] on a series accesses values using the index, not the location in the series
print(full_globe_temp[pd.to_datetime('1880')])
# print(full_globe_temp[0]) # This would fail!!
In [ ]:
# A more explicit way to access values by their index label is to use .loc:
print(full_globe_temp.loc[pd.to_datetime('1990')])
print(full_globe_temp.iloc[0], full_globe_temp.iloc[-1])
In [ ]:
# Year of the last record?
full_globe_temp.index[-1]
In [ ]:
# New records can be added:
full_globe_temp[pd.to_datetime('2011')] = np.nan
In [ ]:
# In 2D, same idea, though in a DF [] accesses columns (Series)
giss_temp["Jan"]
In [ ]:
# while .loc and .iloc allow accessing individual values, slices or masked selections:
print(giss_temp.loc[1979, "Dec"])
In [ ]:
# the loc operators support fancy indexing:
print(giss_temp.loc[1979, ["Nov", "Dec"]])
In [ ]:
# Slicing can be done with .loc and .iloc
print(giss_temp.loc[1979, "Jan":"Jun"]) # Note that the end point is included unlike NumPy!!!
print(giss_temp.loc[1979, ::2])
In [ ]:
# Masking can also be used in one or more dimensions. For example, another way to grab every other month for the first year:
mask = [True, False] * 6
print(giss_temp.iloc[0, mask])
print(giss_temp.loc[1880, mask])
In [ ]:
# We could also add a new column like a new entry in a dictionary
giss_temp["totals"] = giss_temp.sum(axis=1)
giss_temp
In [ ]:
# Let's remove this new column, we will learn to do this differently
giss_temp = giss_temp.drop("totals", axis=1)
More complex queries rely on the same concepts. For example what are the names, and IDs of the sea level stations in the USA?
In [ ]:
local_sea_level_stations.columns
In [ ]:
american_stations = local_sea_level_stations["Country"] == "USA"
local_sea_level_stations.loc[american_stations, ["ID", "Station Name"]]
EXERCISE: Print all European countries that have sea level stations. For now, we will define a European country as one that has a station with a latitude between 30 and 70 and a longitude between -10 and 40. You will need to combine masks using the & (and) and/or the | (or) operators, just like in NumPy.
Bonus: print each country only once.
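If you get stuck, here is one possible sketch (not the only solution); it uses the Lat, Lon and Country columns that appear later in this notebook:
In [ ]:
# Combine the latitude and longitude masks with & (and); .unique() handles the bonus
in_europe = ((local_sea_level_stations["Lat"] > 30) & (local_sea_level_stations["Lat"] < 70) &
             (local_sea_level_stations["Lon"] > -10) & (local_sea_level_stations["Lon"] < 40))
print(local_sea_level_stations.loc[in_europe, "Country"].unique())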
In [ ]:
giss_temp
In [ ]:
giss_temp['Jan']
In [ ]:
# This works right now, but it is dangerous: chained indexing like this may silently operate on a copy:
giss_temp['Jan'][1880] = -33.9
In [ ]:
giss_temp
In [ ]:
# This is the safe way to do it:
giss_temp.loc[1880, 'Jan'] = -33.9
More details at http://pandas.pydata.org/pandas-docs/stable/timeseries.html
Let's work some more with full_globe_temp's index since we saw it is special.
In [ ]:
# Its dtype is NumPy's new 'datetime64[ns]':
full_globe_temp.index.dtype
The advantage of having a real datetime index is that operations can be done on it efficiently. Let's add a flag to signal whether each value is before or after the Great Depression's Black Tuesday (the stock market crash of October 29, 1929):
In [ ]:
black_friday = pd.to_datetime('1929-10-29')
full_globe_temp.index > black_friday
In [ ]:
# Convert its index from timestamps to periods: that is more meaningful since each value was measured and averaged over the whole year...
full_globe_temp.index = full_globe_temp.index.to_period()
full_globe_temp
See also to_timestamp to convert back to timestamps, and its how argument to control where inside the period the timestamp is placed.
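For example, a minimal sketch (assuming the index was converted to periods in the cell above):
In [ ]:
# Convert the yearly periods back to timestamps placed at the start of each period:
full_globe_temp.index.to_timestamp(how='start')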
Another thing that can be done is to resample the series, either downsampling or upsampling. Let's see the series upsampled to a monthly frequency and downsampled into 10-year blocks:
In [ ]:
# Frequencies can be specified as strings: "us", "ms", "S", "T", "H", "D", "B", "W", "M", "A", "3min", "2h20min", ...
# More aliases at http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases
full_globe_temp.resample("M").mean()
In [ ]:
full_globe_temp.resample("10A").mean()
The index for giss_temp isn't made of datetime objects, so we may want to generate such DatetimeIndex objects. This can be done with date_range and period_range:
In [ ]:
# Can specify a start date and a number of values desired. By default it will assume an interval of 1 day:
pd.date_range('1/1/1880', periods=4)
In [ ]:
# Can also specify a start and a stop date, as well as a frequency
pd.date_range('1/1/1880', '1/1/2016', freq="A")
Note that "A"
by default means the end of the year. Other times in the year can be specified with "AS"
(start), "A-JAN"
or "A-JUN"
. Even more options can be imported from pandas.tseries.offsets
:
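For instance, a quick sketch of a few anchored yearly frequencies (dates chosen arbitrarily):
In [ ]:
# Year-start ("AS") versus a year anchored at the end of June ("A-JUN"):
print(pd.date_range('1/1/1880', periods=3, freq="AS"))
print(pd.date_range('1/1/1880', periods=3, freq="A-JUN"))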
In [ ]:
from pandas.tseries.offsets import YearBegin
pd.date_range('1/1/1880', '1/1/2015', freq=YearBegin())
Actually, we will convert that dataset to a 1D dataset and index it monthly, so let's build a monthly period index:
In [ ]:
giss_temp_index = pd.period_range('1/1/1880', '12/1/2015', freq="M")
giss_temp_index
Let's look at our local_sea_level_stations dataset some more, to learn more about it and also do some formatting. What is the range of dates and latitudes we have, the list of countries, the range of variations, ...?
In [ ]:
# What about the range of dates?
local_sea_level_stations["Date"].min(), local_sea_level_stations["Date"].max(), local_sea_level_stations["Date"].iloc[-1]
In [ ]:
local_sea_level_stations.dtypes
We don't see the range of dates because the dates have dtype "object" (usually meaning strings). Let's convert them using apply:
In [ ]:
local_sea_level_stations["Date"].apply(pd.to_datetime)
This apply method is very powerful and general. We have used it to do something we could have done with astype, but any custom function can be provided to apply.
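As a sketch of a custom function (hypothetical, just for illustration), we could flag the stations updated since 2000:
In [ ]:
# apply accepts any function operating on a single value, e.g. a lambda:
local_sea_level_stations["Date"].apply(lambda value: pd.to_datetime(value).year >= 2000)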
In [ ]:
local_sea_level_stations["Date"] = local_sea_level_stations["Date"].apply(pd.to_datetime)
# Now we can really compare the dates, and therefore get a real range:
print(local_sea_level_stations["Date"].min(), local_sea_level_stations["Date"].max())
EXERCISE: Use the apply method to search through the station names for a station in New York. What is the ID of the station?
In [ ]:
# Your code here
Now that we know the range of dates, we can sort the data by date; this is done with sort_values:
In [ ]:
local_sea_level_stations.sort_values(by="Date")
Since many stations were last updated on the same dates, it is logical to want to sort further, for example by Country within a given date:
In [ ]:
local_sea_level_stations.sort_values(by=["Date", "Country"], ascending=False)
Let's look at the GISS dataset differently. Instead of seeing the months along axis 1 and the years along axis 0, it could be good to convert these into an outer and an inner level along a single time dimension.
Stacking and unstacking allow converting a dataframe into a series and vice versa:
In [ ]:
giss_temp.unstack?
unstacked = giss_temp.unstack()
unstacked
In [ ]:
# Note the nature of the result:
type(unstacked)
The result is grouped in the wrong order since the axis that was unstacked is sorted first. Another transformation that would help us is transposing...
In [ ]:
giss_temp.transpose()
In [ ]:
giss_temp_series = giss_temp.transpose().unstack()
giss_temp_series.name = "Temp anomaly"
giss_temp_series
In [ ]:
# Note the nature of the resulting index:
giss_temp_series.index
In [ ]:
# It is an index made of 2 levels. Let's fix the fact that one of them doesn't have a name:
giss_temp_series.index = giss_temp_series.index.set_names(["year", "month"])
In [ ]:
# We can now access deviations by specifying the year and month:
giss_temp_series[1980, "Jan"]
But this new multi-index isn't very good, because it is not viewed as one date, just as a tuple of values:
In [ ]:
giss_temp_series.plot(figsize=LARGE_FIGSIZE)
To improve on this, let's reuse the monthly index we generated above with period_range:
In [ ]:
giss_temp_series.index = giss_temp_index
giss_temp_series.plot(figsize=LARGE_FIGSIZE)
Let's go back to the dataframe version of the GISS temperature dataset temporarily to analyze anomalies month by month. Like most functions on a dataframe, stats functions are computed column by column. They also ignore missing values:
In [ ]:
monthly_averages = giss_temp.mean()
monthly_averages
It is possible to apply stats functions across rows instead of columns using the axis keyword (just like in NumPy).
In [ ]:
yearly_averages = giss_temp.mean(axis=1)
yearly_averages
describe provides many descriptive stats computed at once:
In [ ]:
mean_sea_level.describe()
Let's remove the high-frequency signal and extract the trend:
In [ ]:
full_globe_temp.plot()
rolled_series = full_globe_temp.rolling(window=10, center=False)
print(rolled_series)
rolled_series.mean().plot(figsize=LARGE_FIGSIZE)
In [ ]:
# To see all that can be done with a rolling window,
# rolled_series.<TAB>
Let's look at our local_sea_level_stations dataset some more:
In [ ]:
local_sea_level_stations.describe()
.describe() only displays information about continuous Series. What about categorical ones?
In [ ]:
local_sea_level_stations.columns
In [ ]:
local_sea_level_stations["Country"]
In [ ]:
local_sea_level_stations["Country"].describe()
In [ ]:
# List of unique values:
local_sea_level_stations["Country"].unique()
In [ ]:
local_sea_level_stations["Country"].value_counts()
In [ ]:
# To save memory, we can convert it to a categorical column:
local_sea_level_stations["Country"] = local_sea_level_stations["Country"].astype("category")
We can also create categorical series from continuous ones with the cut function:
In [ ]:
categorized = pd.cut(full_globe_temp, 3, labels=["L", "M", "H"])
categorized
In [ ]:
# The advantage is that we can use labels and control the order they should be treated in (L < M < H)
categorized.cat.categories
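Because the categories produced by cut are ordered, comparisons respect that order; a small sketch:
In [ ]:
# Comparisons use the category order (L < M < H), so this counts the "H" values:
print((categorized > "M").sum())
categorized.value_counts()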
QUIZ: How much memory did we save? What if it was categorized but with dtype object instead of category?
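One way to check this (a sketch; memory_usage(deep=True) counts the actual bytes used, including the strings):
In [ ]:
# Compare the categorical column with an object-dtype (string) version of it:
print(local_sea_level_stations["Country"].memory_usage(deep=True))
print(local_sea_level_stations["Country"].astype(object).memory_usage(deep=True))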
Now that we have a good grasp on our datasets, let's transform and analyze them some more to prepare them for comparison. The two functionalities to learn about here are groupby and pivot_table.
Let's explore the sea levels, first splitting into calendar years to compute average sea levels for each year:
In [ ]:
mean_sea_level
In [ ]:
mean_sea_level = mean_sea_level.reset_index()
mean_sea_level
In [ ]:
# Groupby with pandas can be done on a column or by applying a custom function to the index.
# If we want to group the data by year, we can build a year column into the DF:
mean_sea_level["year"] = mean_sea_level["date"].apply(int)
mean_sea_level
In [ ]:
sl_grouped_year = mean_sea_level.groupby("year")
What kind of object did we create?
In [ ]:
type(sl_grouped_year)
What to do with that strange GroupBy object? We can first loop over it to get the labels and the sub-dataframes for each group:
In [ ]:
for group_name, subdf in sl_grouped_year:
    print(group_name)
    print(subdf)
    print("")
We could have done the same with less effort by grouping by the result of a custom function applied to the index. Let's reset the dataframe:
In [ ]:
mean_sea_level = mean_sea_level.drop(["year"], axis=1).set_index("date")
So that we can do the groupby on the index:
In [ ]:
sl_grouped_year = mean_sea_level.groupby(int)
Something else that can be done with such an object is to look at its groups attribute to see the labels mapped to the rows involved:
In [ ]:
sl_grouped_year.groups
How to aggregate the results of this grouping depends on what we want to see: do we want the values averaged over each year? That is so common that it has been implemented directly as a method on the GroupBy object.
In [ ]:
sl_grouped_year.mean()
In [ ]:
# We can apply any other reduction function or even a dict of functions using aggregate:
sl_grouped_year.aggregate({"mean_global": np.std})
Another possibility is to transform each group separately, rather than aggregate. For example, here we group over decades and subtract from each value the average over that decade:
In [ ]:
sl_grouped_decade = mean_sea_level.groupby(lambda x: int(x/10.))
sl_grouped_decade.groups.keys()
In [ ]:
sl_grouped_decade.transform(lambda subframe: subframe - subframe.mean())
Pivot tables also allow summarizing the information, converting repeating column values into axes. For example, let's say that we would like to know how many sea level stations are in various European countries, and we would like to group the answers into 2 categories: the stations that have been updated recently (after 2000) and the others.
Let's first extract only entries located (roughly) in Europe.
In [ ]:
european_filter = ((local_sea_level_stations["Lat"] > 30) &
(local_sea_level_stations["Lat"] < 70) &
(local_sea_level_stations["Lon"] > -10) &
(local_sea_level_stations["Lon"] < 40)
)
# Let's make a copy to work with a new, clean block of memory
# (if you are interested, try and remove the copy to see the consequences further down...)
european_stations = local_sea_level_stations[european_filter].copy()
european_stations["Country"].unique()
The columns of our future table should have 2 values, whether the station was updated recently or not. Let's build a column to store that information:
In [ ]:
european_stations["Recently updated"] = european_stations["Date"] > pd.to_datetime("2000")
Finally, what value will be displayed inside the table? The values are extracted from a column, and pivot_table allows an aggregation function to be applied when more than one value is found for a given cell. Each station should count for 1, and multiple stations can be aggregated by summing these counts:
In [ ]:
european_stations["Number of stations"] = np.ones(len(european_stations))
In [ ]:
european_stations.sort_values(by="Country")
In [ ]:
station_counts = pd.pivot_table(european_stations, index="Country", columns="Recently updated",
values="Number of stations", aggfunc=np.sum)
# Let's remove from the table the countries for which no station was found:
station_counts.dropna(how="all")
QUIZ: Why are there still some countries with no entries?
EXERCISE: How many recently updated stations? Not recently updated stations? Which country has the most recently updated stations?
Bonus: Which country has the most stations?
In [ ]:
# Your code here
EXERCISE: How would we build the same dataframe with a groupby operation?
In [ ]:
# Your code here
Both Series and dataframes have a corr method to compute the correlation coefficient between series:
In [ ]:
# Let's see how the various sea levels are correlated with each other:
mean_sea_level["northern_hem"].corr(mean_sea_level["southern_hem"])
In [ ]:
# If series are already grouped into a DataFrame, computing all correlation coeff is trivial:
mean_sea_level.corr()
Note: by default, the method used is the Pearson correlation coefficient (https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient). Other methods are available (kendall, spearman) via the method kwarg.
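For example, a quick sketch using the rank-based Spearman coefficient instead:
In [ ]:
# Same correlation matrix, computed with the Spearman rank correlation:
mean_sea_level.corr(method="spearman")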
In [ ]:
# Visualize the correlation matrix
plt.imshow(mean_sea_level.corr(), interpolation="nearest")
In [ ]:
plt.yticks?
In [ ]:
# let's make it a little better to confirm that learning about global sea level cannot be done from just
# looking at stations in the northern hemisphere:
plt.imshow(mean_sea_level.corr(), interpolation="nearest")
plt.xticks(np.arange(3), mean_sea_level.corr().columns)
plt.yticks(np.arange(3), mean_sea_level.corr().index)
plt.colorbar()
The recommended way to build ordinary least squares regressions is by using statsmodels.
In [ ]:
import statsmodels.formula.api as sm
In [ ]:
sm_model = sm.ols(formula="mean_global ~ northern_hem + southern_hem", data=mean_sea_level).fit()
In [ ]:
sm_model.params
In [ ]:
type(sm_model.params)
In [ ]:
sm_model.summary()
In [ ]:
plt.figure(figsize=LARGE_FIGSIZE)
mean_sea_level["mean_global"].plot()
sm_model.fittedvalues.plot(label="OLS prediction")
plt.legend(loc="upper left")
Now, we would like to look for correlations between our monthly temperatures and the sea levels we have. For this to be possible, some data alignment must be done since the time scales are very different for the 2 datasets.
In [ ]:
mean_sea_level["mean_global"].index
In [ ]:
giss_temp_series.index
In [ ]:
import calendar
# Let's first convert the floating point dates in the sea level to timestamps:
def floating_year_to_timestamp(float_date):
    """ Convert a date as a floating point year number to a pandas timestamp object.
    """
    year = int(float_date)
    days_per_year = 366 if calendar.isleap(year) else 365
    remainder = float_date - year
    daynum = 1 + remainder * (days_per_year - 1)
    daynum = int(round(daynum))
    # Convert day number to month and day
    day = daynum
    month = 1
    while month < 13:
        month_days = calendar.monthrange(year, month)[1]
        if day <= month_days:
            return pd.Timestamp(str(year) + "/" + str(month) + "/" + str(day))
        day -= month_days
        month += 1
    raise ValueError('{} does not have {} days'.format(year, daynum))
In [ ]:
floating_year_to_timestamp(1996.0), floating_year_to_timestamp(1996.5), floating_year_to_timestamp(1996.9999)
In [ ]:
dt_index = pd.Series(mean_sea_level["mean_global"].index).apply(floating_year_to_timestamp)
dt_index
In [ ]:
mean_sea_level = mean_sea_level.reset_index(drop=True)
mean_sea_level.index = dt_index
mean_sea_level
Now, how do we align the 2 series? Is this one sampled regularly, so that the monthly temperatures can be upsampled to that frequency?
What is the frequency of that new index?
In [ ]:
dt_index.dtype
In [ ]:
# What is the frequency of the new index? The numpy way to compute differences between all values doesn't work:
dt_index[1:] - dt_index[:-1]
IMPORTANT Note: The above failure is due to the fact that operations between series automatically align them based on their index.
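To make this alignment behaviour concrete, here is a tiny sketch with two made-up series that only share part of their index:
In [ ]:
# Values are matched by index label, not by position; labels present on only one side give NaN:
s1 = pd.Series([1, 2, 3], index=[0, 1, 2])
s2 = pd.Series([10, 20, 30], index=[1, 2, 3])
s1 - s2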
In [ ]:
# There is a method for shifting values up/down the index:
dt_index.shift()
In [ ]:
# So the distances can be computed with
dt_index - dt_index.shift()
In [ ]:
# The readings are apparently not regularly spaced. Let's downsample the sea levels
# to a monthly frequency, like the temperature reads we have:
monthly_mean_sea_level = mean_sea_level.resample("MS").mean().to_period()
monthly_mean_sea_level
In [ ]:
monthly_mean_sea_level["mean_global"].align(giss_temp_series)
In [ ]:
giss_temp_series.align?
In [ ]:
# Now that the series use the same type and frequency of index, aligning them is trivial:
monthly_mean_sea_level["mean_global"].align(giss_temp_series, join='inner')
In [ ]:
aligned_sl, aligned_temp = monthly_mean_sea_level["mean_global"].align(giss_temp_series, join='inner')
aligned_df = pd.DataFrame({"mean_sea_level": aligned_sl, "mean_global_temp": aligned_temp})
The alignment can even be done on an entire dataframe:
In [ ]:
monthly_mean_sea_level.align(giss_temp_series, axis=0, join='inner')
In [ ]:
aligned_sea_levels, aligned_temp = monthly_mean_sea_level.align(giss_temp_series, axis=0, join='inner')
aligned_monthly_data = aligned_sea_levels.copy()
aligned_monthly_data["global_temp"] = aligned_temp
aligned_monthly_data
In [ ]:
aligned_monthly_data.plot(figsize=LARGE_FIGSIZE)
In [ ]:
aligned_monthly_data.corr()
In [ ]:
model = sm.ols("southern_hem ~ global_temp", data=aligned_monthly_data)
results = model.fit()
results.rsquared
What if we had done the analysis yearly instead of monthly to remove seasonal variations?
In [ ]:
aligned_yearly_data = aligned_monthly_data.resample("A").mean()
aligned_yearly_data.plot()
In [ ]:
aligned_yearly_data.corr()
In [ ]:
model = sm.ols("southern_hem ~ global_temp", data=aligned_yearly_data).fit()
model.rsquared
An auto-regressive model fits existing data and builds a (potentially predictive) model of the fitted data. We use the timeseries analysis (tsa) submodule of statsmodels to make out-of-sample predictions for the upcoming decades:
In [ ]:
from statsmodels.tsa.api import AR
In [ ]:
# Let's remove seasonal variations by resampling annually
data = giss_temp_series.resample("A").mean().to_timestamp()
ar_model = AR(data, freq='A')
ar_res = ar_model.fit(maxlag=60, disp=True)
In [ ]:
plt.figure(figsize=LARGE_FIGSIZE)
pred = ar_res.predict(start='1950-1-1', end='2070')
data.plot(style='k', label="Historical Data")
pred.plot(style='r', label="Predicted Data")
plt.ylabel("Temperature variation (0.01 degC)")
plt.legend()
EXERCISE: Make another auto-regression on the sea level of the Atlantic Ocean to estimate how much New York is going to flood in the coming century. The data can be found in data/sea_levels/sl_Atlantic_Ocean.txt. Bonus: find the ID of a New York station in the local_sea_level_stations dataset, and use it to download its timeseries (the URL would be http://www.psmsl.org/data/obtaining/met.monthly.data/< ID >.metdata).
In [ ]:
# Your code here