SC57 - Working with big, multi-dimensional geoscientific datasets in Python: a tutorial introduction to xarray

Original notebook by Stephan Hoyer, Rossbypalooza, 2016.
Modified by Edward Byers, Matthew Gidden and Fabien Maussion for EGU General Assembly 2017, Vienna, Austria

Thursday, 27th April, 15:30–17:00 / Room -2.91

Convenors

  • Dr Edward Byers - International Institute for Applied Systems Analysis, Laxenburg, Austria
  • Dr Matthew Gidden - International Institute for Applied Systems Analysis, Laxenburg, Austria
  • Dr Fabien Maussion - University of Innsbruck, Innsbruck, Austria

Structure of this tutorial

  1. Introduction to key features of xarray
  2. Basic operations in xarray: opening, inspecting, selecting and indexing data
  3. Selecting data with named dimensions
  4. Operations and computation
  5. Groupby and "split-apply-combine"
  6. Graphics
  7. Out-of-core computation

1. Key features of xarray

What is xarray?

  • xarray is an open source project and Python package
  • xarray has been designed to perform labelled data analysis on multi-dimensional arrays
  • the xarray approach adopts the Common Data Model for self-describing scientific data in widespread use in the Earth sciences
  • xarray.Dataset is an in-memory representation of a netCDF file.
  • xarray is built on top of the data-processing library pandas (the best way to work with tabular data, e.g. CSV files, in Python)

Our data

  • numeric
  • multi-dimensional
  • labelled
  • (lots of) metadata
  • sometimes (very) large

What is xarray good for?

  • Gridded, multi-dimensional and large datasets, commonly used in earth sciences, but also increasingly finance, engineering (signal/image processing), and biological sciences
  • Integration with other data analysis packages such as Pandas
  • I/O operations (NetCDF)
  • Plotting
  • Out of core computation and parallel processing
  • Extensions based on xarray
  • ...

Where can I find more info?

For more information about xarray:

For more on doing data analysis with Python:

Packages building on xarray for the geophysical sciences:

For analyzing GCM output:

Other tools:

Resources for teaching and learning xarray in geosciences:

2. Basic operations in xarray


Import python packages


In [ ]:
# standard imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import xarray as xr

import warnings

%matplotlib inline

np.set_printoptions(precision=3, linewidth=80, edgeitems=1)  # make numpy less verbose
xr.set_options(display_width=70)
warnings.simplefilter('ignore') # filter some warning messages

Basic data arrays in numpy


In [ ]:
import numpy as np
a = np.array([[1, 3, 9], [2, 8, 4]])
a

In [ ]:
a[1, 2]

In [ ]:
a.mean(axis=0)

numpy is a powerful but "low-level" array manipulation tool. Axes only have numbers, not names (it is easy to forget which axis is which, a common source of trivial bugs), arrays can't carry metadata (e.g. units), and the data is unstructured (i.e. the coordinates and/or other related arrays have to be handled separately: another source of bugs).

This is where xarray comes in!
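As a minimal sketch (with made-up station/time labels, not part of any real dataset): the same 2×3 array as above, but with named dimensions, coordinates, and metadata attached.

```python
import xarray as xr

# the same array as above, but with named dimensions,
# coordinates, and units attached
da = xr.DataArray([[1, 3, 9], [2, 8, 4]],
                  dims=['station', 'time'],
                  coords={'station': ['A', 'B'], 'time': [0, 1, 2]},
                  attrs={'units': 'degC'})
# reductions use dimension names instead of axis numbers
print(da.mean(dim='station').values)
# label-based lookup instead of positional indexing
print(float(da.sel(station='B', time=2)))  # 4.0
```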

Properties of xarray.Dataset and xarray.DataArray objects

We'll start with the "air_temperature" tutorial dataset, which ships with the xarray package. Other example datasets are listed in the xarray documentation.


In [ ]:
ds = xr.tutorial.load_dataset('air_temperature')

In [ ]:
ds

In [ ]:
ds.air

In [ ]:
ds.dims

In [ ]:
ds.attrs

In [ ]:
ds.air.values

In [ ]:
type(ds.air.values)

In [ ]:
ds.air.dims

In [ ]:
ds.air.attrs

In [ ]:
ds.air.attrs['tutorial-date'] = 27042017

In [ ]:
ds.air.attrs

Let's Do Some Math


In [ ]:
kelvin = ds.air.mean(dim='time')
kelvin.plot();

In [ ]:
centigrade = kelvin - 273.15
centigrade.plot();

Notice xarray has changed the colormap according to the dataset (borrowing logic from Seaborn).

  • With degrees C, the data passes through 0, so a diverging colormap is used
  • With Kelvin, the default colormap is used.

In [ ]:
# ufuncs work too
np.sin(centigrade).plot();

Adding Data to Datasets


In [ ]:
ds

Let's add those kelvin and centigrade DataArrays to the dataset.


In [ ]:
ds['centigrade'] = centigrade
ds['kelvin'] = kelvin
ds

In [ ]:
ds.kelvin.attrs  # attrs are empty! Let's add some

In [ ]:
ds.kelvin.attrs['Description'] = 'Mean air temperature (through time) in kelvin.'

In [ ]:
ds.kelvin

In [ ]:
ds.to_netcdf('new_file.nc')  # write the dataset, including the new variables, to disk

3. Selecting data with named dimensions

In xarray there are many different ways for selecting and indexing data.

Positional indexing (old way)

This is the "old way", i.e. like numpy:


In [ ]:
ds.air[:, 1, 2]  # note that the attributes, coordinates are preserved

In [ ]:
ds.air[:, 1, 2].plot();

This selection implies prior knowledge about the structure of the data, and is therefore much less readable than the "xarray methods" presented below.

Selection by index

Selection based on the index of a coordinate:


In [ ]:
ds.air.isel(time=0).plot();  # like above, but with a dimension name this time

Selection by value

Selection based on the value of a coordinate:


In [ ]:
ds.air.sel(lat=72.5, lon=205).plot();
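The difference between positional isel and label-based sel can be seen on a tiny hand-made array (a sketch, unrelated to the air temperature dataset):

```python
import xarray as xr

da = xr.DataArray([10., 20., 30.], dims='lat',
                  coords={'lat': [30., 50., 70.]})
# isel: integer position along the named dimension
print(float(da.isel(lat=0)))   # position 0 -> 10.0
# sel: lookup by coordinate label
print(float(da.sel(lat=70.)))  # label 70.0 -> 30.0
```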

Selection by value works well for time, too


In [ ]:
ds.air.sel(time='2013-01-02').plot(); # this selects all 4 time steps on that day; 3D data is plotted as a histogram

In [ ]:
ds.air.sel(time='2013-01-02T06:00').plot();  # or look at a single timestep

Selecting a range of values

The syntax is similar, but you'll need to use a slice. Note that the latitude coordinate of this dataset is stored in descending order, which is why the slice runs from 60 down to 50:


In [ ]:
ds.air.sel(lat=slice(60, 50), lon=slice(200, 270), time='2013-01-02T06:00:00').plot();

Nearest neighbor lookup


In [ ]:
ds.air.sel(lat=41.8781, lon=360-87.6298, method='nearest', tolerance=5).plot();

4. Operations and computation

  • We can do arithmetic directly on Dataset and DataArray objects.
  • Labels are preserved and DataArray dimensions are automatically aligned.

Broadcasting


In [ ]:
a = xr.DataArray(np.arange(3), dims='time', 
                 coords={'time':np.arange(3)})
b = xr.DataArray(np.arange(4), dims='space', 
                 coords={'space':np.arange(4)})
a + b
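A sketch of what happens above: because a and b have different dimension names, xarray broadcasts them against each other and the result carries both dimensions:

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(3), dims='time', coords={'time': np.arange(3)})
b = xr.DataArray(np.arange(4), dims='space', coords={'space': np.arange(4)})
c = a + b
# every (time, space) combination is computed
print(c.dims, c.shape)              # ('time', 'space') (3, 4)
print(int(c.sel(time=2, space=3)))  # 2 + 3 = 5
```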

Alignment


In [ ]:
atime = np.arange(3)
btime = np.arange(5) + 1
atime, btime

In [ ]:
a = xr.DataArray(np.arange(3), dims='time', 
                 coords={'time':atime})
b = xr.DataArray(np.arange(5), dims='time', 
                 coords={'time':btime})
a + b
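A sketch of the alignment rule: arithmetic aligns on the intersection of the coordinate labels, so with the atime/btime values above only times 1 and 2 survive:

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(3), dims='time', coords={'time': np.arange(3)})
b = xr.DataArray(np.arange(5), dims='time', coords={'time': np.arange(5) + 1})
s = a + b
print(s.time.values)  # [1 2] -- the shared labels
print(s.values)       # a[time=1]+b[time=1]=1+0, a[time=2]+b[time=2]=2+1
```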

Aggregation


In [ ]:
ds.max()

In [ ]:
ds.air.median(dim=['lat', 'lon']).plot();

Masking with .where()


In [ ]:
means = ds.air.mean(dim=['time'])
means.where(means > 273.15).plot();
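The masking behaviour of .where() can be sketched on a small synthetic array: values failing the condition become NaN, or a fill value if one is given as the second argument.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6).reshape(2, 3), dims=['x', 'y'])
masked = da.where(da > 2)     # 0, 1, 2 -> NaN
filled = da.where(da > 2, 0)  # 0, 1, 2 -> 0
print(masked.values)
print(filled.values)
```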

5. Groupby and "split-apply-combine"

Xarray implements the "split-apply-combine" paradigm with groupby. This works really well for calculating climatologies:


In [ ]:
ds.air.groupby('time.season').mean()


In [ ]:
ds.air.groupby('time.month').mean('time')

In [ ]:
clim = ds.air.groupby('time.month').mean('time')

You can also do arithmetic with groupby objects, which repeats the arithmetic over each group:


In [ ]:
anomalies = ds.air.groupby('time.month') - clim

In [ ]:
anomalies

In [ ]:
anomalies.plot();

In [ ]:
anomalies.sel(time='2013-02').plot();  # find all the anomalous values for February
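The climatology/anomaly pattern can be verified on a tiny synthetic monthly series (made-up values, for illustration only): by construction, the anomalies average to zero within each month group.

```python
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range('2000-01-01', periods=24, freq='MS')  # 2 years, monthly
da = xr.DataArray(np.arange(24.0), dims='time', coords={'time': times})
clim = da.groupby('time.month').mean('time')
anom = da.groupby('time.month') - clim   # groupby arithmetic, as above
resid = anom.groupby('time.month').mean('time')
print(float(abs(resid).max()))  # ~0.0: anomalies are centred per month
```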

Resample adjusts a time series to a new resolution:


In [ ]:
tmin = ds.air.resample(time='1D').min()  # resample to daily ('1D') resolution
tmax = ds.air.resample(time='1D').max()
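A self-contained resample sketch with made-up 6-hourly values, using the current xarray API (the frequency is passed as `time='1D'`, followed by a reduction):

```python
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range('2013-01-01', periods=8, freq='6h')  # two days, 6-hourly
da = xr.DataArray(np.arange(8.0), dims='time', coords={'time': times})
daily_min = da.resample(time='1D').min()
daily_max = da.resample(time='1D').max()
print(daily_min.values)  # [0. 4.]
print(daily_max.values)  # [3. 7.]
```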

In [ ]:
(tmin.sel(time='2013-02-15') - 273.15).plot();

In [ ]:
ds_extremes = xr.Dataset({'tmin': tmin, 'tmax': tmax})

In [ ]:
ds_extremes

6. Graphics

xarray plotting functions rely on matplotlib internally, but they make use of all available metadata to make the plotting operations more intuitive and interpretable.

1D plots


In [ ]:
zonal_t_average = ds.air.mean(dim=['lon', 'time']) - 273.15
zonal_t_average.plot();  # 1D arrays are plotted as line plots

2D plots


In [ ]:
t_average = ds.air.mean(dim='time') - 273.15
t_average.plot();  # 2D arrays are plotted with pcolormesh

In [ ]:
t_average.plot.contourf();  # but you can use contour(), contourf() or imshow() if you wish

Customizing 2d plots


In [ ]:
t_average.plot.contourf(cmap='BrBG_r', vmin=-15, vmax=15);

In [ ]:
t_average.plot.contourf(cmap='BrBG_r', levels=22, center=False);

Dealing with Outliers


In [ ]:
air_outliers = ds.air.isel(time=0).copy()
air_outliers[0, 0] = 100
air_outliers[-1, -1] = 400
air_outliers.plot();  # outliers mess with the datarange and colorscale!

In [ ]:
# Using `robust=True` uses the 2nd and 98th percentiles of the data to compute the color limits.
air_outliers.plot(robust=True);

Facet plots


In [ ]:
t_season = ds.air.groupby('time.season').mean(dim='time') - 273.15

In [ ]:
# facet plots produce multiple panels that share the same color mapping
t_season.plot.contourf(x='lon', y='lat', col='season', col_wrap=2, levels=22);

Plotting on maps

For plotting on maps, we rely on the excellent cartopy library.


In [ ]:
import cartopy.crs as ccrs

In [ ]:
f = plt.figure(figsize=(8, 4))
# Define the map projection *on which* you want to plot
ax = plt.axes(projection=ccrs.Orthographic(-80, 35))
# ax is an empty plot. We now plot the variable t_average onto ax
# the keyword "transform" tells the function in which projection the air temp data is stored 
t_average.plot(ax=ax, transform=ccrs.PlateCarree())
# Add gridlines and coastlines to the plot
ax.coastlines(); ax.gridlines();

Facet plots on maps


In [ ]:
# this time we need to retrieve the plots to do things with the axes later on
p = t_season.plot(x='lon', y='lat', col='season', transform=ccrs.PlateCarree(),
                  subplot_kws={'projection': ccrs.Orthographic(-80, 35)})
for ax in p.axes.flat:
    ax.coastlines()

Seaborn is Cool

Statistical visualization with Seaborn:


In [ ]:
import seaborn as sns

# sel_points was removed from xarray; vectorized selection with
# DataArray indexers sharing a new 'location' dimension replaces it
data = (ds_extremes
        .sel(lat=xr.DataArray([41.8781, 37.7749], dims='location',
                              coords={'location': ['Chicago', 'San Francisco']}),
             lon=xr.DataArray([360-87.6298, 360-122.4194], dims='location',
                              coords={'location': ['Chicago', 'San Francisco']}),
             method='nearest', tolerance=3)
        .to_dataframe()
        .reset_index()
        .assign(month=lambda x: x.time.dt.month))

plt.figure(figsize=(10, 5))
sns.violinplot(x='month', y='tmax', hue='location', data=data, split=True, inner=None);

7. Out-of-core computation

Here's a quick demo of how xarray can leverage dask to work with data that doesn't fit in memory. This lets xarray substitute for tools like cdo and nco.

Let's open 10 years of runoff data

xarray can open multiple files at once using string pattern matching.

In this case we open all the files that match our filestr, i.e. all the files for the 2080s.

Each of these files (compressed) is approximately 80 MB.

PS - these files weren't available during the tutorial. The data we used was daily discharge hydrological data from the ISIMIP project (e.g. HadGEM2-ES / PCRGLOBWB / RCP2p6), which we cannot share here but is available for download.


In [ ]:
from glob import glob
files = glob('data/*dis*.nc')
runoff = xr.open_mfdataset(files)

In [ ]:
runoff

xarray even puts them in the right order for you.


In [ ]:
runoff.time

How big is all this data uncompressed? Will it fit into memory?


In [ ]:
runoff.nbytes / 1e9  # convert to gigabytes

Working with Big Data

  • This data is too big for our memory.
  • That means we need to process it in chunks.
  • We can do this chunking in xarray very easily.

    xarray computes data 'lazily'. That means that data is only loaded into memory when it is actually required. This also allows us to inspect datasets without loading all the data into memory.

    To do this xarray integrates with dask to support streaming computation on datasets that don’t fit into memory.
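A minimal sketch of chunked, lazy computation on synthetic data (requires dask to be installed alongside xarray):

```python
import numpy as np
import xarray as xr

# wrap a small synthetic array in dask chunks along 'y'
da = xr.DataArray(np.arange(400.0).reshape(4, 100),
                  dims=['x', 'y']).chunk({'y': 25})
print(da.chunks)         # ((4,), (25, 25, 25, 25))
lazy_mean = da.mean()    # nothing computed yet: just a task graph
print(float(lazy_mean))  # float() triggers the actual computation
```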


In [ ]:
runoff = runoff.chunk({'lat': 60})

In [ ]:
runoff.chunks

In [ ]:
%time ro_seasonal = runoff.groupby('time.season').mean('time')

In [ ]:
import dask
from multiprocessing.pool import ThreadPool
dask.config.set(pool=ThreadPool(1))  # dask.set_options was replaced by dask.config.set

In [ ]:
%time ro_seasonal.compute()

In [ ]:
dask.config.set(pool=ThreadPool(4))

In [ ]:
%time ro_seasonal = runoff.groupby('time.season').mean('time')

In [ ]:
%time result = ro_seasonal.compute()

In [ ]:
brazil = dict(lat=slice(10.75, -40.75), lon=slice(-100.25, -25.25))
result.dis.sel(**brazil).plot(col='season', size=4, cmap='Spectral_r')

xarray can do more!

  • concatenation
  • opening network-located files with openDAP
  • import and export of Pandas DataFrames
  • dump to .nc files
  • groupby_bins
  • resampling and reduction

For more details, read this blog post: http://continuum.io/blog/xray-dask