IHE Python course, 2017

Time series manipulation

T.N.Olsthoorn, April 18, 2017

Most scientists and engineers, including hydrologists, physicists, electronic engineers, social scientists and economists, are often faced with time series that carry information to be extracted or used in predictions. Pandas has virtually all the tools required to handle time series, while keeping dates and data strictly connected. Once loaded into pandas, these time series form the basis of further analysis.

Loading into pandas can be done with pd.read_csv, pd.read_table and pd.read_excel, as we used before, as well as with numerous other functions available in pandas. Just use tab-completion to see all the possibilities.


In [1]:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

Show which reading functions pandas has on board.

We can use a list comprehension to select what we want:


In [2]:
[d for d in dir(pd) if d.startswith("read")]


Out[2]:
['read_clipboard',
 'read_csv',
 'read_excel',
 'read_fwf',
 'read_gbq',
 'read_hdf',
 'read_html',
 'read_json',
 'read_msgpack',
 'read_pickle',
 'read_sas',
 'read_sql',
 'read_sql_query',
 'read_sql_table',
 'read_stata',
 'read_table']

In [3]:
pd.read_table()


---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-3-2dc5a6a45969> in <module>()
----> 1 pd.read_table()

TypeError: parser_f() missing 1 required positional argument: 'filepath_or_buffer'


Hence there's a large number of possibilities.

Move to the directory with the examples. Then use pwd to check that you're there.

Notice that the first part of the pwd output will be different on your computer.


In [ ]:
cd python/IHEcourse2017/exercises/Apr18/

In [ ]:
pwd

See if we have the csv data file, which contains a long, multi-year groundwater head series from the south of the Netherlands (chosen more or less at random for its length).


In [ ]:
ls

It's not a bad habit to use os to verify that the file exists.


In [ ]:
import os
os.path.isfile("B50E0133001_1.csv")

Ok, now we will naively try to read it using pd.read_csv. This may or may not fail. If it fails, we sharpen the knife by adding one or more of the options provided by pd.read_csv.


In [4]:
pb = pd.read_csv("B50E0133001_1.csv")
pb.head()


---------------------------------------------------------------------------
CParserError                              Traceback (most recent call last)
<ipython-input-4-d556bc408458> in <module>()
----> 1 pb = pd.read_csv("B50E0133001_1.csv")
      2 pb.head()

/Users/Theo/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
    560                     skip_blank_lines=skip_blank_lines)
    561 
--> 562         return _read(filepath_or_buffer, kwds)
    563 
    564     parser_f.__name__ = name

/Users/Theo/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
    323         return parser
    324 
--> 325     return parser.read()
    326 
    327 _parser_defaults = {

/Users/Theo/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py in read(self, nrows)
    813                 raise ValueError('skip_footer not supported for iteration')
    814 
--> 815         ret = self._engine.read(nrows)
    816 
    817         if self.options.get('as_recarray'):

/Users/Theo/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py in read(self, nrows)
   1312     def read(self, nrows=None):
   1313         try:
-> 1314             data = self._reader.read(nrows)
   1315         except StopIteration:
   1316             if self._first_chunk:

pandas/parser.pyx in pandas.parser.TextReader.read (pandas/parser.c:8748)()

pandas/parser.pyx in pandas.parser.TextReader._read_low_memory (pandas/parser.c:9003)()

pandas/parser.pyx in pandas.parser.TextReader._read_rows (pandas/parser.c:9731)()

pandas/parser.pyx in pandas.parser.TextReader._tokenize_rows (pandas/parser.c:9602)()

pandas/parser.pyx in pandas.parser.raise_parser_error (pandas/parser.c:23325)()

CParserError: Error tokenizing data. C error: Expected 12 fields in line 12, saw 13

Obviously, the read_csv above failed. Upon inspection of the file in an editor, we see that the top is a mess. Not really a mess, but at least we want to skip this part and get to the actual time series data of interest further down in the file.

So let's skip a few rows (too few, as it turns out, but we can correct this step by step).


In [5]:
pb = pd.read_csv("B50E0133001_1.csv", skiprows=9)
pb.head()


Out[5]:
MP: Meetpunt Unnamed: 2 Unnamed: 3 Unnamed: 4 Unnamed: 5 Unnamed: 6 Unnamed: 7 Unnamed: 8 Unnamed: 9 Unnamed: 10 Unnamed: 11
Locatie Filternummer Externe aanduiding X-coordinaat Y-coordinaat Maaiveld (cm t.o.v. NAP) Datum maaiveld gemeten Startdatum Einddatum Meetpunt (cm t.o.v. NAP) Meetpunt (cm t.o.v. MV) Bovenkant filter (cm t.o.v. NAP) Onderkant filter (cm t.o.v. NAP)
B50E0133 001 50EP0133 129287 395441 1360 11-03-1955 11-03-1955 10-09-1969 1401 41 -240 -340
B50E0133 001 50EP0133 129287 395441 1360 10-09-1969 10-09-1969 15-02-2011 1401 41 -240 -340
B50E0133 001 50EP0133 129287 395441 1357 15-02-2011 15-02-2011 13-10-2016 1448 91 -240 -340
Locatie Filternummer Peildatum Stand (cm t.o.v. MP) Stand (cm t.o.v. MV) Stand (cm t.o.v. NAP) Bijzonderheid Opmerking NaN NaN NaN NaN NaN

Ok, we got the top table in the file. See which line pandas thought was the header.

Ok, skip a few more lines.


In [6]:
pb = pd.read_csv("B50E0133001_1.csv", skiprows=11)
pb.head()


Out[6]:
Locatie Filternummer Externe aanduiding X-coordinaat Y-coordinaat Maaiveld (cm t.o.v. NAP) Datum maaiveld gemeten Startdatum Einddatum Meetpunt (cm t.o.v. NAP) Meetpunt (cm t.o.v. MV) Bovenkant filter (cm t.o.v. NAP) Onderkant filter (cm t.o.v. NAP)
0 B50E0133 001 50EP0133 129287 395441 1360 11-03-1955 11-03-1955 10-09-1969 1401.0 41.0 -240.0 -340.0
1 B50E0133 001 50EP0133 129287 395441 1360 10-09-1969 10-09-1969 15-02-2011 1401.0 41.0 -240.0 -340.0
2 B50E0133 001 50EP0133 129287 395441 1357 15-02-2011 15-02-2011 13-10-2016 1448.0 91.0 -240.0 -340.0
3 Locatie Filternummer Peildatum Stand (cm t.o.v. MP) Stand (cm t.o.v. MV) Stand (cm t.o.v. NAP) Bijzonderheid Opmerking NaN NaN NaN NaN NaN
4 B50E0133 001 11-03-1955 675 634 726 NaN NaN NaN NaN NaN NaN NaN

Now we really got the first table in the file, but this is not the one we need. At row 3 we see the desired header line, so skip 4 more lines (the current header plus rows 0 through 2) to get there.


In [7]:
pb = pd.read_csv("B50E0133001_1.csv", skiprows=15)
pb.head()


Out[7]:
Locatie Filternummer Peildatum Stand (cm t.o.v. MP) Stand (cm t.o.v. MV) Stand (cm t.o.v. NAP) Bijzonderheid Opmerking Unnamed: 8 Unnamed: 9 Unnamed: 10 Unnamed: 11
0 B50E0133 1 11-03-1955 675.0 634.0 726.0 NaN NaN NaN NaN NaN NaN
1 B50E0133 1 23-03-1955 668.0 627.0 733.0 NaN NaN NaN NaN NaN NaN
2 B50E0133 1 08-04-1955 669.0 628.0 732.0 NaN NaN NaN NaN NaN NaN
3 B50E0133 1 22-04-1955 660.0 619.0 741.0 NaN NaN NaN NaN NaN NaN
4 B50E0133 1 06-05-1955 651.0 610.0 750.0 NaN NaN NaN NaN NaN NaN

This is fine. At least a good start. But we want "Peildatum" as our index. So:


In [8]:
pb = pd.read_csv("B50E0133001_1.csv", skiprows=15, index_col="Peildatum")
pb.head()


Out[8]:
Locatie Filternummer Stand (cm t.o.v. MP) Stand (cm t.o.v. MV) Stand (cm t.o.v. NAP) Bijzonderheid Opmerking Unnamed: 8 Unnamed: 9 Unnamed: 10 Unnamed: 11
Peildatum
11-03-1955 B50E0133 1 675.0 634.0 726.0 NaN NaN NaN NaN NaN NaN
23-03-1955 B50E0133 1 668.0 627.0 733.0 NaN NaN NaN NaN NaN NaN
08-04-1955 B50E0133 1 669.0 628.0 732.0 NaN NaN NaN NaN NaN NaN
22-04-1955 B50E0133 1 660.0 619.0 741.0 NaN NaN NaN NaN NaN NaN
06-05-1955 B50E0133 1 651.0 610.0 750.0 NaN NaN NaN NaN NaN NaN

Better, but the index still consists of strings and not of dates. Therefore, tell read_csv to parse the dates:


In [9]:
pb = pd.read_csv("B50E0133001_1.csv", skiprows=15, index_col="Peildatum", parse_dates=True)
pb.head()


Out[9]:
Locatie Filternummer Stand (cm t.o.v. MP) Stand (cm t.o.v. MV) Stand (cm t.o.v. NAP) Bijzonderheid Opmerking Unnamed: 8 Unnamed: 9 Unnamed: 10 Unnamed: 11
Peildatum
1955-11-03 B50E0133 1 675.0 634.0 726.0 NaN NaN NaN NaN NaN NaN
1955-03-23 B50E0133 1 668.0 627.0 733.0 NaN NaN NaN NaN NaN NaN
1955-08-04 B50E0133 1 669.0 628.0 732.0 NaN NaN NaN NaN NaN NaN
1955-04-22 B50E0133 1 660.0 619.0 741.0 NaN NaN NaN NaN NaN NaN
1955-06-05 B50E0133 1 651.0 610.0 750.0 NaN NaN NaN NaN NaN NaN

The problem is that some dates will be messed up, as pandas by default interprets dates as mm-dd-yyyy, while we have dd-mm-yyyy. For some dates this does not matter, but for others it is ambiguous unless it is specified that the dates start with the day instead of the month.


In [10]:
pb = pd.read_csv("B50E0133001_1.csv", skiprows=15, index_col="Peildatum", parse_dates=True, dayfirst=True)
pb.head()


Out[10]:
Locatie Filternummer Stand (cm t.o.v. MP) Stand (cm t.o.v. MV) Stand (cm t.o.v. NAP) Bijzonderheid Opmerking Unnamed: 8 Unnamed: 9 Unnamed: 10 Unnamed: 11
Peildatum
1955-03-11 B50E0133 1 675.0 634.0 726.0 NaN NaN NaN NaN NaN NaN
1955-03-23 B50E0133 1 668.0 627.0 733.0 NaN NaN NaN NaN NaN NaN
1955-04-08 B50E0133 1 669.0 628.0 732.0 NaN NaN NaN NaN NaN NaN
1955-04-22 B50E0133 1 660.0 619.0 741.0 NaN NaN NaN NaN NaN NaN
1955-05-06 B50E0133 1 651.0 610.0 750.0 NaN NaN NaN NaN NaN NaN

In [11]:
pb.head()


Out[11]:
Locatie Filternummer Stand (cm t.o.v. MP) Stand (cm t.o.v. MV) Stand (cm t.o.v. NAP) Bijzonderheid Opmerking Unnamed: 8 Unnamed: 9 Unnamed: 10 Unnamed: 11
Peildatum
1955-03-11 B50E0133 1 675.0 634.0 726.0 NaN NaN NaN NaN NaN NaN
1955-03-23 B50E0133 1 668.0 627.0 733.0 NaN NaN NaN NaN NaN NaN
1955-04-08 B50E0133 1 669.0 628.0 732.0 NaN NaN NaN NaN NaN NaN
1955-04-22 B50E0133 1 660.0 619.0 741.0 NaN NaN NaN NaN NaN NaN
1955-05-06 B50E0133 1 651.0 610.0 750.0 NaN NaN NaN NaN NaN NaN
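
As an aside, instead of relying on dayfirst, the dates could also be parsed explicitly with a known format. A minimal sketch (the name pb_alt and the format string "%d-%m-%Y" are assumptions based on the file contents shown above):

pb_alt = pd.read_csv("B50E0133001_1.csv", skiprows=15)
# parse the date column explicitly; the format is assumed from the file layout
pb_alt.index = pd.to_datetime(pb_alt["Peildatum"], format="%d-%m-%Y")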

So far so good. Now do some clean-up, as we only need the 6th column, the head above national datum (NAP). We can tell read_csv which columns to use by specifying a list of column headers. First trial:


In [12]:
pb = pd.read_csv("B50E0133001_1.csv", skiprows=15,
                 index_col="Peildatum", parse_dates=True, dayfirst=True,
                usecols=["Stand (cm t.o.v. NAP)"])
pb.head()


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-12-0a65cbcf2cb1> in <module>()
      1 pb = pd.read_csv("B50E0133001_1.csv", skiprows=15,
      2                  index_col="Peildatum", parse_dates=True, dayfirst=True,
----> 3                 usecols=["Stand (cm t.o.v. NAP)"])
      4 pb.head()

/Users/Theo/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
    560                     skip_blank_lines=skip_blank_lines)
    561 
--> 562         return _read(filepath_or_buffer, kwds)
    563 
    564     parser_f.__name__ = name

/Users/Theo/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
    323         return parser
    324 
--> 325     return parser.read()
    326 
    327 _parser_defaults = {

/Users/Theo/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py in read(self, nrows)
    813                 raise ValueError('skip_footer not supported for iteration')
    814 
--> 815         ret = self._engine.read(nrows)
    816 
    817         if self.options.get('as_recarray'):

/Users/Theo/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py in read(self, nrows)
   1385 
   1386             names, data = self._do_date_conversions(names, data)
-> 1387             index, names = self._make_index(data, alldata, names)
   1388 
   1389         # maybe create a mi on the columns

/Users/Theo/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py in _make_index(self, data, alldata, columns, indexnamerow)
   1027 
   1028         elif not self._has_complex_date_col:
-> 1029             index = self._get_simple_index(alldata, columns)
   1030             index = self._agg_index(index)
   1031 

/Users/Theo/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py in _get_simple_index(self, data, columns)
   1061         index = []
   1062         for idx in self.index_col:
-> 1063             i = ix(idx)
   1064             to_remove.append(i)
   1065             index.append(data[i])

/Users/Theo/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py in ix(col)
   1055             if not isinstance(col, compat.string_types):
   1056                 return col
-> 1057             raise ValueError('Index %s invalid' % col)
   1058         index = None
   1059 

ValueError: Index Peildatum invalid

This failed, because we now have to specify all columns we want to use. This should include the column "Peildatum". So add it to the list.


In [13]:
pb = pd.read_csv("B50E0133001_1.csv", skiprows=15,
                 index_col="Peildatum", parse_dates=True, dayfirst=True,
                usecols=["Peildatum", "Stand (cm t.o.v. NAP)"])
pb.head()


Out[13]:
Stand (cm t.o.v. NAP)
Peildatum
1955-03-11 726.0
1955-03-23 733.0
1955-04-08 732.0
1955-04-22 741.0
1955-05-06 750.0

This is fine. We now have a one-column DataFrame with the proper index.

For English-speaking readers, change the column header for better readability.


In [14]:
pb.columns = ["NAP"]
pb.head()


Out[14]:
NAP
Peildatum
1955-03-11 726.0
1955-03-23 733.0
1955-04-08 732.0
1955-04-22 741.0
1955-05-06 750.0

Check that pb is still a DataFrame; only when we select one column from a DataFrame does it become a Series.


In [15]:
print(type(pb))
print(type(pb['NAP']))


<class 'pandas.core.frame.DataFrame'>
<class 'pandas.core.series.Series'>

So select this column to get a time series.


In [16]:
pb = pb['NAP']
print(type(pb))


<class 'pandas.core.series.Series'>

DataFrames and Series can be plotted immediately. Of course, you may also put labels on the axes and a title above the plot, but out of laziness I leave this out for this exercise.


In [17]:
pb.plot()
plt.show() # default color is blue, and default plot is line.
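
For completeness, a minimal sketch of how axis labels and a title could be added (the label texts are merely illustrative):

pb.plot()
plt.xlabel("date")
plt.ylabel("head (cm above NAP)")
plt.title("Groundwater head B50E0133, filter 001")
plt.show()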


The next problem is to get the mean of the highest three measurements within each hydrological year, which starts on April 1 and ends on March 31.

This requires resampling the data per hydrological year, which can be done with aliases passed as the rule of the resample function of pandas Series and DataFrames.

Here are the options:

Offset aliases (previously called time rules) that can be used for resampling a time series or a DataFrame:

http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases

B     business day frequency
C     custom business day frequency (experimental)
D     calendar day frequency
W     weekly frequency
M     month end frequency
SM    semi-month end frequency (15th and end of month)
BM    business month end frequency
CBM   custom business month end frequency
MS    month start frequency
SMS   semi-month start frequency (1st and 15th)
BMS   business month start frequency
CBMS  custom business month start frequency
Q     quarter end frequency
BQ    business quarter end frequency
QS    quarter start frequency
BQS   business quarter start frequency
A     year end frequency
BA    business year end frequency
AS    year start frequency
BAS   business year start frequency
BH    business hour frequency
H     hourly frequency
T     minutely frequency
S     secondly frequency
L     milliseconds
U     microseconds
N     nanoseconds

But to resample at some arbitrary interval we need anchored offsets as the resample rule. Here are the options.

Anchored offsets

http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases

For some frequencies you can specify an anchoring suffix:

Alias        Description
W-SUN        weekly frequency (Sundays). Same as ‘W’
W-MON        weekly frequency (Mondays)
W-TUE        weekly frequency (Tuesdays)
W-WED        weekly frequency (Wednesdays)
W-THU        weekly frequency (Thursdays)
W-FRI        weekly frequency (Fridays)
W-SAT        weekly frequency (Saturdays)
(B)Q(S)-DEC  quarterly frequency, year ends in December. Same as ‘Q’
(B)Q(S)-JAN  quarterly frequency, year ends in January
(B)Q(S)-FEB  quarterly frequency, year ends in February
(B)Q(S)-MAR  quarterly frequency, year ends in March
(B)Q(S)-APR  quarterly frequency, year ends in April
(B)Q(S)-MAY  quarterly frequency, year ends in May
(B)Q(S)-JUN  quarterly frequency, year ends in June
(B)Q(S)-JUL  quarterly frequency, year ends in July
(B)Q(S)-AUG  quarterly frequency, year ends in August
(B)Q(S)-SEP  quarterly frequency, year ends in September
(B)Q(S)-OCT  quarterly frequency, year ends in October
(B)Q(S)-NOV  quarterly frequency, year ends in November
(B)A(S)-DEC  annual frequency, anchored end of December. Same as ‘A’
(B)A(S)-JAN  annual frequency, anchored end of January
(B)A(S)-FEB  annual frequency, anchored end of February
(B)A(S)-MAR  annual frequency, anchored end of March
(B)A(S)-APR  annual frequency, anchored end of April
(B)A(S)-MAY  annual frequency, anchored end of May
(B)A(S)-JUN  annual frequency, anchored end of June
(B)A(S)-JUL  annual frequency, anchored end of July
(B)A(S)-AUG  annual frequency, anchored end of August
(B)A(S)-SEP  annual frequency, anchored end of September
(B)A(S)-OCT  annual frequency, anchored end of October
(B)A(S)-NOV  annual frequency, anchored end of November

To see this at work, resample the time series by hydrological year and compute the mean head in every hydrological year. This can be done as follows:


In [18]:
pb.resample("AS").mean().head()


Out[18]:
Peildatum
1955-01-01    684.176471
1956-01-01    711.090909
1957-01-01    707.454545
1958-01-01    767.250000
1959-01-01    710.500000
Freq: AS-JAN, Name: NAP, dtype: float64

In [19]:
pb.resample("AS-APR").mean().head()


Out[19]:
Peildatum
1954-04-01    729.500000
1955-04-01    685.411765
1956-04-01    706.909091
1957-04-01    722.666667
1958-04-01    766.750000
Freq: AS-APR, Name: NAP, dtype: float64
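
The mean of the three highest measurements in each hydrological year, the target stated above, can be computed along the same lines. A minimal sketch using apply and nlargest (the name high3 is just illustrative):

# mean of the three highest heads in each hydrological year (Apr 1 - Mar 31)
high3 = pb.resample("AS-APR").apply(lambda s: s.nlargest(3).mean())
high3.head()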

This uses groupby functionality, which we'll inspect next.

In fact, pb.resample(...) yields a DatetimeIndexResampler.


In [43]:
Z = pb.resample("AS-APR")
type(Z)


Out[43]:
pandas.tseries.resample.DatetimeIndexResampler

This resampler has its own functionality that can be used, shown here:


In [21]:
[z for z in dir(Z) if not z.startswith("_")]


Out[21]:
['agg',
 'aggregate',
 'apply',
 'asfreq',
 'ax',
 'backfill',
 'bfill',
 'count',
 'ffill',
 'fillna',
 'first',
 'get_group',
 'groups',
 'indices',
 'interpolate',
 'last',
 'max',
 'mean',
 'median',
 'min',
 'name',
 'ndim',
 'ngroups',
 'nunique',
 'obj',
 'ohlc',
 'pad',
 'plot',
 'prod',
 'sem',
 'size',
 'std',
 'sum',
 'transform',
 'var']

It's now easy to plot the resampled data using several of these functions, like so.

Notice that Z.mean() is a pandas Series, so that Z.mean().plot() is the plot method of that Series.


In [41]:
Z.max().plot(label="max")
Z.mean().plot(label="mean")
Z.min().plot(label="min")
plt.title("The max, mean and min of the head in each hydrological year")
plt.legend(loc='best')
plt.show()



In [45]:
Z.max()
for z in Z:
    print(z)


(Timestamp('1954-04-01 00:00:00', offset='AS-APR'), Peildatum
1955-03-11    726.0
1955-03-23    733.0
Name: NAP, dtype: float64)
(Timestamp('1955-04-01 00:00:00', offset='AS-APR'), Peildatum
1955-04-08    732.0
1955-04-22    741.0
1955-05-06    750.0
1955-05-20    751.0
1955-06-03    744.0
1955-06-17    741.0
1955-07-01    662.0
1955-07-15    634.0
1955-07-29    612.0
1955-08-12    691.0
1955-08-26    607.0
1955-09-06    609.0
1955-10-04    634.0
1955-11-12    611.0
1955-12-11    653.0
1956-01-15    706.0
1956-03-18    774.0
Name: NAP, dtype: float64)
(... one (Timestamp, Series) tuple per hydrological year omitted here ...)

(Timestamp('2016-04-01 00:00:00', offset='AS-APR'), Peildatum
2016-04-01    1092.0
2016-04-02    1093.0
               ...  
2016-10-12    1028.0
2016-10-13    1029.0
Name: NAP, dtype: float64)

Interesting is the agg function (agg is short for aggregate). Here is its documentation:


In [128]:
print(Z.agg.__doc__)


        Apply aggregation function or functions to resampled groups, yielding
        most likely Series but in some cases DataFrame depending on the output
        of the aggregation function

        Parameters
        ----------
        func_or_funcs : function or list / dict of functions
            List/dict of functions will produce DataFrame with column names
            determined by the function names themselves (list) or the keys in
            the dict

        Notes
        -----
        agg is an alias for aggregate. Use it.

        Examples
        --------
        >>> s = Series([1,2,3,4,5],
                        index=pd.date_range('20130101',
                                            periods=5,freq='s'))
        2013-01-01 00:00:00    1
        2013-01-01 00:00:01    2
        2013-01-01 00:00:02    3
        2013-01-01 00:00:03    4
        2013-01-01 00:00:04    5
        Freq: S, dtype: int64

        >>> r = s.resample('2s')
        DatetimeIndexResampler [freq=<2 * Seconds>, axis=0, closed=left,
                                label=left, convention=start, base=0]

        >>> r.agg(np.sum)
        2013-01-01 00:00:00    3
        2013-01-01 00:00:02    7
        2013-01-01 00:00:04    5
        Freq: 2S, dtype: int64

        >>> r.agg(['sum','mean','max'])
                             sum  mean  max
        2013-01-01 00:00:00    3   1.5    2
        2013-01-01 00:00:02    7   3.5    4
        2013-01-01 00:00:04    5   5.0    5

        >>> r.agg({'result' : lambda x: x.mean() / x.std(),
                   'total' : np.sum})
                             total    result
        2013-01-01 00:00:00      3  2.121320
        2013-01-01 00:00:02      7  4.949747
        2013-01-01 00:00:04      5       NaN

        See also
        --------
        transform

        Returns
        -------
        Series or DataFrame
        

So we can use any function and apply it to the data grouped by the resampler. These data form the sub time series of values that fall in the interval between the previous resample moment and the current one. Let's try this out to get the three highest values of any hydrological year. For this we define our own function, called highest3.

It works by taking z, which should be a time series: one of the hydrological years of your long-year time series. We use argsort to get the indices of the ordered values (we could also use the sorted values directly, but it's good to know argsort exists). The series is sorted from low to high, so we take the last 3 values, i.e. the highest 3. Notice that this also works in Python if the total number of values is less than three, so we don't need to check for that. Then we return the mean of these highest three values. That's all.
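
As a quick illustration (on a small made-up array, not the groundwater data), this is how np.argsort delivers the three highest values:


In [ ]:
a = np.array([3., 7., 1., 9., 5.])
I = np.argsort(a)        # positions that sort a from low to high: [2 0 4 1 3]
print(I)
print(a[I[-3:]])         # the three highest values: [ 5.  7.  9.]
print(a[I[-3:]].mean())  # their mean: 7.0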


In [129]:
def highest3(z):
    """returns mean of highest 3 values using np.argsort"""
    I = np.argsort(z)[-3:]
    return z[I].mean()

def highest3a(z):
    """returns mean of highest 3 values using np.sort"""
    z = np.sort(z)
    return z[-3:].mean()

In [130]:
# Apply
print("Using np.argsort")
highest = pb.resample("AS-APR").agg(highest3)
highest.columns = ["mean_highest_value"]
print(highest.head())

print("\nUsing np.sort")
highesta = pb.resample("AS-APR").agg(highest3a)
highesta.columns = ["mean_highest_value"]
print(highesta.head())


Using np.argsort
            mean_highest_value
Peildatum                     
1954-04-01          729.500000
1955-04-01          758.333333
1956-04-01          724.000000
1957-04-01          776.000000
1958-04-01          805.000000

Using np.sort
            mean_highest_value
Peildatum                     
1954-04-01          729.500000
1955-04-01          758.333333
1956-04-01          724.000000
1957-04-01          776.000000
1958-04-01          805.000000

This, of course, solves the problem. It means we could just as well compute the lowest 3 at the same time.

And why not keep both the lowest and the highest 3 at once?


In [131]:
def h_and_l_3(z):
    z = np.sort(z)
    # rounding off for a nicer list, but is not necessary
    return (np.round(z[ :3].mean()),
            np.round(z[-3:].mean()))

# Apply
h_and_l = pb.resample("AS-APR").agg(h_and_l_3)
h_and_l.columns = ["mean_lowest_and_highest_values"]
h_and_l.head()


Out[131]:
           mean_lowest_and_highest_values
Peildatum                                
1954-04-01                 (730.0, 730.0)
1955-04-01                 (609.0, 758.0)
1956-04-01                 (676.0, 724.0)
1957-04-01                 (681.0, 776.0)
1958-04-01                 (712.0, 805.0)

The above functions all reduce, that is, they aggregate the data held by the resampler for each sampling interval to a single value (or a tuple).


In [143]:
def h3(z):
    """Returns a tuple of the three highest value within sampling interval"""
    return (z[np.argsort(z)[-3:]],)

Z.agg(h3).head()


Out[143]:
               Stand (cm t.o.v. NAP)
Peildatum                           
1954-04-01         ([726.0, 733.0],)
1955-04-01  ([750.0, 751.0, 774.0],)
1956-04-01  ([723.0, 723.0, 726.0],)
1957-04-01  ([741.0, 771.0, 816.0],)
1958-04-01  ([797.0, 798.0, 820.0],)

This does indeed give a tuple of the three highest values within each sampling interval, but we can't plot these values easily on the graph of the time series.
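
One way around that, sketched here under the assumption that pb still holds the single head column, is to keep the three highest points of each hydrological year as a small sub series (using nlargest) and concatenate them. The result keeps the original timestamps, so it can be plotted straight onto the graph of the full series:


In [ ]:
s = pb.iloc[:, 0]                   # the head series itself
top3 = pd.concat([grp.nlargest(3)   # three highest values per interval
                  for _, grp in s.resample("AS-APR")])
s.plot()                            # the full series as a line
top3.plot(marker='o', linestyle='none')  # the three highest per year as markers only
plt.show()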

Another piece of resampler functionality is its indices attribute, i.e. Z.indices. It yields a dictionary with the indices into the overall time series that belong to each resample timestamp. With it we can readily find the values that belong to each hydrological year.


In [159]:
Z.apply(h3).head()


Out[159]:
               Stand (cm t.o.v. NAP)
Peildatum                           
1954-04-01         ([726.0, 733.0],)
1955-04-01  ([750.0, 751.0, 774.0],)
1956-04-01  ([723.0, 723.0, 726.0],)
1957-04-01  ([741.0, 771.0, 816.0],)
1958-04-01  ([797.0, 798.0, 820.0],)

So apply() works the same as agg(), at least here.
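
As a quick check of that claim, here is a sketch comparing the two on a simple reducing function; both lines should print the same annual maxima:


In [ ]:
print(Z.agg(np.max).head())
print(Z.apply(np.max).head())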

If we want to plot the three highest points in each hydrological year, we could make a list of the sub time series that consist of the three highest points, with their timestamps as index. Then each item in this list is a series consisting of three values, which we may plot one after the other.


In [178]:
dd = list()
def h33(z):
    """Returns a tuple of the three highest value within sampling interval"""
    #print(type(z))
    dd.append(z[z.argsort()[-3:]])
    return

# the time series are put in the list dd
Z.apply(h33).head()
#Z.agg(h33).head()  # alternative works just as well

# for instance show dd[3]
print(type(dd[3]))
dd[3]


<class 'pandas.core.series.Series'>
Out[178]:
1957-04-14    741.0
1958-02-16    771.0
1958-03-16    816.0
Name: Stand (cm t.o.v. NAP), dtype: float64

The next step is to plot them. But we first plot the entire data set as a line. Then we plot each sub time series as small circles. Adjacent hydrological years then come out in different colors.


In [177]:
pb.plot()   # plot all data as a line
for d in dd:
    #plot sub time series of the three highest points
    d.plot(marker='o')
plt.show()


If we want to color the data of the same hydrological year in the same color, then we also make a list of all data in each sampling interval, next to the list of the three highest values. Each item in dd then holds the complete time series of the interval, and each item in dd3 a time series of the three highest values alone.

The append within the function is a way of using a side effect to get things done. It's a bit sneaky and not very elegant, but it works:


In [323]:
dd


Out[323]:
[1955-03-11    726.0
 1955-03-23    733.0
 Name: Stand (cm t.o.v. NAP), dtype: float64, Peildatum
 1955-03-11    726.0
 1955-03-23    733.0
 Name: 1954-04-01 00:00:00, dtype: float64,             Stand (cm t.o.v. NAP)
 Peildatum                        
 1955-03-11                  726.0
 1955-03-23                  733.0, 1955-03-11    726.0
 1955-03-23    733.0
 Name: Stand (cm t.o.v. NAP), dtype: float64, Peildatum
 1955-03-11    726.0
 1955-03-23    733.0
 Name: 1954-04-01 00:00:00, dtype: float64,             Stand (cm t.o.v. NAP)
 Peildatum                        
 1955-03-11                  726.0
 1955-03-23                  733.0,             Stand (cm t.o.v. NAP)
 Peildatum                        
 1955-03-11                  726.0
 1955-03-23                  733.0]

In [333]:
dd  = list() # the entire time series in each sampling interval.
dd3 = list() # only the three highest values in each sampling interval.
def h33(z):
    """Returns a tuple of the three highest value within sampling interval
    
    Notice that this function just used append() to generate a list as a side-effect.
    It effectively consists of two lines and returns nothing.
    """
    
    # z is what the sampler Z yields while resampling the original time series
    # It isthe sub-time series that falls in the running interval.
    # With tis append we get a list of the sub time series.
    dd.append(z[:])  # z[:] forces a copy
    
    # you can do an argsort on z. This yields a time series with the same index
    # but with as values the index in the original series. You can see it if
    # you print it here or make a list of these index-time series.
    dd3.append(z[z.argsort()[-3:]])
    return

# Here we apply the function by calling the method .agg() of the sampler Z.
# The method receives the just created function as input. It applies this function
# on every iteration, that is on every sub-time series.
# Each time the function h33 is called it appends to the  lists dd and ddr.
# The sampler Z method agg calls the funcion h33 for every sample interval.
# You may be tempted to insert a print statement in the function to see that
# this is what actually happens.
Z.apply(h33)

# Then plot the sub-time series in the lists in dd and dd3.
# We make sure to use the same color for all points in the same
# hydrological year in both dd and dd3.
# The subseries in dd are plotted as a line, those in dd3 as small circles.

clr = 'brgkmcy'; i=0 # colors to use
for d3, d  in zip(dd3, dd):
    d.plot(marker='.', color=clr[i]) # all data in hydrological year
    d3.plot(marker='o', color=clr[i]) # highest three
    i += 1
    if i==len(clr): i=0 # set i to 0 when colors are exhausted.
plt.title("measurements per hydr. yr with the 3 highest accentuated")
plt.xlabel('time')
plt.ylabel('cm above national datum NAP')
plt.show()


Show that argsort works to get the indices that sort the time series.


In [338]:
print("The sub time series for 1964\n")
print(dd[10])
print("\nThe indices that sort this sub series. It is itself a time series")
dd[10].argsort()


The sub time series for 1964

1964-04-12    690.0
1964-05-10    690.0
1964-06-14    679.0
1964-07-14    696.0
1964-08-16    737.0
1964-09-13    706.0
1965-03-17    764.0
Name: Stand (cm t.o.v. NAP), dtype: float64

The indices that sort this sub series. It is itself a time series
Out[338]:
1964-04-12    2
1964-05-10    0
1964-06-14    1
1964-07-14    3
1964-08-16    5
1964-09-13    4
1965-03-17    6
Name: Stand (cm t.o.v. NAP), dtype: int64

Instead of sneakily appending to the lists dd and dd3 behind the scenes (hidden inside the function, that is, as a side effect), we can also achieve the same thing head-on. This can be done using the indices of each sub time series, which is also a functionality of the resampler.


In [310]:
# Not strictly needed, but refresh our resampler just to be sure
Z = pb.resample("AS-APR")

The resampler object Z also has an attribute indices, which yields a dictionary with the indices of the values that fall in each sampling interval. These are absolute indices, i.e. they point into the full, original time series.

Let's see how this works.

First generate the dictionary.


In [186]:
Idict = Z.indices
type(Idict)


Out[186]:
collections.defaultdict

A dict has keys. Before looping over them, first check that pb.ix indexes the original series by position; then show one item of the dict like so:


In [194]:
pb.ix[3]


Out[194]:
Stand (cm t.o.v. NAP)    741.0
Name: 1955-04-22 00:00:00, dtype: float64

In [196]:
# Show the indices for one of the keys of the Idict
for k in Idict.keys():
    print(k)  # the key
    print()
    print(Idict[k]) # the indices
    print()
    print(pb.ix[Idict[k]]) # the values belonging to these indices
    break


1983-04-01 00:00:00

[585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611]

            Stand (cm t.o.v. NAP)
Peildatum                        
1983-04-06                 1095.0
1983-04-20                 1098.0
1983-05-04                 1105.0
1983-05-18                 1116.0
1983-06-01                 1119.0
1983-06-15                 1106.0
1983-06-29                 1097.0
1983-07-13                 1083.0
1983-07-27                 1073.0
1983-08-02                 1067.0
1983-08-10                 1067.0
1983-08-24                 1058.0
1983-09-07                 1055.0
1983-09-21                 1049.0
1983-10-05                 1045.0
1983-10-19                  941.0
1983-11-02                 1039.0
1983-11-11                 1038.0
1983-11-30                 1146.0
1983-12-14                 1039.0
1983-12-28                 1041.0
1984-01-11                 1048.0
1984-01-25                 1058.0
1984-02-08                 1081.0
1984-02-22                 1097.0
1984-03-07                 1089.0
1984-03-21                 1081.0

This implies that we can now plot each sub time series like so:

To plot them together with the boundaries of each hydrological year, we first plot the data as a colored line within each hydrological year. Then we plot the vertical lines that separate the hydrological years. These lines are colored light grey using color=[R, G, B] with R, G and B all 0.8. ax.get_ylim() gives the extremes of the vertical axis, which are then used to draw the vertical lines.


In [258]:
I


Out[258]:
array([585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597,
       598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611])

In [309]:
# Plot each hydrological year in its own color and accentuate its three highest values
fig, ax = plt.subplots()
clr = "brgkmcy"; i=0
Idict = Z.indices
for k in Idict.keys():
    I = Idict[k] # The indices belonging to this key k
    ax.plot(pb.ix[I].index, pb.ix[I].values, color=clr[i])
    
    # pb.ix[I].values has shape (n, 1); flatten it to a 1D array before sorting,
    # so that the last three positions point at the three highest values
    J = np.argsort(pb.ix[I].values.ravel())[-3:]
    
    # Need a comprehension to get the indexes because
    # indexing like I[J] is not allowed for lists
    Idx = [I[j] for j in J]
    
    ax.plot(pb.index[Idx], pb.values[Idx], color=clr[i], marker='o')
    i += 1;
    if i==len(clr): i=0

# plot the hydrological year boundaries as vertical grey lines
ylim = ax.get_ylim()
for k in Idict.keys():
    i = Idict[k][-1]
    ax.plot(pb.index[[i, i]], ylim, color=[0.8, 0.8, 0.8])

plt.show()
#pb.ix[I].plot(ax=ax) # the values belonging to these indices (can't omit the legend)


That's it.
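
For reference, the mean of the three highest values per hydrological year can also be obtained more compactly with nlargest; a sketch, again assuming pb still holds the single head column:


In [ ]:
compact = pb.iloc[:, 0].resample("AS-APR").apply(lambda z: z.nlargest(3).mean())
print(compact.head())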

Here's a reference to a nice so-called cheat sheet, in which someone at the University of Idaho has figured out how Pandas works and put this in a concise overview.

http://www.webpages.uidaho.edu/~stevel/504/Pandas%20DataFrame%20Notes.pdf


In [ ]: