In [1]:
%load_ext rpy2.ipython
%matplotlib inline
from fbprophet import Prophet
import pandas as pd
from matplotlib import pyplot as plt
import logging
logging.getLogger('fbprophet').setLevel(logging.ERROR)
import warnings
warnings.filterwarnings("ignore")
df = pd.read_csv('../examples/example_wp_log_peyton_manning.csv')
m = Prophet()
m.fit(df)
future = m.make_future_dataframe(periods=366)

In [2]:
%%R
library(prophet)
df <- read.csv('../examples/example_wp_log_peyton_manning.csv')
m <- prophet(df)
future <- make_future_dataframe(m, periods=366)



Prophet includes functionality for time series cross validation to measure forecast error using historical data. This is done by selecting cutoff points in the history, and for each of them, fitting the model using data only up to that cutoff point. We can then compare the forecasted values to the actual values. This figure illustrates a simulated historical forecast on the Peyton Manning dataset, where the model was fit to an initial history of 5 years, and a forecast was made on a one-year horizon.


In [3]:
from fbprophet.diagnostics import cross_validation
# Simulate a single historical forecast: fit on the first 5 years of history,
# then forecast one year past the cutoff.
df_cv = cross_validation(
    m, horizon='365 days', initial='1825 days', period='365 days')
cutoff = df_cv['cutoff'].unique()[0]
df_cv = df_cv[df_cv['cutoff'].values == cutoff]

fig = plt.figure(facecolor='w', figsize=(10, 6))
ax = fig.add_subplot(111)
# Observed history (black dots) and the simulated forecast with its
# uncertainty interval (blue line and shaded band).
ax.plot(m.history['ds'].values, m.history['y'], 'k.')
ax.plot(df_cv['ds'].values, df_cv['yhat'], ls='-', c='#0072B2')
ax.fill_between(df_cv['ds'].values, df_cv['yhat_lower'],
                df_cv['yhat_upper'], color='#0072B2',
                alpha=0.2)
# Vertical lines mark the cutoff date and the end of the forecast horizon.
ax.axvline(x=pd.to_datetime(cutoff), c='gray', lw=4, alpha=0.5)
ax.set_ylabel('y')
ax.set_xlabel('ds')
ax.text(x=pd.to_datetime('2010-01-01'), y=12, s='Initial', color='black',
        fontsize=16, fontweight='bold', alpha=0.8)
ax.text(x=pd.to_datetime('2012-08-01'), y=12, s='Cutoff', color='black',
        fontsize=16, fontweight='bold', alpha=0.8)
ax.axvline(x=pd.to_datetime(cutoff) + pd.Timedelta('365 days'), c='gray', lw=4,
           alpha=0.5, ls='--')
ax.text(x=pd.to_datetime('2013-01-01'), y=6, s='Horizon', color='black',
        fontsize=16, fontweight='bold', alpha=0.8);


The Prophet paper gives further description of simulated historical forecasts.

This cross validation procedure can be done automatically for a range of historical cutoffs using the cross_validation function. We specify the forecast horizon (horizon), and then optionally the size of the initial training period (initial) and the spacing between cutoff dates (period). By default, the initial training period is set to three times the horizon, and cutoffs are made every half horizon.
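
For example, calling cross_validation with only a horizon falls back on those defaults (a minimal sketch; df_cv_default is just an illustrative name):

from fbprophet.diagnostics import cross_validation
# With only horizon given, initial defaults to 3 * horizon (1095 days here)
# and period defaults to 0.5 * horizon (182.5 days here).
df_cv_default = cross_validation(m, horizon='365 days')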

The output of cross_validation is a dataframe with the true values y and the out-of-sample forecast values yhat, at each simulated forecast date and for each cutoff date. In particular, a forecast is made for every observed point between cutoff and cutoff + horizon. This dataframe can then be used to compute error measures of yhat vs. y.
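
This also means simple aggregate error measures can be computed directly from the dataframe with pandas; for instance, a plain mean absolute error (a minimal sketch, not one of Prophet's built-in diagnostics):

# overall mean absolute error of yhat vs. y across all cutoffs and horizons
mae = (df_cv['yhat'] - df_cv['y']).abs().mean()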

Here we do cross-validation to assess prediction performance on a horizon of 365 days, starting with 730 days of training data at the first cutoff and then making predictions every 180 days. On this 8-year time series, this corresponds to 11 total forecasts.


In [4]:
%%R
df.cv <- cross_validation(m, initial = 730, period = 180, horizon = 365, units = 'days')
head(df.cv)


WARNING:rpy2.rinterface_lib.callbacks:R[write to console]: Making 11 forecasts with cutoffs between 2010-02-15 and 2015-01-20

          ds        y     yhat yhat_lower yhat_upper     cutoff
1 2010-02-16 8.242493 8.954992   8.423614   9.496403 2010-02-15
2 2010-02-17 8.008033 8.721365   8.226481   9.219106 2010-02-15
3 2010-02-18 8.045268 8.605072   8.103985   9.104483 2010-02-15
4 2010-02-19 7.928766 8.526855   8.023088   9.042035 2010-02-15
5 2010-02-20 7.745003 8.268741   7.757920   8.779416 2010-02-15
6 2010-02-21 7.866339 8.599935   8.084956   9.060284 2010-02-15

In [5]:
from fbprophet.diagnostics import cross_validation
df_cv = cross_validation(m, initial='730 days', period='180 days', horizon = '365 days')
df_cv.head()


Out[5]:
          ds      yhat  yhat_lower  yhat_upper         y     cutoff
0 2010-02-16  8.956572    8.460049    9.460400  8.242493 2010-02-15
1 2010-02-17  8.723004    8.200557    9.236561  8.008033 2010-02-15
2 2010-02-18  8.606823    8.070835    9.123754  8.045268 2010-02-15
3 2010-02-19  8.528688    8.034782    9.042712  7.928766 2010-02-15
4 2010-02-20  8.270706    7.754891    8.739012  7.745003 2010-02-15

In R, the argument units must be a type accepted by as.difftime, which is weeks or shorter. In Python, the strings for initial, period, and horizon should be in the format used by pandas Timedelta, which accepts units of days or shorter.
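
For reference, a string like '365 days' is simply parsed by pandas (a minimal sketch):

import pandas as pd
pd.Timedelta('365 days')   # Timedelta('365 days 00:00:00')
pd.Timedelta('12 hours')   # smaller units such as hours are also accepted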

The performance_metrics utility can be used to compute some useful statistics of the prediction performance (yhat, yhat_lower, and yhat_upper compared to y), as a function of the distance from the cutoff (how far into the future the prediction was). The statistics computed are mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), mean absolute percent error (MAPE), and coverage of the yhat_lower and yhat_upper estimates. These are computed on a rolling window of the predictions in df_cv after sorting by horizon (ds minus cutoff). By default 10% of the predictions will be included in each window, but this can be changed with the rolling_window argument.


In [6]:
%%R
df.p <- performance_metrics(df.cv)
head(df.p)


  horizon       mse      rmse       mae       mape  coverage
1 37 days 0.4971086 0.7050593 0.5075009 0.05882459 0.6765646
2 38 days 0.5029463 0.7091870 0.5125229 0.05940706 0.6765646
3 39 days 0.5252677 0.7247535 0.5186555 0.06001158 0.6751942
4 40 days 0.5326181 0.7298069 0.5215775 0.06032500 0.6788488
5 41 days 0.5401377 0.7349406 0.5226521 0.06041353 0.6838739
6 42 days 0.5438937 0.7374915 0.5230473 0.06043453 0.6891275

In [7]:
from fbprophet.diagnostics import performance_metrics
df_p = performance_metrics(df_cv)
df_p.head()


Out[7]:
  horizon       mse      rmse       mae      mape  coverage
0 37 days  0.495378  0.703831  0.505713  0.058593  0.680448
1 38 days  0.501134  0.707908  0.510680  0.059169  0.679077
2 39 days  0.523334  0.723418  0.516755  0.059766  0.677707
3 40 days  0.530625  0.728440  0.519645  0.060075  0.678849
4 41 days  0.538117  0.733565  0.520663  0.060156  0.686386
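
Metrics aggregated over the whole horizon, rather than over rolling windows, can be obtained by passing rolling_window=1 (a minimal sketch; df_p_all is just an illustrative name):

# rolling_window=1 computes the metrics over all predictions in df_cv at once
df_p_all = performance_metrics(df_cv, rolling_window=1)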

Cross validation performance metrics can be visualized with plot_cross_validation_metric, here shown for MAPE. Dots show the absolute percent error for each prediction in df_cv. The blue line shows the MAPE, where the mean is taken over a rolling window of the dots. We see for this forecast that errors around 5% are typical for predictions one month into the future, and that errors increase up to around 11% for predictions that are a year out.


In [8]:
%%R -w 10 -h 6 -u in
plot_cross_validation_metric(df.cv, metric = 'mape')



In [9]:
from fbprophet.plot import plot_cross_validation_metric
fig = plot_cross_validation_metric(df_cv, metric='mape')


The size of the rolling window in the figure can be changed with the optional argument rolling_window, which specifies the proportion of forecasts to use in each rolling window. The default is 0.1, corresponding to 10% of rows from df_cv included in each window; increasing this will lead to a smoother average curve in the figure.
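
For example, to smooth the curve by averaging over 20% of forecasts per window instead of the default 10% (a minimal sketch):

fig = plot_cross_validation_metric(df_cv, metric='mape', rolling_window=0.2)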

The initial period should be long enough to capture all of the components of the model, in particular seasonalities and extra regressors: at least a year for yearly seasonality, at least a week for weekly seasonality, etc.