2.4 Rolling origin

Remark. The text in this section is based on the vignette for the greybox package, written by the author of this monograph.

When there is a need to select the most appropriate forecasting model or method for the data, the forecaster usually splits the sample into two parts: the in-sample (aka "training set") and the holdout sample (aka "out-sample" or "test set"). The model is estimated on the in-sample, and its forecasting performance is evaluated using some error measure on the holdout sample.

Using this procedure only once is known as “fixed origin” evaluation. However, this might give a misleading impression of the accuracy of forecasting methods. If, for example, the time series contains outliers or level shifts, a poor model might perform better in fixed origin evaluation than a more appropriate one just by chance. So it makes sense to have a more robust evaluation technique, where the model’s performance is evaluated several times, not just once. An alternative procedure known as “rolling origin” evaluation is one such technique.

In rolling origin evaluation, the forecasting origin is repeatedly moved forward by a fixed number of observations, and forecasts are produced from each origin (Tashman, 2000). This technique allows obtaining several forecast errors for the same time series, which gives a better understanding of how the models perform. It can be considered a time series analogue of cross-validation (Wikipedia, 2020). Here is a simple graphical representation, courtesy of Nikos Kourentzes.

Figure 2.4: Rolling origin illustrated, by Nikos Kourentzes

The plot in Figure 2.4 shows how the origin moves forward and how the point and interval forecasts of the model change with it. As a result, this procedure gives information about the performance of the model over a set of observations, rather than over a single, possibly unrepresentative, one.

There are several ways in which this can be done.

2.4.1 Principles of rolling origin

Figure 2.5 (Svetunkov and Petropoulos, 2018) illustrates the basic idea behind rolling origin. White cells correspond to the in-sample data, while the light grey cells correspond to the three-steps-ahead forecasts. The time series in the figure has 25 observations, and forecasts are produced from eight origins, starting from observation 15. The model is estimated on the first in-sample set, and forecasts are created for the holdout. Next, another observation is added to the end of the in-sample set, the test set is advanced, and the procedure is repeated. The process stops when there is no data left. This is a rolling origin with a constant holdout sample size. As a result of this procedure, eight sets of one to three steps ahead forecasts are produced. Based on them, we can calculate the preferred error measures and choose the best performing model (see Section 2.1.2).

Figure 2.5: Rolling origin with constant holdout size

Another option for producing forecasts via rolling origin is to continue the evaluation even when the test sample becomes smaller than the forecast horizon, as shown in Figure 2.6. In this case, the last complete set of three-steps-ahead forecasts is produced from origin 22, after which the procedure continues with a decreasing forecasting horizon: a two-steps-ahead forecast is produced from origin 23, and only a one-step-ahead forecast is produced from origin 24. As a result, we obtain ten one-step-ahead forecasts, nine two-steps-ahead forecasts, and eight three-steps-ahead forecasts. This is a rolling origin with a non-constant holdout sample size, which can be helpful on small samples, when not enough observations are available.

Figure 2.6: Rolling origin with non-constant holdout size

Finally, in both cases above, the in-sample size was increasing. However, a constant in-sample size might be needed for some research purposes. Figure 2.7 demonstrates such a setup: in each iteration, we add an observation to the end of the in-sample series and remove one from its beginning (dark grey cells).

Figure 2.7: Rolling origin with constant in-sample size

2.4.2 Rolling origin in R

The ro() function from the greybox package (written by Yves Sagaert and Ivan Svetunkov in 2016 on the way to the International Symposium on Forecasting) implements rolling origin evaluation for any function with a predefined call and returns the desired value. It relies heavily on two variables, call and value, so it is important to understand how to formulate them in order to get the desired results. ro() is a very flexible function, but as a result it is not a simple one. In this subsection, we will see how it works on a couple of examples.

We start with a simple example, generating a series from the normal distribution:
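A minimal sketch of such a setup (the seed, the sample size of 100, and the distribution parameters are chosen here purely for illustration):

library(greybox)
# generate 100 observations from N(100, 10^2)
set.seed(41)
x <- rnorm(100, 100, 10)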

We use an ARIMA(0,1,1) model implemented in the stats package (this model is discussed in Section 8):
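A sketch of such a call, following the conventions of the package vignette:

# estimate ARIMA(0,1,1) on the data and produce an h-steps-ahead forecast
ourCall <- "predict(arima(x=data, order=c(0,1,1)), n.ahead=h)"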

The call that we specify includes two important elements: data and h. data specifies where the in-sample values are located in the function that we want to use, and it needs to be called "data" in the call; h tells the function where the forecasting horizon is specified. Note that in this example we use arima(x=data, order=c(0,1,1)), which produces the desired ARIMA(0,1,1) model, and then predict(..., n.ahead=h), which produces an h-steps-ahead forecast from that model.

Having the call, we also need to specify what the function should return. This can be the conditional mean (point forecasts), prediction intervals, the parameters of a model, or, in fact, anything that the model returns (e.g. name of the fitted model and its likelihood). However, there are some differences in what ro() returns depending on what the function returns. If it is a vector, then ro() will produce a matrix (with values for each origin in columns). If it is a matrix, then an array is returned. Finally, if it is a list, then a list of lists is returned.

In order not to overcomplicate things, we start by collecting the conditional mean from the predict() function:
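For predict() applied to an ARIMA model from the stats package, the point forecasts are stored in the pred element, so the value can be specified as:

# the element of the returned object that ro() should collect
ourValue <- "pred"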

Remark. If you do not specify the value to return, the function will try to return everything, but it might fail, especially if many values are returned. So, to be on the safe side, always provide the value when possible.

Now that we have specified ourCall and ourValue, we can produce forecasts from the model using rolling origin. Let’s say that we want three-steps-ahead forecasts and eight origins with the default values of all the other parameters:
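A sketch of the respective command, using the variables defined above:

# rolling origin evaluation: 3 steps ahead, 8 origins
returnedValues1 <- ro(x, h=3, origins=8, call=ourCall, value=ourValue)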

The same can be achieved using the following loop:
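Here is a rough equivalent (a sketch assuming a constant holdout sample size; the real ro() function handles more options and does additional checks):

h <- 3
origins <- 8
obs <- length(x)
# matrices for the point forecasts and the respective holdout values
ourForecasts <- matrix(NA, h, origins, dimnames=list(paste0("h",1:h), NULL))
ourHoldout <- ourForecasts
for(i in 1:origins){
    # the in-sample ends h+origins-i observations before the end of the series
    insampleEnd <- obs - h - origins + i
    ourModel <- arima(x[1:insampleEnd], order=c(0,1,1))
    ourForecasts[,i] <- predict(ourModel, n.ahead=h)$pred
    ourHoldout[,i] <- x[insampleEnd + (1:h)]
}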

The function returns a list with all the values that we asked for plus the actual values from the holdout sample. We can calculate some basic error measure based on those values, for example, scaled Absolute Error (Petropoulos and Kourentzes, 2015):
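A sketch of such a calculation, using the elements of the returned list:

# scaled Absolute Error for each horizon, averaged over the eight origins
apply(abs(returnedValues1$holdout - returnedValues1$pred), 1, mean, na.rm=TRUE) /
    mean(returnedValues1$actuals)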

##         h1         h2         h3 
## 0.06600204 0.06600481 0.05304379

In this example, we use the apply() function to distinguish between the different forecasting horizons and get an idea of how the model performs for each of them. These numbers do not tell us much on their own, but if we compare the performance of this model with that of an alternative one, we can infer whether one of the models is more appropriate for the data. For example, applying ARIMA(1,1,2) to the same data, we will get:
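A sketch of the respective code (the same procedure with a different order of the model):

ourCall2 <- "predict(arima(x=data, order=c(1,1,2)), n.ahead=h)"
returnedValues2 <- ro(x, h=3, origins=8, call=ourCall2, value=ourValue)
apply(abs(returnedValues2$holdout - returnedValues2$pred), 1, mean, na.rm=TRUE) /
    mean(returnedValues2$actuals)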

##         h1         h2         h3 
## 0.07292054 0.07331151 0.05524114

Comparing these errors with the ones from the previous model, we can conclude which of the approaches is more suitable for the data.

We can also plot the forecasts from the rolling origin, which shows how the models behave:
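The objects returned by ro() have a plot() method, so a sketch of the code behind Figure 2.8 is:

# plot the actuals and the forecasts from each origin for the two models
par(mfcol=c(2,1))
plot(returnedValues1)
plot(returnedValues2)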

Figure 2.8: Rolling origin performance of two forecasting methods

In Figure 2.8, the forecasts from different origins are close to each other. This is because the data is stationary, and both models produce flat lines as forecasts.

The rolling origin function from the greybox package also allows working with explanatory variables and returning prediction intervals if needed. Some further examples are discussed in the vignette of the package: vignette("ro","greybox").

Practically speaking, if we have a set of forecasts from different models, we can analyse the distribution of error measures and come to conclusions about the performance of the models. Here is an example with the analysis of performance for \(h=1\) based on absolute errors:
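A sketch of the code behind Figure 2.9, using the objects produced above:

# absolute errors of the two models for h=1 across the eight origins
absErrors <- list("ARIMA(0,1,1)"=abs(returnedValues1$holdout[1,] - returnedValues1$pred[1,]),
                  "ARIMA(1,1,2)"=abs(returnedValues2$holdout[1,] - returnedValues2$pred[1,]))
boxplot(absErrors)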

Figure 2.9: Boxplots of error measures of two methods.

The boxplots in Figure 2.9 can be interpreted in the same way as any other boxplots applied to random variables (see, for example, the discussion in Section 2.2 of I. Svetunkov, 2022a).

References

• Petropoulos, F., Kourentzes, N., 2015. Forecast combinations for intermittent demand. Journal of the Operational Research Society. 66, 914–924. https://doi.org/10.1057/jors.2014.62

• Svetunkov, I., 2022a. Statistics for business analytics. https://openforecast.org/sba/ (version: 31.03.2022)

• Svetunkov, I., Petropoulos, F., 2018. Old dog, new tricks: a modelling view of simple moving averages. International Journal of Production Research. 56, 6034–6047. https://doi.org/10.1080/00207543.2017.1380326

• Tashman, L.J., 2000. Out-of-sample tests of forecasting accuracy: An analysis and review. International Journal of Forecasting. 16, 437–450. https://doi.org/10.1016/S0169-2070(00)00065-0

• Wikipedia, 2020. Cross-validation (statistics). https://en.wikipedia.org/wiki/Cross-validation_(statistics) (version: 2020-11-04)