
2.4 Rolling origin

Remark. The text in this section is based on the vignette for the greybox package, written by the author of this textbook.

When there is a need to select the most appropriate forecasting model or method for the data, the forecaster usually splits the sample into two parts: the in-sample (aka “training set”) and the holdout sample (aka out-sample or “test set”). The model is estimated on the in-sample, and its forecasting performance is evaluated using some error measure on the holdout sample.
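As a minimal illustration of such a split in R, here is a sketch with a naive forecast (the last in-sample value) standing in for a proper model, and the Mean Absolute Error as the error measure; the data and the forecast are assumptions, purely for illustration:

x <- rnorm(100, 100, 10)  # some artificial data
insample <- x[1:90]       # first 90 observations for estimation
holdout <- x[91:100]      # last 10 observations for evaluation
forecastValues <- rep(tail(insample,1), 10)  # naive forecast
mean(abs(holdout - forecastValues))          # Mean Absolute Error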

Using this procedure only once is known as “fixed origin” evaluation. However, evaluating a forecast in this way might give a misleading impression of its accuracy: if, for example, the time series contains outliers or level shifts, a poor model might perform better in a fixed origin evaluation than a more appropriate one. An alternative procedure, known as “rolling origin” evaluation, is much more robust to such issues.

In rolling origin evaluation the forecasting origin is repeatedly moved forward and forecasts are produced from each origin (Tashman, 2000). This technique allows obtaining several forecast errors for the time series, which gives a better understanding of how the models perform. It can be considered a time series analogue of cross-validation techniques (Wikipedia, 2020). Here is a simple graphical representation, courtesy of Nikos Kourentzes.


Figure 2.2: Rolling origin illustrated, by Nikos Kourentzes

There are several ways in which this can be done.

2.4.1 Principles of rolling origin

Figure 2.3 (Svetunkov and Petropoulos, 2018) illustrates the basic idea behind rolling origin. White cells correspond to the in-sample data, while the light grey cells correspond to the three-steps-ahead forecasts. The time series in the figure has 25 observations, and forecasts are produced from 8 origins, starting from origin 15. The model is estimated on the first in-sample set and forecasts are produced from it. Then another observation is added to the end of the in-sample set, the test set is advanced, and the procedure is repeated. The process stops when there is no more data left. This is a rolling origin with a constant holdout sample size. As a result of this procedure, eight one-step-ahead, two-steps-ahead and three-steps-ahead forecasts are produced. Based on them, we can calculate the preferred error measures and choose the best performing model (a minimal code sketch of the procedure is given after the figure).


Figure 2.3: Rolling origin with constant holdout size
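To make the mechanics concrete, here is a minimal manual sketch of the scheme in Figure 2.3, without any packages, with a naive forecast standing in for a proper forecasting model (the data and the forecast are assumptions, purely for illustration):

x <- rnorm(25, 100, 10)  # a series of 25 observations, as in the figure
h <- 3                   # forecasting horizon
origins <- 15:22         # 8 origins with a constant three-steps-ahead holdout
errors <- matrix(NA, nrow=h, ncol=length(origins),
                 dimnames=list(paste0("h",1:h), paste0("origin",origins)))
for(i in seq_along(origins)){
    insample <- x[1:origins[i]]
    holdout <- x[origins[i]+(1:h)]
    forecastValues <- rep(tail(insample,1), h)  # naive: repeat the last value
    errors[,i] <- holdout - forecastValues
}
apply(abs(errors), 1, mean)  # MAE for each forecast horizon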

Another option for producing forecasts from 8 origins would be to start from origin 17 instead of 15, as shown in Figure 2.4. In this case the procedure continues until origin 22, from which the last full set of three-steps-ahead forecasts is produced, and then carries on with a decreasing forecasting horizon: a two-steps-ahead forecast is produced from origin 23, and only a one-step-ahead forecast from origin 24. As a result, we obtain 8 one-step-ahead forecasts, 7 two-steps-ahead forecasts and 6 three-steps-ahead forecasts. This is a rolling origin with a non-constant holdout sample size, which can be useful on small samples, when we do not have any observations to spare.


Figure 2.4: Rolling origin with non-constant holdout size

Finally, in both of the cases above the in-sample size was increasing. However, for some research purposes we might need a constant in-sample size. Figure 2.5 demonstrates such a setup: in each iteration we add an observation to the end of the in-sample series and remove one from its beginning (dark grey cells).


Figure 2.5: Rolling origin with constant in-sample size

2.4.2 Rolling origin in R

The ro() function from the greybox package (written by Yves Sagaert and Ivan Svetunkov in 2016 on the way to the International Symposium on Forecasting) implements rolling origin evaluation for any function you like with a predefined call, and returns the desired value. It relies heavily on two variables, call and value, so it is important to understand how to formulate them in order to get the desired results. ro() is a very flexible function, but as a result it is not a very simple one. In this subsection we will see how it works on a couple of examples.

We start with a simple example, generating a series from a normal distribution:

x <- rnorm(100,100,10)
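Since the series is randomly generated, the exact numbers in the outputs below will differ from run to run. If reproducibility is needed, the seed can be fixed before generating the data (the specific value here is arbitrary):

set.seed(41)             # arbitrary seed, fixed before the call to rnorm()
x <- rnorm(100, 100, 10) # the same series will now be drawn on every run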

We use an ARIMA(0,1,1) model implemented in the stats package:

ourCall <- "predict(arima(x=data,order=c(0,1,1)),n.ahead=h)"

The call that we specify includes two important elements: data and h. data specifies where the in-sample values are located in the function we want to use, and it needs to be called “data” in the call; h tells the function where the forecasting horizon is specified. Note that in this example we use arima(x=data, order=c(0,1,1)), which estimates the desired ARIMA(0,1,1) model, and then predict(..., n.ahead=h), which produces an h-steps-ahead forecast from that model.
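The same template works for other functions, as long as data and h appear in the right places in the call. For example, assuming that the forecast package is installed, an ETS model could be evaluated instead (a sketch, one of many possible calls):

ourCall <- "forecast::forecast(forecast::ets(data), h=h)"
ourValue <- "mean"  # point forecasts are stored in the element called "mean"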

Having the call, we also need to specify what the function should return. This can be the conditional mean (point forecasts), prediction intervals, the parameters of a model, or, in fact, anything that the model returns (e.g. the name of the fitted model and its likelihood). However, what ro() returns depends on what the function returns: if it is a vector, then ro() will produce a matrix (with values for each origin in columns); if it is a matrix, then an array is returned; finally, if it is a list, then a list of lists is returned.
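For example, predict() applied to an ARIMA model returns a list with both the point forecasts (pred) and their standard errors (se), so both could be requested at once, in which case ro() should return a matrix for each of them:

ourValue <- c("pred","se")  # collect both point forecasts and standard errors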

In order not to overcomplicate things, we will collect the conditional mean from the predict() function:

ourValue <- c("pred")

NOTE: If you do not specify the value to return, the function will try to return everything, but it might fail, especially if many values are returned. So, to be on the safe side, always provide the value when possible.

Now that we have specified ourCall and ourValue, we can produce forecasts from the model using rolling origin. Let’s say that we want three-steps-ahead forecasts from 8 origins, with the default values of all the other parameters:

library(greybox)
returnedValues1 <- ro(x, h=3, origins=8, call=ourCall, value=ourValue)

The function returns a list with all the values that we asked for, plus the actual values from the holdout sample. Based on those, we can calculate some basic error measure, for example the scaled Mean Absolute Error (sMAE) of Petropoulos and Kourentzes (2015).
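For a given horizon \(h\), this measure is the mean absolute error across the \(O\) origins, scaled by the mean of the series \(\bar{y}\) (matching the calculation in the code below):

\[\text{sMAE}_h = \frac{\frac{1}{O}\sum_{j=1}^{O}\left|y_{t_j+h}-\hat{y}_{t_j+h}\right|}{\bar{y}},\]

where \(t_j\) is the \(j\)-th forecasting origin and \(\hat{y}_{t_j+h}\) is the forecast produced from it. In R: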

apply(abs(returnedValues1$holdout - returnedValues1$pred),1,mean,na.rm=TRUE) /
            mean(returnedValues1$actuals)
##         h1         h2         h3 
## 0.05436554 0.06335097 0.06748928

In this example we use the apply() function in order to distinguish between the different forecasting horizons and get an idea of how the model performs for each of them. These numbers do not tell us much on their own, but if we compare the performance of this model with that of another one, we can infer which of the two is more appropriate for the data. For example, applying ARIMA(1,1,2) to the same data, we get:

ourCall <- "predict(arima(x=data,order=c(1,1,2)),n.ahead=h)"
returnedValues2 <- ro(x, h=3, origins=8, call=ourCall, value=ourValue)
apply(abs(returnedValues2$holdout - returnedValues2$pred),1,mean,na.rm=TRUE) /
            mean(returnedValues2$actuals)
##         h1         h2         h3 
## 0.05479787 0.06398497 0.06733754

Comparing these errors with the ones from the previous model, we can conclude which of the two approaches is more adequate for the data.

We can also plot the forecasts from the rolling origin, which shows how the selected model behaves:

par(mfcol=c(2,1))
plot(returnedValues1)
plot(returnedValues2)

In this example the forecasts from different origins are close to each other. This is because the data is stationary and the model is quite stable.

The rolling origin function from the greybox package also allows working with explanatory variables and can return prediction intervals if needed. Some further examples are discussed in the vignette of the package: vignette("ro","greybox").
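The evaluation schemes from Subsection 2.4.1 are also available through parameters of ro(). Here is a sketch, assuming the logical parameters co (constant holdout) and ci (constant in-sample) of the function; check ?ro for the exact defaults, as they may change between package versions:

# Non-constant holdout sample size (Figure 2.4):
returnedValuesNC <- ro(x, h=3, origins=8, call=ourCall, value=ourValue,
                       co=FALSE)
# Constant in-sample size (Figure 2.5):
returnedValuesCI <- ro(x, h=3, origins=8, call=ourCall, value=ourValue,
                       ci=TRUE)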

Practically speaking, if we have a set of forecasts from different models, we can analyse the distribution of error measures and draw conclusions about the performance of the models. Here is an example of such an analysis for \(h=1\), based on absolute errors:

aeValuesh1 <- cbind(abs(returnedValues1$holdout - returnedValues1$pred)[1,],
                    abs(returnedValues2$holdout - returnedValues2$pred)[1,])
colnames(aeValuesh1) <- c("ARIMA(0,1,1)","ARIMA(1,1,2)")
boxplot(aeValuesh1)
points(apply(aeValuesh1,2,mean),pch=16,col="red")

The boxplots above can be interpreted as any other boxplots applied to random variables (see, for example, the discussion in Section 2.2 of Svetunkov, 2021c).

References

• Petropoulos, F., Kourentzes, N., 2015. Forecast combinations for intermittent demand. Journal of the Operational Research Society. 66, 914–924. https://doi.org/10.1057/jors.2014.62
• Svetunkov, I., 2021c. Statistics for business analytics. https://openforecast.org/sba/ (version: [01.09.2021])
• Svetunkov, I., Petropoulos, F., 2018. Old dog, new tricks: a modelling view of simple moving averages. International Journal of Production Research. 56, 6034–6047. https://doi.org/10.1080/00207543.2017.1380326
• Tashman, L.J., 2000. Out-of-sample tests of forecasting accuracy: An analysis and review. International Journal of Forecasting. 16, 437–450. https://doi.org/10.1016/S0169-2070(00)00065-0
• Wikipedia, 2020. Cross-validation (statistics). https://en.wikipedia.org/wiki/Cross-validation_(statistics) (version: 2020-11-04)