While ETS has 30 models to choose from, ARIMA has many more options. For example, selecting a non-seasonal ARIMA with/without the constant and restricting the orders with \(p \leq 3\), \(d \leq 2\), and \(q \leq 3\) leads to \(4 \times 3 \times 4 \times 2 = 96\) possible ARIMA models, because each of the orders starts from zero. If we increase the maximum orders to 5 or even more, we will need to go through hundreds of models. Adding the seasonal part increases this number by an order of magnitude. This means that we cannot just test all possible ARIMA models and select the most appropriate one. We need to be smart in the selection process.
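To see where this number comes from, a quick sanity check of the count can be done in R (the variable names here are mine, chosen purely for illustration):

```r
# Count of ARIMA models with p <= 3, d <= 2, q <= 3, each tried with and
# without the constant. The orders start from zero, hence the "+ 1".
pMax <- 3
dMax <- 2
qMax <- 3
nModels <- (pMax + 1) * (dMax + 1) * (qMax + 1) * 2
print(nModels)
```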
Hyndman and Khandakar (2008) developed an efficient mechanism for ARIMA order selection based on statistical tests (for stationarity and seasonality), reducing the number of models to test to a reasonable amount. Svetunkov and Boylan (2020) developed an alternative mechanism relying purely on information criteria, which works well on seasonal data but may potentially lead to models that overfit the data (this is implemented in the auto.msarima() function in the smooth package). We also have the Box-Jenkins approach discussed in Section 8.3 for the selection of ARIMA orders, which relies on the analysis of the ACF (Subsection 8.3.2) and PACF (Subsection 8.3.3). Still, we should not forget the limitations of that approach (Subsection 8.3.4). Finally, Sagaert and Svetunkov (2022) proposed the stepwise trace forward approach (discussed briefly in Section 15.3), which relies on partial correlations and uses information criteria to test the model on each iteration. Building upon all of that, I have developed the following algorithm for order selection of ADAM ARIMA:
1. Determine the orders of differences by fitting all possible combinations of ARIMA models with \(P_j=0\) and \(Q_j=0\) for all lags \(j\). This includes trying the models with and without the constant term. The orders \(D_j\) are then determined via the model with the lowest IC;
2. Then iteratively, starting from the highest seasonal lag and moving to the lag of 1, do for every lag \(m_j\):
    a. Calculate the ACF of the residuals of the model;
    b. Find the highest value of the autocorrelation coefficient that corresponds to a multiple of the respective seasonal lag \(m_j\);
    c. Define what the MA order should be, based on the lag of the autocorrelation coefficient from the previous step, and include it in the ARIMA model;
    d. Calculate the IC, and if it is lower than for the previous best model, keep the new MA order;
    e. Repeat (a) – (d) while there is an improvement in the IC;
    f. Do steps (a) – (e) for the AR order, substituting the ACF with the PACF of the residuals of the best model;
    g. Move to the next seasonal lag and go to step (a);
3. Try out several restricted ARIMA models with \(q=d\) (this is based on step 1 and the restrictions provided by the user). The motivation for this comes from the idea of the relation between ARIMA and ETS (Section 8.4).
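To make the logic of step 2 more concrete, here is a rough sketch of the MA selection (sub-steps (a) – (e)) for the non-seasonal lag, using stats::arima() and AIC instead of adam() purely for illustration; the variable names and the cap on the candidate lags are my assumptions, not part of the package:

```r
# Sketch of sub-steps (a)-(e): iteratively add MA terms suggested by the ACF
# of the residuals, keeping them only if the information criterion improves.
y <- BJsales              # Box-Jenkins sales data from the datasets package
d <- 2                    # order of differences, assumed found in step 1
qMax <- 3                 # user-provided restriction on the MA order
bestModel <- arima(y, order=c(0, d, 0))
bestIC <- AIC(bestModel)
repeat {
    # (a) ACF of the residuals of the current best model (dropping lag 0)
    acfValues <- acf(residuals(bestModel), plot=FALSE)$acf[-1]
    # (b)-(c) the lag with the highest autocorrelation, within the allowed
    # orders, defines the MA order to try next
    qCandidate <- which.max(abs(acfValues[1:qMax]))
    newModel <- arima(y, order=c(0, d, qCandidate))
    # (d)-(e) keep the new order only if the IC improves; otherwise stop
    if (AIC(newModel) < bestIC) {
        bestModel <- newModel
        bestIC <- AIC(newModel)
    } else {
        break
    }
}
```

The same loop is then repeated with the PACF for the AR order (sub-step (f)), and with multiples of each seasonal lag for the seasonal orders.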
As you can see, this algorithm relies on the Box-Jenkins methodology but takes it with a pinch of salt, checking every time whether the proposed order improves the model or not. The motivation for selecting the MA orders before the AR ones is based on the understanding of what the AR model implies for forecasting (Section 8.1.1). In a way, it is safer to have an ARIMA(0,d,q) model than an ARIMA(p,d,0) one, because the former is less prone to overfitting than the latter. Finally, the proposed algorithm is faster than the algorithm of Svetunkov and Boylan (2020) and is more modest in the orders it selects.
In R, in order to start the algorithm, you need to provide select=TRUE in the orders parameter. Here is an example with the Box-Jenkins sales data:
```r
adamARIMAModel <- adam(BJsales, model="NNN",
                       orders=list(ar=3,i=2,ma=3,select=TRUE),
                       h=10, holdout=TRUE)
```
In this example, orders=list(ar=3,i=2,ma=3,select=TRUE) tells the function that the maximum orders to check are \(p\leq 3\), \(d\leq 2\), and \(q\leq 3\). The resulting model is ARIMA(0,2,2), which has the fit shown in Figure 15.1.
The resulting model will be parsimonious when optimal initials are used. If we want to have a more flexible model, we can use a different initialisation, and in some cases, the algorithm will select a model with higher orders of AR, I and MA.
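For instance, one way to obtain such a more flexible setup is to switch to backcasting for the initialisation (the object name below is made up for this example):

```r
# Same selection as above, but with backcasting initialisation instead of
# the optimised initial values; this may result in higher ARIMA orders.
adamARIMABack <- adam(BJsales, model="NNN",
                      orders=list(ar=3,i=2,ma=3,select=TRUE),
                      h=10, holdout=TRUE, initial="backcasting")
```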
Based on the relation between ARIMA and ETS (see Section 8.4), we do not need to test some of the combinations of models when selecting ARIMA orders. For example, if we already consider ETS(A,N,N), we do not need to check the ARIMA(0,1,1) model. The recommendations for what to skip in different circumstances were discussed in Section 9.4. Still, there are various ways to construct an ETS + ARIMA model, with different possible sequences of ETS and ARIMA selection. We suggest starting with ETS and then moving to the selection of ARIMA orders. This way, we build upon a robust forecasting model and see whether it can be improved further by introducing elements that are not already there. Note that given the complexity of the task of estimating all parameters of ETS and ARIMA together, it is advised to use backcasting (see Section 11.4.1) for the initialisation of such a model. Here is an example in R:
```r
adamETSARIMAModel <- adam(AirPassengers, model="PPP",
                          orders=list(ar=c(3,3),i=c(2,1),ma=c(3,3),select=TRUE),
                          h=10, holdout=TRUE, initial="back")
adamETSARIMAModel
```
```
## Time elapsed: 1.11 seconds
## Model estimated using auto.adam() function: ETS(MMM)+SARIMA(3,0,0)(1,0,0)
## Distribution assumed in the model: Gamma
## Loss function type: likelihood; Loss function value: 468.391
## Persistence vector g:
##  alpha   beta  gamma
## 0.5109 0.0046 0.0000
##
## ARMA parameters of the model:
## AR:
##   phi1   phi2    phi3   phi1
## 0.2154 0.2296 -0.0402 0.2084
##
## Sample size: 134
## Number of estimated parameters: 8
## Number of degrees of freedom: 126
## Information criteria:
##      AIC     AICc      BIC     BICc
## 952.7819 953.9339 975.9647 978.7858
##
## Forecast errors:
## ME: 3.416; MAE: 16.347; RMSE: 20.154
## sCE: 12.908%; Asymmetry: 31.9%; sMAE: 6.178%; sMSE: 0.58%
## MASE: 0.681; RMSSE: 0.646; rMAE: 0.164; rRMSE: 0.163
```
The resulting model is ETS(M,M,M) with AR elements: three non-seasonal and one seasonal AR, which improve the fit of the model and hopefully result in more accurate forecasts.
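If needed, the point forecasts and prediction intervals of the selected model can then be produced in the conventional way (a usage sketch, assuming the smooth package is loaded):

```r
# Produce and plot forecasts from the selected ETS+ARIMA model
adamForecast <- forecast(adamETSARIMAModel, h=10, interval="prediction")
plot(adamForecast)
```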