This book is in Open Review. I want your feedback to make the book better for you and other readers.

## 14.2 ADAM ARIMA order selection

While ETS has 30 models to choose from, ARIMA has many more options. For example, selecting the non-seasonal ARIMA with / without constant, restricting the orders with $$p \leq 3$$, $$d \leq 2$$ and $$q \leq 3$$ (each order running from zero up to its maximum), leads to the combination of $$4 \times 3 \times 4 \times 2 = 96$$ possible ARIMA models. If we increase the possible orders to 5 or even more, we will need to go through hundreds of models. Adding the seasonal part increases this number by an order of magnitude. This means that we cannot just test all possible ARIMA models and select the most appropriate one; we need to be smart in the selection process.
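The size of the candidate pool can be reproduced with a quick calculation (Python here purely for illustration): each order runs from zero up to its maximum, and allowing the model with and without the constant doubles the count.

```python
def n_arima_models(p_max, d_max, q_max):
    # Orders run from 0 up to the maximum (so p_max + 1 options for p, etc.),
    # and trying the model with / without constant doubles the count.
    return (p_max + 1) * (d_max + 1) * (q_max + 1) * 2

print(n_arima_models(3, 2, 3))  # 96
print(n_arima_models(5, 2, 5))  # 216
```

Raising the maximum AR and MA orders from 3 to 5 already more than doubles the pool, before the seasonal part is even considered.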

Hyndman and Khandakar (2008) developed an efficient mechanism of ARIMA order selection based on statistical tests (for stationarity and seasonality), reducing the number of models to test to a reasonable amount. I. Svetunkov and Boylan (2020b) developed an alternative mechanism, relying purely on information criteria, which works especially well on seasonal data, but potentially may lead to models that overfit the data (this is implemented in the auto.ssarima() and auto.msarima() functions in the smooth package). We also have the Box-Jenkins approach for ARIMA order selection, which relies on the analysis of ACF and PACF, but we should not forget the limitations of that approach. Finally, Sagaert and Svetunkov (2021) proposed the stepwise trace forward approach, which relies on partial correlations and uses the information criteria to test the model at each iteration. Building upon all of that, I have developed the following algorithm for order selection of ADAM ARIMA:

1. Determine the order of differences by fitting all possible combinations of ARIMA models with $$P_j=0$$ and $$Q_j=0$$ for all lags $$j$$. This includes trying the models with and without the constant term. The order $$D_j$$ is then determined via the model with the lowest IC;
2. Then iteratively, starting from the highest seasonal lag and moving to the lag of 1, do the following for every lag $$m_j$$:
    a. Calculate the ACF of the residuals of the model;
    b. Find the highest autocorrelation coefficient that corresponds to a multiple of the respective seasonal lag $$m_j$$;
    c. Define what the MA order should be, based on the lag of the autocorrelation coefficient from the previous step, and include it in the ARIMA model;
    d. Calculate the IC, and if it is lower than for the previous best model, keep the new MA order;
    e. Repeat (a) - (d) while there is an improvement in IC;
    f. Do steps (a) - (e) for the AR order, substituting the ACF with the PACF of the residuals of the best model;
    g. Move to the next seasonal lag;
3. Try out several restricted ARIMA models of the order $$q=d$$ (this is based on (1) and the restrictions provided by the user). The motivation for this comes from the idea of the relation between ARIMA and ETS.
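The IC-driven core of the loop above (keep a new order only while the information criterion improves, stop at the first non-improvement) can be sketched as a toy greedy search. Everything below is a made-up illustration rather than code from the smooth package; `toy_ic()` is a hypothetical stand-in for fitting an ARIMA model and computing its AICc.

```python
def toy_ic(q):
    # Made-up IC curve: the fit improves up to q = 2, after which the
    # penalty for extra parameters outweighs the gain in likelihood.
    return 500 - 30 * min(q, 2) + 5 * max(q - 2, 0)

def select_q_greedily(q_max):
    # Greedy, IC-driven selection: accept a higher MA order only while
    # the IC keeps decreasing, mirroring steps (d) and (e) above.
    best_q, best_ic = 0, toy_ic(0)
    for q in range(1, q_max + 1):
        candidate_ic = toy_ic(q)
        if candidate_ic < best_ic:
            best_q, best_ic = q, candidate_ic
        else:
            # First non-improvement stops the search for this lag.
            break
    return best_q, best_ic

print(select_q_greedily(5))  # (2, 440)
```

The same loop is then rerun for the AR order with the PACF in place of the ACF, so the procedure never has to enumerate the full grid of candidate models.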

As you can see, this algorithm relies on the idea of the Box-Jenkins methodology, but takes it with a pinch of salt, checking every time whether the proposed order improves the model or not. The motivation for selecting the MA orders before the AR ones is based on the understanding of what an AR model implies for forecasting. In a way, it is safer to have an ARIMA(0,d,q) model than an ARIMA(p,d,0) one, because the former is less prone to overfitting than the latter. Finally, the proposed algorithm is faster than the algorithm of I. Svetunkov and Boylan (2020b) and is more modest in the number of selected orders of the model.

In order to start the algorithm, you need to provide the parameter select=TRUE in the orders list. Here is an example with the Box-Jenkins data:

```r
adamARIMAModel <- adam(BJsales, model="NNN", orders=list(ar=3,i=2,ma=3,select=TRUE),
                       silent=FALSE, h=10, holdout=TRUE)
## Evaluating models with different distributions... default :  Selecting ARIMA orders...
## Selecting differences...
## Selecting ARMA... |-\|/
## The best ARIMA is selected. Done!
```

```r
adamARIMAModel
## Time elapsed: 0.81 seconds
## Model estimated using auto.adam() function: ARIMA(0,2,2)
## Distribution assumed in the model: Normal
## Loss function type: likelihood; Loss function value: 243.2819
## ARMA parameters of the model:
## MA:
## theta1[1] theta2[1]
##   -0.7515   -0.0109
##
## Sample size: 140
## Number of estimated parameters: 5
## Number of degrees of freedom: 135
## Information criteria:
##      AIC     AICc      BIC     BICc
## 496.5638 497.0115 511.2720 512.3783
##
## Forecast errors:
## ME: 3.224; MAE: 3.339; RMSE: 3.794
## sCE: 14.156%; sMAE: 1.466%; sMSE: 0.028%
## MASE: 2.825; RMSSE: 2.488; rMAE: 0.927; rRMSE: 0.923
```

In this example, orders=list(ar=3,i=2,ma=3,select=TRUE) tells the function that the maximum orders to check are $$p\leq 3$$, $$d\leq 2$$, and $$q\leq 3$$.

### References

Hyndman, Rob J, and Yeasmin Khandakar. 2008. “Automatic Time Series Forecasting: The Forecast Package for R.” Journal of Statistical Software 27 (3): 1–22. https://www.jstatsoft.org/article/view/v027i03.

Sagaert, Yves R., and Ivan Svetunkov. 2021. “Variables Selection Using Partial Correlations and Information Criteria.” Department of Management Science, Lancaster University.

Svetunkov, Ivan, and John E. Boylan. 2020b. “State-space ARIMA for Supply-Chain Forecasting.” International Journal of Production Research 58 (3): 818–27. doi:10.1080/00207543.2019.1600764.