There are different ways to formulate and implement ARIMA. The one discussed in Chapter 8 is the conventional one: the model can be estimated directly, assuming that its initialisation happened at some point before the Big Bang. In other words, the conventional ARIMA assumes that the model has no starting point: we merely observe a specific piece of data from a process without any beginning or end. Obviously, this assumption is idealistic and does not necessarily agree with reality (imagine a series of infinitely lasting sales of Siemens S45 mobile phones; do you even remember such a thing?).
But besides the conventional formulation, there are also state space forms of ARIMA, the most relevant to our topic being the one implemented in the SSOE form (Chapter 11 of Hyndman et al., 2008). Svetunkov and Boylan (2020) adapted this state space model for supply chain forecasting, developing an order selection mechanism that sidesteps hypothesis testing and relies on information criteria instead. However, the main limitation of that approach is that the resulting ARIMA model is very slow on high-frequency data with several seasonal patterns, because it is formulated based on the conventional SSOE. Luckily, the SSOE framework used in ADAM (introduced in Section 5.1) addresses this issue. The resulting model is already implemented in the msarima() function of the smooth package and was also used as the basis for ADAM ARIMA.
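To make the discussion above more tangible, here is a short R sketch of fitting a multiple seasonal ARIMA via msarima() from the smooth package. The dataset (taylor, half-hourly electricity demand from the forecast package) and the specific orders and lags are assumptions chosen purely for illustration, not a recommendation from the text:

```r
# Assumes the smooth and forecast packages are installed from CRAN
library(smooth)
library(forecast)

# taylor is a half-hourly series with daily (48) and weekly (336)
# seasonal patterns, making it a natural multiple-seasonal example.
# Fit ARIMA(0,1,1)(0,1,1)[48](0,1,1)[336] in the state space form:
# orders gives the AR, I and MA orders for each of the lags in `lags`.
ourModel <- msarima(forecast::taylor,
                    orders = list(ar = c(0, 0, 0),
                                  i  = c(1, 1, 1),
                                  ma = c(1, 1, 1)),
                    lags = c(1, 48, 336),
                    h = 48, holdout = TRUE)

summary(ourModel)
```

Because the model is formulated in the SSOE state space form discussed in this chapter, the seasonal lags enter the transition mechanism directly, which is what keeps the estimation feasible on high-frequency data.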
In this chapter, we discuss the state space ADAM ARIMA in both the pure additive and the pure multiplicative cases. We then explore the conditional moments of the model and its parameter space, and move on to its distributional assumptions (including the conditional distributions). We conclude the chapter with a discussion of the implications of the ETS+ARIMA model. The latter combination has not been discussed in the literature and might make the model unidentifiable, so an analyst using it should be cautious.