
10.2 Recursive relation

Both additive and multiplicative ARIMA models can be written in the recursive form, similar to pure additive ETS. The formulae are cumbersome in this case, but have closed forms. Here they are for the pure additive ARIMA: \[\begin{equation} y_{t+h} = \sum_{i=1}^K \mathbf{w}_{i}' \mathbf{F}_{i}^{\lceil\frac{h}{i}\rceil-1} \mathbf{v}_{t} + \mathbf{w}_{i}' \sum_{j=1}^{\lceil\frac{h}{i}\rceil-1} \mathbf{F}_{i}^{j-1} \mathbf{g}_{i} \epsilon_{t+i\lceil\frac{h}{i}\rceil-j} + \epsilon_{t+h} , \tag{10.15} \end{equation}\] and for the pure multiplicative one: \[\begin{equation} \log y_{t+h} = \sum_{i=1}^K \mathbf{w}_{i}' \mathbf{F}_{i}^{\lceil\frac{h}{i}\rceil-1} \log \mathbf{v}_{t} + \mathbf{w}_{i}' \sum_{j=1}^{\lceil\frac{h}{i}\rceil-1} \mathbf{F}_{i}^{j-1} \mathbf{g}_{i} \log (1+\epsilon_{t+i\lceil\frac{h}{i}\rceil-j}) + \log(1+ \epsilon_{t+h}) , \tag{10.16} \end{equation}\]

where \(i\) corresponds to each lag of the model from 1 to \(K\), \(\mathbf{w}_{i}\) is the measurement vector and \(\mathbf{g}_{i}\) is the persistence vector, both including only the \(i\)-th elements, and \(\mathbf{F}_{i}\) is the transition matrix, including only the \(i\)-th column. Based on this recursion, we can calculate the conditional moments of ADAM ARIMA.
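To make the recursion (10.15) more concrete, here is a minimal numeric sketch of its first term, the conditional expectation \(\sum_{i=1}^K \mathbf{w}_{i}' \mathbf{F}_{i}^{\lceil\frac{h}{i}\rceil-1} \mathbf{v}_{t}\). The function name and the list-based representation of the lag-specific matrices are illustrative, not taken from any package:

```python
import numpy as np

def arima_mean_forecast(w, F, v, lags, h):
    """Conditional h-steps-ahead expectation of a pure additive
    state space ARIMA via the recursion (10.15):
    mu = sum_i w_i' F_i^(ceil(h/lag_i)-1) v_t.
    w, F, v are lists of lag-specific measurement vectors,
    transition matrices and last available state vectors."""
    mu = 0.0
    for wi, Fi, vi, lag in zip(w, F, v, lags):
        power = int(np.ceil(h / lag)) - 1
        mu += wi @ np.linalg.matrix_power(Fi, power) @ vi
    return mu
```

For example, for a single-lag model with \(\mathbf{w} = 1\), \(\mathbf{F} = 0.5\) and the last state equal to 2, the three-steps-ahead mean is \(0.5^{2} \times 2 = 0.5\).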

10.2.1 Moments of ADAM ARIMA

In the case of the pure additive ARIMA model, the moments correspond to the ones discussed in the pure additive ETS section and follow directly from (10.15): \[\begin{equation*} \begin{aligned} \mu_{y,t+h} = \mathrm{E}(y_{t+h}|t) = & \sum_{i=1}^K \left(\mathbf{w}_{i}' \mathbf{F}_{i}^{\lceil\frac{h}{i}\rceil-1} \right) \mathbf{v}_{t} \\ \sigma^2_{h} = \mathrm{V}(y_{t+h}|t) = & \left( \sum_{i=1}^K \left(\mathbf{w}_{i}' \sum_{j=1}^{\lceil\frac{h}{i}\rceil-1} \mathbf{F}_{i}^{j-1} \mathbf{g}_{i} \mathbf{g}'_{i} (\mathbf{F}_{i}')^{j-1} \mathbf{w}_{i} \right) + 1 \right) \sigma^2 \end{aligned} . \end{equation*}\] When it comes to the multiplicative ARIMA model, using the same idea of the recursive relation as in the pure additive ETS section, we can obtain the logarithmic moments based on (10.16): \[\begin{equation} \begin{aligned} \mu_{\log y,t+h} = \mathrm{E}(\log y_{t+h}|t) = & \sum_{i=1}^d \left(\mathbf{w}_{m_i}' \mathbf{F}_{m_i}^{\lceil\frac{h}{m_i}\rceil-1} \right) \log \mathbf{v}_{t} \\ \sigma^2_{\log y,h} = \mathrm{V}(\log y_{t+h}|t) = & \left( \sum_{i=1}^d \left(\mathbf{w}_{m_i}' \sum_{j=1}^{\lceil\frac{h}{m_i}\rceil-1} \mathbf{F}_{m_i}^{j-1} \mathbf{g}_{m_i} \mathbf{g}'_{m_i} (\mathbf{F}_{m_i}')^{j-1} \mathbf{w}_{m_i} \right) + 1 \right) \sigma_{\log (1+\epsilon)}^2 \end{aligned}, \tag{10.17} \end{equation}\] where \(\sigma_{\log (1+\epsilon)}^2\) is the variance of the error term in logarithms. The obtained logarithmic moments can then be used to get the ones in the original scale, after making assumptions about the distribution of the random variable.
For example, if we assume that \(\left(1+\epsilon_t \right) \sim \mathrm{log}\mathcal{N}\left(-\frac{\sigma_{\log (1+\epsilon)}^2}{2}, \sigma_{\log (1+\epsilon)}^2\right)\), then the conditional expectation and variance can be calculated as: \[\begin{equation} \begin{aligned} & \mu_{y,t+h} = \mathrm{E}(y_{t+h}|t) = \exp \left(\mu_{\log y,t+h} + \frac{\sigma^2_{\log y,h}}{2} \right) \\ & \sigma^2_{h} = \mathrm{V}(y_{t+h}|t) = \left(\exp\left( \sigma^2_{\log y,h} \right) - 1 \right)\exp\left(2 \times \mu_{\log y,t+h} + \sigma^2_{\log y,h} \right) \end{aligned}. \tag{10.18} \end{equation}\]

If some other distribution is assumed in the model, then the connection between the logarithmic moments and the moments of that distribution should be used to get the conditional expectation and variance. If such relations are not available, then simulations can be used to obtain numeric approximations.
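The log-normal conversion in (10.18) can be sketched in a few lines; the function name here is illustrative:

```python
import numpy as np

def lognormal_moments(mu_log, sigma2_log):
    """Convert the conditional logarithmic moments (10.17) to the
    original scale under the log-normal assumption, equation (10.18):
    mean = exp(mu + sigma^2/2),
    var  = (exp(sigma^2) - 1) * exp(2*mu + sigma^2)."""
    mean = np.exp(mu_log + sigma2_log / 2)
    variance = (np.exp(sigma2_log) - 1) * np.exp(2 * mu_log + sigma2_log)
    return mean, variance
```

For instance, with \(\mu_{\log y,t+h} = 0\) and \(\sigma^2_{\log y,h} = 0\), the conditional mean is 1 and the variance is 0, as expected for a degenerate case.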

10.2.2 Parameters' bounds

Finally, by modifying the recursions (10.15) and (10.16), we can obtain the stability condition for the parameters, similar to the one for pure additive ETS. The advantage of the pure multiplicative ARIMA formulated in the form (10.14) is that an adequate stability condition can be obtained. In fact, it is the same as for the pure additive ARIMA and/or ETS. The ARIMA model is stable when the absolute values of all non-zero eigenvalues of the discount matrices \(\mathbf{D}_{i}\) are lower than one, where: \[\begin{equation} \mathbf{D}_{i} = \mathbf{F}_{i} - \mathbf{g}_{i} \mathbf{w}_{i}' . \tag{10.19} \end{equation}\]
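The check via (10.19) reduces to an eigenvalue computation. Here is a minimal sketch (the function name is illustrative), shown for an ARIMA(0,1,1) in the single source of error form, where \(\mathbf{F} = 1\), \(\mathbf{w} = 1\) and \(\mathbf{g} = 1 + \theta_1\), so that stability requires \(|\theta_1| < 1\):

```python
import numpy as np

def is_stable(F, g, w):
    """Check the stability condition of a state space ARIMA via the
    discount matrix D = F - g w' (equation (10.19)): all non-zero
    eigenvalues of D must lie inside the unit circle."""
    D = F - np.outer(g, w)
    eigvals = np.linalg.eigvals(D)
    # Zero eigenvalues are ignored by the condition
    nonzero = eigvals[np.abs(eigvals) > 1e-12]
    return bool(np.all(np.abs(nonzero) < 1))
```

With \(\theta_1 = 0.5\) the discount matrix is \(1 - 1.5 = -0.5\) and the model is stable; with \(\theta_1 = 1.5\) it is \(-1.5\) and the model is not.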

Hyndman et al. (2008) show that the stability condition corresponds to the invertibility condition of ARIMA, so the model can be checked either via the discount matrix (10.19) or via the MA polynomials (9.55).

When it comes to stationarity, the state space ARIMA is always non-stationary if the order of differences \(d \neq 0\). So a different mechanism is needed for the stationarity check. The simplest approach is to expand the AR(p) polynomials, ignoring I(d), fill in the transition matrix \(\mathbf{F}\), and then calculate its eigenvalues. If they are all lower than one in absolute value, then the model is stationary. The same condition can be checked via the roots of the AR(p) polynomial (9.54).
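The stationarity check described above can be sketched as follows, filling the transition (companion) matrix with the expanded AR coefficients; the function name is illustrative:

```python
import numpy as np

def is_stationary(ar_params):
    """Check stationarity of an AR(p) model, ignoring I(d):
    fill the companion form of the transition matrix F with the
    expanded AR coefficients and inspect its eigenvalues. The model
    is stationary if all of them lie inside the unit circle."""
    p = len(ar_params)
    F = np.zeros((p, p))
    F[0, :] = ar_params          # first row holds phi_1, ..., phi_p
    if p > 1:
        F[1:, :-1] = np.eye(p - 1)  # shift structure of the companion form
    eigvals = np.linalg.eigvals(F)
    return bool(np.all(np.abs(eigvals) < 1))
```

For an AR(1) this reduces to the familiar \(|\phi_1| < 1\): \(\phi_1 = 0.9\) gives a stationary model, while \(\phi_1 = 1.1\) does not.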

If both stability and stationarity conditions for ARIMA are satisfied, then we will call the bounds that the AR / MA parameters form "admissible", similar to how they are called in ETS. Note that there are no "usual" or "traditional" bounds for ARIMA.

References

Hyndman, Rob J., Anne B. Koehler, J. Keith Ord, and Ralph D. Snyder. 2008. Forecasting with Exponential Smoothing. Springer Berlin Heidelberg.