## 9.2 Recursive relation

Both additive and multiplicative ARIMA models can be written in the recursive form, similar to the pure additive ETS (see Section 5.2). For the pure additive ARIMA: $$$y_{t+h} = \sum_{i=1}^K \left(\mathbf{w}_{i}' \mathbf{F}_{i}^{\lceil\frac{h}{i}\rceil-1} \mathbf{v}_{t} + \mathbf{w}_{i}' \sum_{j=1}^{\lceil\frac{h}{i}\rceil-1} \mathbf{F}_{i}^{j-1} \mathbf{g}_{i} \epsilon_{t+i\lceil\frac{h}{i}\rceil-j}\right) + \epsilon_{t+h} , \tag{9.21}$$$ and for the pure multiplicative one: $$$\log y_{t+h} = \sum_{i=1}^K \left(\mathbf{w}_{i}' \mathbf{F}_{i}^{\lceil\frac{h}{i}\rceil-1} \log \mathbf{v}_{t} + \mathbf{w}_{i}' \sum_{j=1}^{\lceil\frac{h}{i}\rceil-1} \mathbf{F}_{i}^{j-1} \mathbf{g}_{i} \log (1+\epsilon_{t+i\lceil\frac{h}{i}\rceil-j})\right) + \log(1+ \epsilon_{t+h}) , \tag{9.22}$$$ where $$i$$ corresponds to each lag of the model from 1 to $$K$$, $$\mathbf{w}_{i}$$ is the measurement vector and $$\mathbf{g}_{i}$$ is the persistence vector, both including only the $$i$$-th elements, and $$\mathbf{F}_{i}$$ is the transition matrix, including only the $$i$$-th column. Based on these recursions, point forecasts can be produced from the additive and multiplicative ARIMA models, which will be respectively: $$$\hat{y}_{t+h} = \sum_{i=1}^K \mathbf{w}_{i}' \mathbf{F}_{i}^{\lceil\frac{h}{i}\rceil-1} \mathbf{v}_{t} \tag{9.23}$$$ and: $$$\hat{y}_{t+h} = \exp\left(\sum_{i=1}^K \mathbf{w}_{i}' \mathbf{F}_{i}^{\lceil\frac{h}{i}\rceil-1} \log \mathbf{v}_{t} \right) . \tag{9.24}$$$ Note, however, that similarly to the multiplicative ETS, the point forecasts of logARIMA will not necessarily coincide with the conditional expectations. Still, based on the recursions (9.21) and (9.22), we can calculate the conditional moments of ADAM ARIMA.
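To make the recursion concrete, here is a minimal numerical sketch of the point forecast formula (9.23) for a pure additive ARIMA with a single lag component ($$K=1$$). The measurement vector, transition matrix, and state vector below are illustrative values, not a fitted model:

```python
import numpy as np

# Illustrative (not fitted) state space matrices for a single lag component
w = np.array([1.0, 1.0])        # measurement vector w_1
F = np.array([[0.9, 0.0],       # transition matrix F_1
              [0.9, 0.0]])
v_t = np.array([10.0, 2.0])     # last state vector v_t

def point_forecast(w, F, v, h, lag=1):
    """y_hat_{t+h} = w' F^(ceil(h/lag) - 1) v, i.e. (9.23) for one component."""
    power = int(np.ceil(h / lag)) - 1
    return w @ np.linalg.matrix_power(F, power) @ v

print([point_forecast(w, F, v_t, h) for h in (1, 2, 3)])
```

For a seasonal component, the same function would be called with `lag` equal to the seasonal periodicity, so that the matrix power only advances every `lag` steps, mirroring the $$\lceil h/i \rceil$$ terms in the recursion.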

### 9.2.1 Conditional moments of ADAM ARIMA

In the case of the pure additive ARIMA, the moments correspond to the ones for ETS, discussed in Section 5.1, and follow directly from (9.21): \begin{equation*} \begin{aligned} \mu_{y,t+h} = \mathrm{E}(y_{t+h}|t) = & \sum_{i=1}^K \left(\mathbf{w}_{i}' \mathbf{F}_{i}^{\lceil\frac{h}{i}\rceil-1} \right) \mathbf{v}_{t} \\ \sigma^2_{h} = \mathrm{V}(y_{t+h}|t) = & \left( \sum_{i=1}^K \left(\mathbf{w}_{i}' \sum_{j=1}^{\lceil\frac{h}{i}\rceil-1} \mathbf{F}_{i}^{j-1} \mathbf{g}_{i} \mathbf{g}'_{i} (\mathbf{F}_{i}')^{j-1} \mathbf{w}_{i} \right) + 1 \right) \sigma^2 \end{aligned} . \end{equation*} When it comes to the multiplicative ARIMA model, the conditional moments depend on the assumed distribution and might become quite complicated. Here is an example of the conditional logarithmic mean for the log-normal distribution, assuming that $$(1+\epsilon_t) \sim \mathrm{log}\mathcal{N}\left(-\frac{\sigma^2}{2},\sigma^2 \right)$$, based on (9.22): $$$\mu_{\log y,t+h} = \mathrm{E}(\log y_{t+h}|t) = \sum_{i=1}^K \left(\mathbf{w}_{i}' \mathbf{F}_{i}^{\lceil\frac{h}{i}\rceil-1} \right) \log \mathbf{v}_{t} -\left(\sum_{i=1}^K \mathbf{w}_{i}' \sum_{j=1}^{\lceil\frac{h}{i}\rceil-1} \mathbf{F}_{i}^{j-1} \mathbf{g}_{i} + 1\right) \frac{\sigma^2}{2} . \tag{9.25}$$$ Note that the conditional logarithmic variance of the model will be the same for all logARIMA models, independent of the distributional assumptions: $$$\sigma^2_{\log y,h} = \mathrm{V}(\log y_{t+h}|t) = \left( \sum_{i=1}^K \left(\mathbf{w}_{i}' \sum_{j=1}^{\lceil\frac{h}{i}\rceil-1} \mathbf{F}_{i}^{j-1} \mathbf{g}_{i} \mathbf{g}'_{i} (\mathbf{F}_{i}')^{j-1} \mathbf{w}_{i} \right) + 1 \right) \sigma^2 . \tag{9.26}$$$ The obtained logarithmic moments can then be used to get the ones in the original scale, using the connection between the moments of the log-normal distribution.
The conditional expectation and variance in this case can be calculated as: $$$\begin{aligned} & \mu_{y,t+h} = \mathrm{E}(y_{t+h}|t) = \exp \left(\mu_{\log y,t+h} + \frac{\sigma^2_{\log y,h}}{2} \right) \\ & \sigma^2_{h} = \mathrm{V}(y_{t+h}|t) = \left(\exp\left( \sigma^2_{\log y,h} \right) -1 \right)\exp\left(2 \mu_{\log y,t+h} + \sigma^2_{\log y,h} \right) . \end{aligned} \tag{9.27}$$$ Inserting the values from (9.25) and (9.26) into (9.27) gives the analytical solutions for the two moments.
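As a quick numerical illustration of (9.27), the sketch below converts a pair of logarithmic moments into the expectation and variance in the original scale and cross-checks them against a simulated log-normal sample. The specific moment values are made up for the example:

```python
import numpy as np

# Made-up conditional logarithmic moments, purely for illustration
mu_log, sigma2_log = 2.0, 0.1

# Equation (9.27): moments in the original scale
mu = np.exp(mu_log + sigma2_log / 2)
sigma2 = (np.exp(sigma2_log) - 1) * np.exp(2 * mu_log + sigma2_log)

# Cross-check against a simulated log-normal sample
rng = np.random.default_rng(42)
sample = np.exp(rng.normal(mu_log, np.sqrt(sigma2_log), size=500_000))
print(mu, sample.mean(), sigma2, sample.var())
```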

If some other distributions are assumed in the model, then the conditional logarithmic mean would change because the variable $$\log(1+\epsilon_t)$$ would follow a different distribution with a different mean. For example:

1. Gamma: if $$\left(1+\epsilon_t \right) \sim \mathcal{\Gamma}(\sigma^{-2}, \sigma^2)$$, then $$\log\left(1+\epsilon_t \right) \sim \mathrm{exp}\mathcal{\Gamma}(\sigma^{-2}, \sigma^2)$$, which is the exponential Gamma distribution, with the logarithmic mean $$\mathrm{E}(\log(1+\epsilon_t)) = \psi\left(\sigma^{-2}\right)+2\log(\sigma)$$;
2. Inverse Gaussian: if $$\left(1+\epsilon_t \right) \sim \mathcal{IG}(1, \sigma^2)$$, then $$\log\left(1+\epsilon_t \right) \sim \mathrm{exp}\mathcal{IG}(1, \sigma^2)$$, the exponential Inverse Gaussian distribution, which does not have a simple formula for the logarithmic mean (although it can be calculated based on its connection with the Generalised $$\mathcal{IG}$$ distribution and the formulae provided in Sichel et al., 1997).

After that, similarly to how it was done for the log-normal distribution above, the connection between the logarithmic moments and the moments in the original scale should be used to get the conditional expectation and variance. If these relations are not available or are too complicated, then simulations can be used to obtain numeric approximations (see the discussion in Section 18.1.1).
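For the Inverse Gaussian case, where no simple closed form exists, the numeric approximation mentioned above can be sketched as follows. Note that scipy's `invgauss(mu=m, scale=s)` has mean $$m \times s$$ and variance $$s^2 m^3$$, so `mu=sigma2, scale=1/sigma2` gives the required unit mean and variance $$\sigma^2$$:

```python
import numpy as np
from scipy import stats

sigma2 = 0.1
rng = np.random.default_rng(42)
# (1 + eps) ~ IG(1, sigma^2): unit mean, variance sigma^2
sample = stats.invgauss.rvs(mu=sigma2, scale=1 / sigma2,
                            size=200_000, random_state=rng)
log_mean = np.log(sample).mean()    # numeric approximation of E(log(1+eps))
print(sample.mean(), sample.var(), log_mean)
```

By Jensen's inequality the estimate should come out negative, since $$\mathrm{E}(\log x) < \log \mathrm{E}(x) = 0$$ here.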

Finally, we should remark that the formulae for the conditional moments in logARIMA are complicated mainly because of the distributional assumptions inherited from ETS. However, this allows the construction of more complicated models, some of which are discussed in Section 9.4.

### 9.2.2 Parameters' bounds

Modifying the recursions (9.21) and (9.22), we can obtain the stability condition for the parameters, similar to the one for the pure additive ETS from Section 5.4. The advantage of the pure multiplicative ARIMA formulated in the form (9.16) is that, in contrast with the pure multiplicative ETS models, an adequate stability condition can be obtained for it. The condition itself is the same as for the pure additive ARIMA and/or ETS: the ARIMA model will be stable when the absolute values of all non-zero eigenvalues of the discount matrices $$\mathbf{D}_{i}$$ are lower than one, where: $$$\mathbf{D}_{i} = \mathbf{F}_{i} -\mathbf{g}_{i} \mathbf{w}_{i}' . \tag{9.28}$$$ Hyndman et al. (2008) show that the stability condition for SSOE models corresponds to the invertibility condition of ARIMA (Section 8.2.4), so the model can be checked either via the discount matrix (9.28) or via the MA polynomials (8.58).
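As a small sketch of the stability check (9.28), consider an ARMA(1,1) written in a single-lag SSOE form with illustrative parameters $$\phi$$ and $$\theta$$ (so that $$\mathbf{F} = \phi$$, $$\mathbf{w} = 1$$, $$\mathbf{g} = \phi + \theta$$). The discount matrix then reduces to $$-\theta$$, which matches the classical invertibility condition $$|\theta|<1$$:

```python
import numpy as np

phi, theta = 0.7, 0.3            # illustrative ARMA(1,1) parameters

# SSOE form of ARMA(1,1): F = phi, w = 1, g = phi + theta
F = np.array([[phi]])
w = np.array([[1.0]])
g = np.array([[phi + theta]])

# Discount matrix (9.28)
D = F - g @ w.T
eigenvalues = np.linalg.eigvals(D)
stable = bool(np.all(np.abs(eigenvalues) < 1))
print(D, stable)                 # D reduces to [[-theta]] here
```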

When it comes to stationarity, state space ARIMA is always non-stationary if any of the differences $$d_j \neq 0$$. So, a different mechanism is needed for the stationarity check. The simplest approach is to expand the AR(p) polynomials, ignoring I(d), fill in the transition matrix $$\mathbf{F}$$, and then calculate its eigenvalues. If all of them are lower than one in absolute value, the model is stationary. The same condition can be checked via the roots of the AR(p) polynomial (8.57). However, the eigenvalues approach is more computationally efficient, and I recommend using it instead of the conventional polynomial calculation.
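The eigenvalues-based stationarity check can be sketched for a hypothetical AR(2) as follows. The eigenvalues of the companion (transition) matrix are the inverses of the AR polynomial roots, so the two checks necessarily agree:

```python
import numpy as np

phi1, phi2 = 1.2, -0.35          # hypothetical AR(2) coefficients

# Companion form of the AR(2) part (transition matrix F)
F = np.array([[phi1, phi2],
              [1.0,  0.0]])
eigenvalues = np.linalg.eigvals(F)
stationary = bool(np.all(np.abs(eigenvalues) < 1))

# Equivalent classical check: roots of 1 - phi1*z - phi2*z^2 outside unit circle
roots = np.roots([-phi2, -phi1, 1.0])
print(stationary, np.sort(np.abs(eigenvalues)), np.sort(np.abs(roots)))
```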

If both stability and stationarity conditions for ARIMA are satisfied, we will call the bounds that the AR / MA parameters form “admissible”, similar to how they are called in ETS. Note that ARIMA has no “usual” or “traditional” bounds.

### References

• Hyndman, R.J., Koehler, A.B., Ord, J.K., Snyder, R.D., 2008. Forecasting with Exponential Smoothing. Springer Berlin Heidelberg.
• Sichel, H.S., Dohm, C.E., Kleingeld, W.J., 1997. The logarithmic generalized inverse Gaussian distribution (LNGIG). South African Statistical Journal. 31, 125–149. https://hdl.handle.net/10520/AJA0038271X_560