
## 9.3 Distributional assumptions of ADAM ARIMA

Following the same idea as in the pure additive and pure multiplicative ETS models, we can formulate state space ARIMA with different distributions, selecting those that align with the type of the model. For additive ARIMA:

1. Normal: $$\epsilon_t \sim \mathcal{N}(0, \sigma^2)$$;
2. Laplace: $$\epsilon_t \sim \mathcal{Laplace}(0, s)$$;
3. S: $$\epsilon_t \sim \mathcal{S}(0, s)$$;
4. Generalised Normal: $$\epsilon_t \sim \mathcal{GN}(0, s, \beta)$$;
5. Asymmetric Laplace: $$\epsilon_t \sim \mathcal{ALaplace}(0, s, \alpha)$$.
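Three of these distributions have direct analogues in `scipy.stats` (the S and Asymmetric Laplace distributions do not), which can be used to illustrate that, with location zero, the additive error term has zero mean. A minimal sketch, with illustrative parameter values:

```python
from scipy import stats

# Illustrative values for the scale and shape parameters
sigma, s, beta = 1.5, 0.7, 1.8

# Additive ADAM ARIMA error distributions available in scipy.stats
# (the S and Asymmetric Laplace distributions have no direct scipy analogue)
errors = {
    "Normal":             stats.norm(loc=0, scale=sigma),
    "Laplace":            stats.laplace(loc=0, scale=s),
    "Generalised Normal": stats.gennorm(beta, loc=0, scale=s),
}

for name, dist in errors.items():
    # Location 0 implies a zero mean for these symmetric distributions,
    # so the conditional mean is driven purely by the model structure
    print(f"{name}: mean = {dist.mean():.4f}, variance = {dist.var():.4f}")
```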

For multiplicative ARIMA:

1. Inverse Gaussian: $$\left(1+\epsilon_t \right) \sim \mathcal{IG}(1, s)$$;
2. Log Normal: $$\left(1+\epsilon_t \right) \sim \text{log}\mathcal{N}\left(-\frac{\sigma^2}{2}, \sigma^2\right)$$;
3. Gamma: $$\left(1+\epsilon_t \right) \sim \Gamma(s^{-1}, s)$$.
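These parameterisations can be checked numerically. The sketch below uses scipy's parameterisations and assumes, for illustration, that $$s$$ plays the role of a dispersion parameter (so that, e.g., the Gamma and Inverse Gaussian variants have variance $$s$$ when the mean is one); under each of them $$\mathrm{E}(1+\epsilon_t)=1$$:

```python
import numpy as np
from scipy import stats

s = 0.1  # illustrative dispersion parameter of 1 + epsilon_t

one_plus_e = {
    # scipy's invgauss(mu, scale) has mean mu*scale and variance mu^3*scale^2,
    # so mu=s, scale=1/s gives mean 1 and variance s
    "Inverse Gaussian": stats.invgauss(mu=s, scale=1 / s),
    # log N(-sigma^2/2, sigma^2) has mean exp(-sigma^2/2 + sigma^2/2) = 1;
    # here s is treated as sigma^2 purely for illustration
    "Log Normal":       stats.lognorm(s=np.sqrt(s), scale=np.exp(-s / 2)),
    # Gamma(shape=1/s, scale=s) has mean shape*scale = 1 and variance s
    "Gamma":            stats.gamma(a=1 / s, scale=s),
}

for name, dist in one_plus_e.items():
    print(f"{name}: E(1+e) = {dist.mean():.6f}, Var(1+e) = {dist.var():.6f}")
```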

The restrictions imposed on the parameters of the model correspond to those for ETS: in the case of pure additive models, they ensure that the conditional h steps ahead mean is not impacted by the location of the distribution (thus $$\mu_\epsilon=0$$); in the case of pure multiplicative models, they ensure that the conditional h steps ahead mean equals the point forecast (thus imposing $$\mathrm{E}(1+\epsilon_t)=1$$).

### 9.3.1 Conditional distributions

When it comes to the conditional distribution of variables, ADAM ARIMA with the assumptions discussed above has closed forms for all of them. If we work with additive ARIMA, then according to the recursive relation (9.15) the h steps ahead value follows the same distribution as the error term, but with a different conditional mean and variance. For example, if $$\epsilon_t \sim \mathcal{GN}(0, s, \beta)$$, then $$y_{t+h} \sim \mathcal{GN}(\mu_{y,t+h}, s_{h}, \beta)$$, where $$s_{h}$$ is the conditional h steps ahead scale, found from the connection between the variance and the scale in the Generalised Normal distribution via: $\begin{equation*} s_h = \sqrt{\frac{\sigma^2_h \Gamma(1/\beta)}{\Gamma(3/\beta)}}. \end{equation*}$
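This conversion can be verified against `scipy.stats.gennorm`, whose variance for scale $$s$$ and shape $$\beta$$ is $$s^2 \Gamma(3/\beta)/\Gamma(1/\beta)$$. A minimal check, with illustrative values for $$\sigma^2_h$$ and $$\beta$$:

```python
import numpy as np
from scipy import stats
from scipy.special import gamma as Gamma

sigma2_h = 2.5  # illustrative h steps ahead variance
beta = 1.5      # illustrative shape parameter

# s_h = sqrt(sigma2_h * Gamma(1/beta) / Gamma(3/beta))
s_h = np.sqrt(sigma2_h * Gamma(1 / beta) / Gamma(3 / beta))

# The resulting Generalised Normal distribution recovers the target variance
recovered = stats.gennorm(beta, loc=0, scale=s_h).var()
print(f"s_h = {s_h:.6f}, recovered variance = {recovered:.6f}")
```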

Using similar principles, we can calculate scale parameters for the other distributions.
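For instance, the Laplace distribution has variance $$2s^2$$, so its conditional scale is $$s_h = \sqrt{\sigma^2_h / 2}$$. A quick check with scipy, using an arbitrary value for $$\sigma^2_h$$:

```python
import numpy as np
from scipy import stats

sigma2_h = 2.5               # illustrative h steps ahead variance
s_h = np.sqrt(sigma2_h / 2)  # Laplace: Var = 2 * s^2

# The variance of Laplace(0, s_h) recovers sigma2_h
print(stats.laplace(loc=0, scale=s_h).var())
```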

When it comes to the multiplicative models, the conditional distribution has a closed form in the case of the Log Normal (it is Log Normal as well), but not in the case of the Inverse Gaussian. In the former case, the logarithmic moments can be used directly to define the parameters of the distribution, i.e. if $$\left(1+\epsilon_t \right) \sim \text{log}\mathcal{N}\left(-\frac{\sigma^2}{2}, \sigma^2\right)$$, then $$y_{t+h} \sim \text{log}\mathcal{N}\left(\mu_{\log y,t+h}, \sigma^2_{\log y,h} \right)$$. In the latter case, simulations need to be used in order to obtain the quantile, cumulative, and density functions.
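The simulation approach for the Inverse Gaussian case can be sketched as follows. The snippet assumes, purely for illustration, a model in which the h steps ahead value equals the point forecast multiplied by the product of h independent IG error terms (the actual ADAM recursion propagates errors through the state vector), and treats $$s$$ as the variance of $$1+\epsilon_t$$:

```python
import numpy as np

rng = np.random.default_rng(42)

point_forecast = 100.0  # illustrative h steps ahead point forecast
s, h, n_sim = 0.01, 3, 100_000

# numpy's wald(mean, scale) draws from the IG distribution with the given
# mean and shape parameter; mean=1, scale=1/s gives E(1+e)=1 and Var(1+e)=s
errors = rng.wald(1.0, 1.0 / s, size=(n_sim, h))

# Each simulated trajectory multiplies the point forecast by h error terms
y_h = point_forecast * errors.prod(axis=1)

# Empirical quantiles approximate the conditional distribution of y_{t+h}
lower, upper = np.quantile(y_h, [0.025, 0.975])
print(f"mean = {y_h.mean():.2f}, 95% interval = [{lower:.2f}, {upper:.2f}]")
```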