## 9.1 State space ARIMA

### 9.1.1 An example of State Space ARIMA

In order to understand how the state space ADAM ARIMA can be formulated, we consider an arbitrary example of SARIMA(1,1,2)(0,1,0)$$_4$$: $\begin{equation*} {y}_{t} (1- \phi_1 B)(1-B)(1-B^4) = \epsilon_t (1 + \theta_1 B + \theta_2 B^2), \end{equation*}$ which can be rewritten in the expanded form: $\begin{equation*} {y}_{t} (1-\phi_1 B -B + \phi_1 B^2 -B^4 +\phi_1 B^5 + B^5 -\phi_1 B^6) = \epsilon_t (1 + \theta_1 B + \theta_2 B^2), \end{equation*}$ or, after moving all the lagged values to the right-hand side: $\begin{equation*} {y}_{t} = (1+\phi_1) {y}_{t-1} -\phi_1 {y}_{t-2} + {y}_{t-4} -(1+\phi_1) {y}_{t-5} + \phi_1 {y}_{t-6} + \theta_1 \epsilon_{t-1} + \theta_2 \epsilon_{t-2} + \epsilon_t . \end{equation*}$ Now we can define the states of the model: \begin{aligned} & v_{1,t-1} = (1+\phi_1) y_{t-1} + \theta_1 \epsilon_{t-1} \\ & v_{2,t-2} = -\phi_1 y_{t-2} + \theta_2 \epsilon_{t-2} \\ & v_{3,t-3} = 0 \\ & v_{4,t-4} = y_{t-4} \\ & v_{5,t-5} = -(1+\phi_1) y_{t-5} \\ & v_{6,t-6} = \phi_1 y_{t-6} \end{aligned} . \tag{9.1} In our example, all the MA parameters are zero for $$j>2$$, which is why they disappear from the states above. Furthermore, there are no elements for lag three, so that state can be dropped. The measurement equation of the ARIMA model in this situation can be written as: $\begin{equation*} {y}_{t} = \sum_{j=1,2,4,5,6} v_{j,t-j} + \epsilon_t , \end{equation*}$ based on which the actual value at some lag $$i$$ can also be written as: $$${y}_{t-i} = \sum_{j=1,2,4,5,6} v_{j,t-j-i} + \epsilon_{t-i}. 
\tag{9.2}$$$ Inserting (9.2) in (9.1) and shifting the lags from $$t-i$$ to $$t$$ in every equation, we get the state space ARIMA: \begin{equation*} \begin{aligned} &{y}_{t} = \sum_{j=1,2,4,5,6} v_{j,t-j} + \epsilon_t \\ & v_{1,t} = (1+\phi_1) \sum_{j=1}^6 v_{j,t-j} + (1+\phi_1+\theta_1) \epsilon_t \\ & v_{2,t} = -\phi_1 \sum_{j=1}^6 v_{j,t-j} + (-\phi_1+\theta_2) \epsilon_t \\ & v_{4,t} = \sum_{j=1}^6 v_{j,t-j} + \epsilon_t \\ & v_{5,t} = -(1+\phi_1) \sum_{j=1}^6 v_{j,t-j} -(1+\phi_1) \epsilon_t \\ & v_{6,t} = \phi_1 \sum_{j=1}^6 v_{j,t-j} + \phi_1 \epsilon_t \end{aligned} . \end{equation*} This model can then be applied to the data, and forecasts can be produced similarly to how it was done for the pure additive ETS model (see Section 5.1). Furthermore, it can be shown that any ARIMA model can be written in the compact form (5.4), meaning that the same principles as for ETS can be applied to ARIMA and that the two models can be united in one framework.
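To check this formulation numerically, here is a minimal sketch in Python with NumPy, assuming hypothetical parameter values ($$\phi_1=0.5$$, $$\theta_1=0.3$$, $$\theta_2=-0.2$$) and zero pre-sample values for both actuals and states. It runs the expanded SARIMA(1,1,2)(0,1,0)$$_4$$ recursion and the state space recursion above on the same error series and verifies that they produce identical actuals:

```python
import numpy as np

# Hypothetical parameters for SARIMA(1,1,2)(0,1,0)_4
phi1, theta1, theta2 = 0.5, 0.3, -0.2
eta = {1: 1 + phi1, 2: -phi1, 4: 1.0, 5: -(1 + phi1), 6: phi1}
theta = {1: theta1, 2: theta2, 4: 0.0, 5: 0.0, 6: 0.0}

rng = np.random.default_rng(42)
T = 50
eps = rng.normal(size=T)

def lag(x, t, i):
    """Value of x at t-i, with zero pre-sample values."""
    return x[t - i] if t - i >= 0 else 0.0

# Expanded ARIMA recursion
y = np.zeros(T)
for t in range(T):
    y[t] = (sum(eta[j] * lag(y, t, j) for j in eta)
            + theta1 * lag(eps, t, 1) + theta2 * lag(eps, t, 2) + eps[t])

# State space recursion: measurement and transition equations
v = {j: np.zeros(T) for j in eta}
y_ss = np.zeros(T)
for t in range(T):
    s = sum(lag(v[j], t, j) for j in eta)   # sum of lagged states
    y_ss[t] = s + eps[t]                    # measurement equation
    for j in eta:                           # transition equations
        v[j][t] = eta[j] * s + (eta[j] + theta[j]) * eps[t]

print(np.allclose(y, y_ss))  # True
```

The equality holds exactly because each state satisfies $$v_{j,t} = \eta_j y_t + \theta_j \epsilon_t$$ by construction, so the sum of lagged states recovers the original recursion.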

In a more general case, in order to develop state space ARIMA, we will use the multiple seasonal ARIMA, discussed in Section 8.2.3: $\begin{equation*} y_t \prod_{j=0}^n \Delta^{D_j} (B^{m_j}) \varphi^{P_j}(B^{m_j}) = \epsilon_t \prod_{j=0}^n \vartheta^{Q_j}(B^{m_j}) . \end{equation*}$ This model can be represented in an easier-to-digest form by expanding the polynomials on the left-hand side of the equation, moving all the previous values to the right-hand side, and then expanding the MA polynomials: $$$y_t = \sum_{j=1}^K \eta_j y_{t-j} + \sum_{j=1}^K \theta_j \epsilon_{t-j} + \epsilon_t . \tag{9.3}$$$ Each coefficient of the lagged $$y_{t-j}$$ can be treated as a parameter of the polynomial. In our SARIMA(1,1,2)(0,1,0)$$_4$$ example from the previous subsection, they were: \begin{equation*} \begin{aligned} & \eta_1 = 1+\phi_1 \\ & \eta_2 = -\phi_1 \\ & \eta_3 = 0 \\ & \eta_4 = 1 \\ & \eta_5 = -(1+\phi_1) \\ & \eta_6 = \phi_1 \end{aligned} . \end{equation*} In equation (9.3), $$K$$ is the order of the highest polynomial, calculated as $$K=\max\left(\sum_{j=0}^n (P_j + D_j)m_j, \sum_{j=0}^n Q_j m_j\right)$$. If, for example, the MA order is higher than the sum of ARI orders, then the parameters $$\eta_i=0$$ for $$i>\sum_{j=0}^n (P_j + D_j)m_j$$. The same holds in the opposite situation, when the sum of ARI orders is higher than the MA order: $$\theta_i=0$$ for all $$i>\sum_{j=0}^n Q_j m_j$$. Based on this, we can define states for each of these elements: $$$v_{i,t-i} = \eta_i y_{t-i} + \theta_i \epsilon_{t-i}, \tag{9.4}$$$ leading to the following model based on (9.4) and (9.3): $$$y_t = \sum_{j=1}^K v_{j,t-j} + \epsilon_t . \tag{9.5}$$$ This can be considered the measurement equation of the state space ARIMA. Now, considering the previous values of $$y_t$$ based on (9.5), $$y_{t-i}$$ is equal to: $$$y_{t-i} = \sum_{j=1}^K v_{j,t-j-i} + \epsilon_{t-i} . 
\tag{9.6}$$$ The value (9.6) can be inserted into (9.4) in order to obtain the transition equation: $$$v_{i,t-i} = \eta_i \sum_{j=1}^K v_{j,t-j-i} + (\eta_i + \theta_i) \epsilon_{t-i}. \tag{9.7}$$$ Shifting the lags in (9.7) from $$t-i$$ to $$t$$ and combining it with the measurement equation (9.5) leads to the SSOE state space model: \begin{aligned} &{y}_{t} = \sum_{j=1}^K v_{j,t-j} + \epsilon_t \\ &v_{i,t} = \eta_i \sum_{j=1}^K v_{j,t-j} + (\eta_i + \theta_i) \epsilon_{t} \text{ for each } i=\{1, 2, \dots, K \} \end{aligned}, \tag{9.8} which can be formulated in the conventional form as a pure additive ADAM model (Section 5.1): \begin{equation*} \begin{aligned} &{y}_{t} = \mathbf{w}' \mathbf{v}_{t-\mathbf{l}} + \epsilon_t \\ &\mathbf{v}_{t} = \mathbf{F} \mathbf{v}_{t-\mathbf{l}} + \mathbf{g} \epsilon_t \end{aligned}, \end{equation*} with the following values of the matrices: \begin{aligned} \mathbf{F} = \begin{pmatrix} \eta_1 & \eta_1 & \dots & \eta_1 \\ \eta_2 & \eta_2 & \dots & \eta_2 \\ \vdots & \vdots & \ddots & \vdots \\ \eta_K & \eta_K & \dots & \eta_K \end{pmatrix}, & \mathbf{w} = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}, \\ \mathbf{g} = \begin{pmatrix} \eta_1 + \theta_1 \\ \eta_2 + \theta_2 \\ \vdots \\ \eta_K + \theta_K \end{pmatrix}, & \mathbf{v}_{t} = \begin{pmatrix} v_{1,t} \\ v_{2,t} \\ \vdots \\ v_{K,t} \end{pmatrix}, & \mathbf{l} = \begin{pmatrix} 1 \\ 2 \\ \vdots \\ K \end{pmatrix} \end{aligned}. \tag{9.9} The states in this model do not have any specific meaning: they merely represent combinations of actual values and error terms, i.e. pieces of the ARIMA model. Furthermore, there are zero states in this model, corresponding to the zero parameters of the ARI and MA polynomials. These can be dropped to make the model even more compact.
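The construction above can be sketched programmatically. The following Python/NumPy snippet, using hypothetical parameter values for the SARIMA(1,1,2)(0,1,0)$$_4$$ example, first recovers the parameters $$\eta_j$$ by multiplying the ARI polynomials via convolution, then assembles $$\mathbf{F}$$, $$\mathbf{w}$$, $$\mathbf{g}$$, and $$\mathbf{l}$$ and runs the recursion with zero pre-sample states:

```python
import numpy as np

phi1, theta1, theta2 = 0.5, 0.3, -0.2  # hypothetical parameters

# Expand (1 - phi1 B)(1 - B)(1 - B^4): np.convolve multiplies polynomials
# whose coefficient sequences are ordered by increasing power of B
poly = np.convolve(np.convolve([1.0, -phi1], [1.0, -1.0]),
                   [1.0, 0.0, 0.0, 0.0, -1.0])
eta = -poly[1:]                      # eta_j is the negated coefficient at B^j
K = len(eta)                         # K = 6 here
theta = np.zeros(K)
theta[:2] = theta1, theta2           # the remaining MA parameters are zero

# Matrices of the state space form (9.9)
F = np.tile(eta[:, None], (1, K))    # every column of F equals eta
w = np.ones(K)
g = eta + theta
lags = np.arange(1, K + 1)           # state i is used at its own lag i

# Run the recursion
rng = np.random.default_rng(42)
T = 50
eps = rng.normal(size=T)
v = np.zeros((T, K))                 # v[t, i] corresponds to v_{i+1,t}
y = np.zeros(T)
for t in range(T):
    v_lagged = np.array([v[t - lags[i], i] if t - lags[i] >= 0 else 0.0
                         for i in range(K)])
    y[t] = w @ v_lagged + eps[t]     # measurement equation
    v[t] = F @ v_lagged + g * eps[t] # transition equation
```

With $$\phi_1=0.5$$ this produces $$\boldsymbol{\eta} = (1.5, -0.5, 0, 1, -1.5, 0.5)'$$, matching the parameters listed above, and the produced `y` coincides with the direct recursion (9.3).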

In general, state space ARIMA looks more complicated than the original model in the conventional form, but it puts the model on the same ground as ETS in ADAM (Section 5), making the two directly comparable via information criteria and allowing us to combine them easily, not to mention making it possible to compare ARIMA models of any orders (e.g. with different orders of differencing) or to introduce multiple seasonality and explanatory variables.

### 9.1.3 State space ARIMA with constant

If we want to add a constant to the model (similar to how it was done in Section 8.1.4), we need to modify equation (9.3): $$$y_t = \sum_{j=1}^K \eta_j y_{t-j} + \sum_{j=1}^K \theta_j \epsilon_{t-j} + a_0 + \epsilon_t . \tag{9.10}$$$ This then leads to the appearance of a new state: $$$v_{K+1,t} = a_0 , \tag{9.11}$$$ and a modified measurement equation: $$$y_t = \sum_{j=1}^{K+1} v_{j,t-j} + \epsilon_t , \tag{9.12}$$$ with the following transition equation: \begin{aligned} & v_{i,t} = \eta_i \sum_{j=1}^{K+1} v_{j,t-j} + (\eta_i + \theta_i) \epsilon_{t} , \text{ for } i=\{1, 2, \dots, K\} \\ & v_{K+1, t} = v_{K+1, t-1} . \end{aligned} \tag{9.13} The state space equations (9.12) and (9.13) lead to the following matrices: \begin{aligned} \mathbf{F} = \begin{pmatrix} \eta_1 & \dots & \eta_1 & \eta_1 \\ \eta_2 & \dots & \eta_2 & \eta_2 \\ \vdots & \vdots & \ddots & \vdots \\ \eta_K & \dots & \eta_K & \eta_K \\ 0 & \dots & 0 & 1 \end{pmatrix}, & \mathbf{w} = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \\ 1 \end{pmatrix}, \\ \mathbf{g} = \begin{pmatrix} \eta_1 + \theta_1 \\ \eta_2 + \theta_2 \\ \vdots \\ \eta_K + \theta_K \\ 0 \end{pmatrix}, & \mathbf{v}_{t} = \begin{pmatrix} v_{1,t} \\ v_{2,t} \\ \vdots \\ v_{K,t} \\ v_{K+1,t} \end{pmatrix}, & \mathbf{l} = \begin{pmatrix} 1 \\ 2 \\ \vdots \\ K \\ 1 \end{pmatrix} \end{aligned}. \tag{9.14}
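As a quick illustration, the matrices in (9.14) can be assembled as follows. This is a sketch in Python/NumPy, assuming hypothetical values of $$\eta_j$$, $$\theta_j$$, and $$a_0$$:

```python
import numpy as np

# Hypothetical parameter values
eta = np.array([1.5, -0.5, 0.0, 1.0, -1.5, 0.5])
theta = np.array([0.3, -0.2, 0.0, 0.0, 0.0, 0.0])
a0 = 0.1                              # the constant
K = len(eta)

F = np.zeros((K + 1, K + 1))
F[:K, :] = eta[:, None]               # first K rows: eta_i in every column
F[K, K] = 1.0                         # the constant state carries itself over
w = np.ones(K + 1)
g = np.concatenate([eta + theta, [0.0]])  # the constant is not updated by the error
lags = np.concatenate([np.arange(1, K + 1), [1]])  # the constant uses lag 1

v0 = np.zeros(K + 1)
v0[K] = a0                            # initialise the constant state with a_0
```

The last row of $$\mathbf{F}$$ and the zero in $$\mathbf{g}$$ together guarantee that $$v_{K+1,t}$$ stays equal to $$a_0$$ for all $$t$$.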

Note that the constant term introduced in this model has a different meaning, depending on the order of differences of the model. For example, if $$D_j=0$$ for all $$j$$, it acts as an intercept, while for $$d=1$$ it acts as a drift.
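To see the drift behaviour, consider ARIMA(0,1,0) with a constant, for which $$\eta_1=1$$ and $$K=1$$. Setting $$\epsilon_t=0$$ in the state space recursion, the following sketch (Python, with a hypothetical value of $$a_0$$) shows that the produced series increases by $$a_0$$ at every step:

```python
import numpy as np

a0 = 0.5            # hypothetical constant
T = 10
v1 = np.zeros(T)    # v_{1,t}, with zero pre-sample value
y = np.zeros(T)
for t in range(T):
    v1_lag = v1[t - 1] if t >= 1 else 0.0
    y[t] = v1_lag + a0     # measurement: v_{1,t-1} + v_{2,t-1} with eps_t = 0
    v1[t] = v1_lag + a0    # transition with eta_1 = 1 and v_{2,t-1} = a0
print(np.diff(y))          # constant steps of size a0
```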

### 9.1.4 Multiplicative ARIMA

In order to connect ARIMA with ETS, we also need to define the multiplicative version of the model. This implies that the error term $$(1+\epsilon_t)$$ enters the model multiplicatively. The state space ARIMA in this case is formulated using logarithms in the following way: \begin{aligned} &{y}_{t} = \exp \left( \sum_{j=1}^K \log v_{j,t-j} + \log(1+\epsilon_t) \right) \\ &\log v_{i,t} = \eta_i \sum_{j=1}^K \log v_{j,t-j} + (\eta_i + \theta_i) \log(1+\epsilon_t) \text{ for each } i=\{1, 2, \dots, K \} \end{aligned}. \tag{9.15} The model (9.15) can be written in the following more general form: \begin{aligned} &{y}_{t} = \exp \left( \mathbf{w}' \log \mathbf{v}_{t-\mathbf{l}} + \log(1+\epsilon_t) \right) \\ &\log \mathbf{v}_{t} = \mathbf{F} \log \mathbf{v}_{t-\mathbf{l}} + \mathbf{g} \log(1+\epsilon_t) \end{aligned}, \tag{9.16} where $$\mathbf{w}$$, $$\mathbf{F}$$, $$\mathbf{v}_t$$, $$\mathbf{g}$$ and $$\mathbf{l}$$ are defined as before for the pure additive ARIMA (Section 9.1), e.g. in equation (9.14). This model is equivalent to applying ARIMA to log-transformed data, but at the same time it shares some similarities with the pure multiplicative ETS from Section 6.1. The main advantage of this formulation is that the model has analytical solutions for the conditional moments and a well-defined $$h$$ steps ahead conditional distribution, which simplifies working with the model, in contrast to the pure multiplicative ETS.
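The equivalence with ARIMA applied to the log-transformed data can be checked numerically. The sketch below (Python/NumPy, hypothetical parameters, $$K=1$$) runs the recursion (9.16) and compares the logarithm of the produced series with the additive ARMA(1,1) recursion on $$\log y_t$$, driven by the error $$\log(1+\epsilon_t)$$:

```python
import numpy as np

eta1, theta1 = 0.7, 0.2    # hypothetical parameters, K = 1
rng = np.random.default_rng(1)
T = 30
eps = rng.normal(scale=0.05, size=T)

# logARIMA recursion (9.16) with zero pre-sample log-state
log_v = np.zeros(T)
y = np.zeros(T)
for t in range(T):
    lv = log_v[t - 1] if t >= 1 else 0.0
    y[t] = np.exp(lv + np.log1p(eps[t]))
    log_v[t] = eta1 * lv + (eta1 + theta1) * np.log1p(eps[t])

# The same model as additive ARMA(1,1) on the log scale
u = np.log1p(eps)
x = np.zeros(T)
for t in range(T):
    x[t] = (eta1 * x[t - 1] + theta1 * u[t - 1] if t >= 1 else 0.0) + u[t]

print(np.allclose(np.log(y), x))  # True
```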

To distinguish the additive ARIMA from the multiplicative one, we will use the notation “logARIMA” for the latter in this book, pointing out what such a model is equivalent to: applying ARIMA to the log-transformed data.

Finally, it is worth mentioning that, due to the logarithmic transform, the logARIMA model is suitable for cases of time-varying variance (heteroscedasticity), similar to the multiplicative error ETS models.