
## 12.1 State space ARIMA

### 12.1.1 Additive ARIMA

In order to develop the state space ARIMA, we will use the most general multiple seasonal ARIMA, discussed in the previous section: \[\begin{equation*} y_t \prod_{j=0}^n \Delta^{D_j} (B^{m_j}) \varphi^{P_j}(B^{m_j}) = \epsilon_t \prod_{j=0}^n \vartheta^{Q_j}(B^{m_j}) . \end{equation*}\] This model can be represented in an easier-to-digest form by expanding the polynomials on the left-hand side of the equation, moving all the previous values to the right-hand side, and then expanding the MA polynomials: \[\begin{equation} y_t = \sum_{j=1}^K \eta_j y_{t-j} + \sum_{j=1}^K \theta_j \epsilon_{t-j} + \epsilon_t . \tag{12.1} \end{equation}\] Here \(K\) is the order of the highest polynomial, calculated as \(K=\max\left(\sum_{j=0}^n (P_j + D_j)m_j, \sum_{j=0}^n Q_j m_j\right)\). If, for example, the MA order is higher than the sum of ARI orders, then the coefficients \(\eta_i=0\) for \(i>\sum_{j=0}^n (P_j + D_j)m_j\). The same property holds in the opposite situation, when the sum of ARI orders is higher than the MA order: in that case, \(\theta_i=0\) for \(i>\sum_{j=0}^n Q_j m_j\). Based on this, we can define a state for each of the previous elements: \[\begin{equation} v_{i,t-i} = \eta_i y_{t-i} + \theta_i \epsilon_{t-i}, \tag{12.2} \end{equation}\] leading to the following model based on (12.2) and (12.1): \[\begin{equation} y_t = \sum_{j=1}^K v_{j,t-j} + \epsilon_t . \tag{12.3} \end{equation}\] This can be considered as the measurement equation of the state space ARIMA. Now, if we consider the previous values of \(y_t\) based on (12.3), then \(y_{t-i}\) will be equal to: \[\begin{equation} y_{t-i} = \sum_{j=1}^K v_{j,t-j-i} + \epsilon_{t-i} . \tag{12.4} \end{equation}\] Inserting (12.4) into (12.2) yields the transition equation: \[\begin{equation} v_{i,t-i} = \eta_i \sum_{j=1}^K v_{j,t-j-i} + (\eta_i + \theta_i) \epsilon_{t-i}. 
\tag{12.5} \end{equation}\] This leads to the SSOE state space model based on (12.3) and (12.5): \[\begin{equation} \begin{aligned} &{y}_{t} = \sum_{j=1}^K v_{j,t-j} + \epsilon_t \\ &v_{i,t} = \eta_i \sum_{j=1}^K v_{j,t-j} + (\eta_i + \theta_i) \epsilon_{t} \text{ for each } i=\{1, 2, \dots, K \} \end{aligned}, \tag{12.6} \end{equation}\] which can be formulated in the conventional form as a pure additive model: \[\begin{equation*} \begin{aligned} &{y}_{t} = \mathbf{w}' \mathbf{v}_{t-\boldsymbol{l}} + \epsilon_t \\ &\mathbf{v}_{t} = \mathbf{F} \mathbf{v}_{t-\boldsymbol{l}} + \mathbf{g} \epsilon_t \end{aligned}, \end{equation*}\] with the following values for the matrices: \[\begin{equation} \begin{aligned} \mathbf{F} = \begin{pmatrix} \eta_1 & \eta_1 & \dots & \eta_1 \\ \eta_2 & \eta_2 & \dots & \eta_2 \\ \vdots & \vdots & \ddots & \vdots \\ \eta_K & \eta_K & \dots & \eta_K \end{pmatrix}, & \mathbf{w} = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}, \\ \mathbf{g} = \begin{pmatrix} \eta_1 + \theta_1 \\ \eta_2 + \theta_2 \\ \vdots \\ \eta_K + \theta_K \end{pmatrix}, & \mathbf{v}_{t} = \begin{pmatrix} v_{1,t} \\ v_{2,t} \\ \vdots \\ v_{K,t} \end{pmatrix}, & \boldsymbol{l} = \begin{pmatrix} 1 \\ 2 \\ \vdots \\ K \end{pmatrix} \end{aligned}. \tag{12.7} \end{equation}\] The states in this model do not have any specific meaning: they merely represent combinations of the actual values and the error terms, i.e. portions of the ARIMA model. Furthermore, there are zero states in this model, corresponding to the zero coefficients of the ARI and MA polynomials; these can be dropped to make the model even more compact.
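The recursion implied by (12.6) can be sketched in a few lines of code. This is a minimal illustration, not the ADAM implementation: the function name and array layout are ours, and the pre-sample states are set to zero for simplicity. Note that the transition reuses the same sum \(\sum_j v_{j,t-j}\) that appears in the measurement equation, so each step only needs the one-step-ahead fit and the error.

```python
import numpy as np

def ssarima_recursion(y, eta, theta):
    """Run the SSOE state space ARIMA recursion (12.6); a sketch.
    eta, theta -- length-K coefficients of the expanded ARI and MA polynomials;
    pre-sample states are set to zero for simplicity."""
    eta = np.asarray(eta, dtype=float)
    theta = np.asarray(theta, dtype=float)
    K = len(eta)
    T = len(y)
    v = np.zeros((K, T + K))          # v[i, t + K] stores the state v_{i+1, t}
    fitted = np.zeros(T)
    errors = np.zeros(T)
    for t in range(T):
        # measurement: y_t = sum_j v_{j, t-j} + e_t
        fitted[t] = sum(v[j, t + K - (j + 1)] for j in range(K))
        errors[t] = y[t] - fitted[t]
        # transition: v_{i,t} = eta_i * sum_j v_{j,t-j} + (eta_i + theta_i) e_t
        v[:, t + K] = eta * fitted[t] + (eta + theta) * errors[t]
    return fitted, errors
```

For example, for ARIMA(1,0,1) we have \(K=1\) with \(\eta_1=\phi_1\), so the recursion reduces to \(v_{1,t} = \phi_1 \hat{y}_t + (\phi_1+\theta_1)\epsilon_t\).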

### 12.1.2 An example

In order to better understand what the state space model (12.6) implies, we consider an example of SARIMA(1,1,2)(0,1,0)\(_4\): \[\begin{equation*} {y}_{t} (1- \phi_1 B)(1-B)(1-B^4) = \epsilon_t (1 + \theta_1 B + \theta_2 B^2), \end{equation*}\] which can be rewritten in the expanded form: \[\begin{equation*} {y}_{t} (1-\phi_1 B - B + \phi_1 B^2 - B^4 +\phi_1 B^5 + B^5 - \phi_1 B^6) = \epsilon_t (1 + \theta_1 B + \theta_2 B^2), \end{equation*}\] or, after moving the previous values to the right-hand side: \[\begin{equation*} {y}_{t} = (1+\phi_1) {y}_{t-1} - \phi_1 {y}_{t-2} + {y}_{t-4} - (1+\phi_1) {y}_{t-5} + \phi_1 {y}_{t-6} + \theta_1 \epsilon_{t-1} + \theta_2 \epsilon_{t-2} + \epsilon_t . \end{equation*}\] The polynomial coefficients in this case can be written as: \[\begin{equation*} \begin{aligned} & \eta_1 = 1+\phi_1 \\ & \eta_2 = -\phi_1 \\ & \eta_3 = 0 \\ & \eta_4 = 1 \\ & \eta_5 = - (1+\phi_1) \\ & \eta_6 = \phi_1 \end{aligned} , \end{equation*}\] leading to 6 states, one of which can be dropped (the third one, for which both \(\eta_3=0\) and \(\theta_3=0\)). The state space ARIMA can then be written as: \[\begin{equation*} \begin{aligned} &{y}_{t} = \sum_{j=1,2,4,5,6} v_{j,t-j} + \epsilon_t \\ & v_{1,t} = (1+\phi_1) \sum_{j=1,2,4,5,6} v_{j,t-j} + (1+\phi_1+\theta_1) \epsilon_t \\ & v_{2,t} = -\phi_1 \sum_{j=1,2,4,5,6} v_{j,t-j} + (-\phi_1+\theta_2) \epsilon_t \\ & v_{4,t} = \sum_{j=1,2,4,5,6} v_{j,t-j} + \epsilon_t \\ & v_{5,t} = -(1+\phi_1) \sum_{j=1,2,4,5,6} v_{j,t-j} -(1+\phi_1) \epsilon_t \\ & v_{6,t} = \phi_1 \sum_{j=1,2,4,5,6} v_{j,t-j} + \phi_1 \epsilon_t \end{aligned} . \end{equation*}\] This model looks more complicated than the original ARIMA in the conventional form, but it brings the model to the same ground as ETS in ADAM, making the two directly comparable via information criteria and allowing them to be easily combined, not to mention making it possible to compare an ARIMA of any order with another ARIMA (e.g. with a different order of differencing).
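The expansion of the ARI polynomial above can also be done numerically, which is handy for higher orders. The sketch below uses `np.convolve` to multiply the coefficient sequences of \((1-\phi_1 B)\), \((1-B)\) and \((1-B^4)\); the value \(\phi_1=0.5\) is an arbitrary illustrative choice.

```python
import numpy as np

# Expand the ARI polynomial of SARIMA(1,1,2)(0,1,0)_4 numerically;
# phi1 = 0.5 is an arbitrary illustrative value.
phi1 = 0.5

# Coefficients in ascending powers of B: (1 - phi1 B), (1 - B), (1 - B^4).
# Convolving coefficient sequences multiplies the polynomials.
poly = np.convolve(np.convolve([1.0, -phi1], [1.0, -1.0]),
                   [1.0, 0.0, 0.0, 0.0, -1.0])

# Moving the lagged values to the right-hand side flips the signs:
eta = -poly[1:]
```

With \(\phi_1=0.5\) this gives \(\eta = (1.5, -0.5, 0, 1, -1.5, 0.5)\), matching the coefficients \(\eta_1=1+\phi_1\), \(\eta_2=-\phi_1\), \(\eta_3=0\), \(\eta_4=1\), \(\eta_5=-(1+\phi_1)\), \(\eta_6=\phi_1\) derived above.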

### 12.1.3 State space ARIMA with constant

If we want to add a constant to the model, we need to modify equation (12.1): \[\begin{equation} y_t = \sum_{j=1}^K \eta_j y_{t-j} + \sum_{j=1}^K \theta_j \epsilon_{t-j} + a_0 + \epsilon_t . \tag{12.8} \end{equation}\] This leads to the appearance of a new state: \[\begin{equation} v_{K+1,t} = a_0 , \tag{12.9} \end{equation}\] resulting in the modified measurement equation: \[\begin{equation} y_t = \sum_{j=1}^{K+1} v_{j,t-j} + \epsilon_t , \tag{12.10} \end{equation}\] and the modified transition equations: \[\begin{equation} \begin{aligned} & v_{i,t} = \eta_i \sum_{j=1}^{K+1} v_{j,t-j} + (\eta_i + \theta_i) \epsilon_{t} , \text{ for } i=\{1, 2, \dots, K\} \\ & v_{K+1, t} = v_{K+1, t-1} . \end{aligned} \tag{12.11} \end{equation}\] The state space equations (12.10) and (12.11) lead to the following matrices: \[\begin{equation} \begin{aligned} \mathbf{F} = \begin{pmatrix} \eta_1 & \dots & \eta_1 & \eta_1 \\ \eta_2 & \dots & \eta_2 & \eta_2 \\ \vdots & \vdots & \ddots & \vdots \\ \eta_K & \dots & \eta_K & \eta_K \\ 0 & \dots & 0 & 1 \end{pmatrix}, & \mathbf{w} = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \\ 1 \end{pmatrix}, \\ \mathbf{g} = \begin{pmatrix} \eta_1 + \theta_1 \\ \eta_2 + \theta_2 \\ \vdots \\ \eta_K + \theta_K \\ 0 \end{pmatrix}, & \mathbf{v}_{t} = \begin{pmatrix} v_{1,t} \\ v_{2,t} \\ \vdots \\ v_{K,t} \\ v_{K+1,t} \end{pmatrix}, & \boldsymbol{l} = \begin{pmatrix} 1 \\ 2 \\ \vdots \\ K \\ 1 \end{pmatrix} \end{aligned}. \tag{12.12} \end{equation}\]

Note that the constant term introduced in this model has a different meaning depending on the order of differencing. For example, if all \(D_j=0\) and \(d=0\), then it acts as an intercept, while for \(d=1\) it acts as a drift.
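The matrices in (12.7) and (12.12) are mechanical to construct from the \(\eta_i\) and \(\theta_i\) coefficients. The following sketch builds them with our own naming conventions; the `constant` flag appends the extra state of (12.9)–(12.12).

```python
import numpy as np

def ssarima_matrices(eta, theta, constant=False):
    """Build F, w, g and the lag vector l from (12.7) / (12.12); a sketch.
    The optional constant adds one extra state that carries itself over."""
    eta = np.asarray(eta, dtype=float)
    theta = np.asarray(theta, dtype=float)
    K = len(eta)
    n = K + int(constant)
    F = np.zeros((n, n))
    F[:K, :] = eta[:, None]              # row i repeats eta_i across columns
    w = np.ones(n)                       # measurement sums all the states
    g = np.zeros(n)
    g[:K] = eta + theta                  # the constant state is not updated
    lags = np.arange(1, K + 1)
    if constant:
        F[K, K] = 1.0                    # v_{K+1,t} = v_{K+1,t-1}
        lags = np.append(lags, 1)        # the constant state has lag 1
    return F, w, g, lags
```

The only difference between (12.7) and (12.12) is the extra row and column of \(\mathbf{F}\), the zero appended to \(\mathbf{g}\), and the unit lag appended to \(\boldsymbol{l}\), which is exactly what the flag toggles.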

### 12.1.4 Multiplicative ARIMA

In order to connect ARIMA with ETS, we also need to define the multiplicative version of the model. This implies that the error term \((1+\epsilon_t)\) is multiplied by the components of the model. The state space ARIMA in this case is formulated using logarithms in the following way: \[\begin{equation} \begin{aligned} &{y}_{t} = \exp \left( \sum_{j=1}^K \log v_{j,t-j} + \log(1+\epsilon_t) \right) \\ &\log v_{i,t} = \eta_i \sum_{j=1}^K \log v_{j,t-j} + (\eta_i + \theta_i) \log(1+\epsilon_t) \text{ for each } i=\{1, 2, \dots, K \} \end{aligned}. \tag{12.13} \end{equation}\] The model (12.13) can be written in the following more general form: \[\begin{equation} \begin{aligned} &{y}_{t} = \exp \left( \mathbf{w}' \log \mathbf{v}_{t-\boldsymbol{l}} + \log(1+\epsilon_t) \right) \\ &\log \mathbf{v}_{t} = \mathbf{F} \log \mathbf{v}_{t-\boldsymbol{l}} + \mathbf{g} \log(1+\epsilon_t) \end{aligned}, \tag{12.14} \end{equation}\] where \(\mathbf{w}\), \(\mathbf{F}\), \(\mathbf{v}_t\), \(\mathbf{g}\) and \(\boldsymbol{l}\) are defined as before for the additive ARIMA, e.g. in equation (12.12). This model is equivalent to applying ARIMA to log-transformed data, but at the same time it shares some similarities with the pure multiplicative ETS. The main advantage of this formulation is that the model has analytical solutions for the conditional moments and has well-defined h-steps-ahead distributions, which simplifies working with it, in contrast to the pure multiplicative ETS models.
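The equivalence with ARIMA on log-transformed data is easy to see in code: the recursion (12.13) is just the additive recursion run on \(\log y_t\), with the fitted value exponentiated back. A minimal sketch, assuming zero pre-sample log-states (i.e. initial states equal to one):

```python
import numpy as np

def logarima_fitted(y, eta, theta):
    """One-step fitted values of the multiplicative (log) ARIMA (12.13);
    a sketch with zero pre-sample log-states, i.e. initial states of one."""
    eta = np.asarray(eta, dtype=float)
    theta = np.asarray(theta, dtype=float)
    K = len(eta)
    T = len(y)
    logv = np.zeros((K, T + K))          # logv[i, t + K] stores log v_{i+1, t}
    fitted = np.zeros(T)
    for t in range(T):
        s = sum(logv[j, t + K - (j + 1)] for j in range(K))
        fitted[t] = np.exp(s)            # point fit before observing the error
        e = np.log(y[t]) - s             # log(1 + eps_t) from the measurement
        logv[:, t + K] = eta * s + (eta + theta) * e
    return fitted
```

Running this on \(y_t\) produces exactly the exponent of the additive recursion applied to \(\log y_t\), which is the equivalence mentioned above.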

In order to distinguish the additive ARIMA from the multiplicative one, we will use the notation “logARIMA” for the latter in this book, pointing out what such a model is equivalent to (applying ARIMA to the log-transformed data).