
## 6.4 Stability and forecastability conditions

Another important aspect of the pure additive model (6.2) is the restriction on the smoothing parameters, which relates to the stability and forecastability conditions of the model. **Stability** implies that the weights of observations decay over time, guaranteeing that newer observations have higher weights than older ones. If this condition holds, then the model behaves “steadily”, eventually forgetting the past values. **Forecastability** does not guarantee that the weights decay, but it guarantees that the initial value of the state vector has a constant impact on forecasts, i.e. its weight does not increase with the forecast horizon. An example of a non-stable but forecastable model is ETS(A,N,N) with \(\alpha=0\). In this case it reduces to the global level model, where the initial value impacts the forecast but its effect does not change with the forecast horizon.

In order to derive both conditions, we need to use the reduced form of ETS, obtained by inserting the measurement equation into the transition equation via \(\epsilon_t= {y}_{t} - \mathbf{w}' \mathbf{v}_{t-\mathbf{l}}\): \[\begin{equation} \begin{aligned} \mathbf{v}_{t} = &\mathbf{F} \mathbf{v}_{t-\mathbf{l}} + \mathbf{g} \left({y}_{t} - \mathbf{w}' \mathbf{v}_{t-\mathbf{l}} \right)\\ = & \left(\mathbf{F} - \mathbf{g}\mathbf{w}' \right) \mathbf{v}_{t-\mathbf{l}} + \mathbf{g} {y}_{t} \\ \end{aligned}. \tag{6.15} \end{equation}\] The matrix \(\mathbf{D}=\mathbf{F} - \mathbf{g}\mathbf{w}'\) is called the discount matrix, and it shows how the weights diminish over time. It is the main element of the model that determines whether the model is stable and/or forecastable.
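To make the discount matrix more concrete, here is a minimal numerical sketch. It uses the standard ETS(A,A,N) system matrices (level plus trend), which are an assumption here, since this section does not spell them out:

```python
import numpy as np

# Standard ETS(A,A,N) system matrices (level + trend); the smoothing
# parameters alpha and beta are chosen arbitrarily for illustration.
alpha, beta = 0.3, 0.1
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # transition matrix
w = np.array([[1.0],
              [1.0]])       # measurement vector
g = np.array([[alpha],
              [beta]])      # persistence vector

# Discount matrix D = F - g w'
D = F - g @ w.T
print(D)
# [[ 0.7  0.7]
#  [-0.1  0.9]]
```

Each row of \(\mathbf{D}\) shows how quickly the corresponding state (here level and trend) discounts past information for the chosen smoothing parameters.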

### 6.4.1 Example with ETS(A,N,N)

In order to better understand the general case discussed later, we can take the example of the ETS(A,N,N) model, for which \(\mathbf{F}=1\), \(\mathbf{w}=1\), \(\mathbf{g}=\alpha\), \(\mathbf{v}_t=l_t\), and \(\mathbf{l}=1\). Inserting these values into (6.15), we get:
\[\begin{equation}
\begin{aligned}
l_{t} = & \left(1 - \alpha \right) {l}_{t-1} + \alpha {y}_{t},
\end{aligned}.
\tag{6.16}
\end{equation}\]
which corresponds to the formula of Simple Exponential Smoothing (5.1). The discount matrix in this case reduces to the scalar \(\mathbf{D}=1-\alpha\). If we now substitute the level on the right-hand side of equation (6.16) by its previous values, we obtain the recursion that we have already discussed in a previous section, but now in terms of the “true” components and parameters:
\[\begin{equation}
\begin{aligned}
l_{t} = & {\alpha} \sum_{j=0}^{t-1} (1 -{\alpha})^j {y}_{t-j} + (1 -{\alpha})^t l_0,
\end{aligned}.
\tag{6.17}
\end{equation}\]
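The equivalence of the recursive form (6.16) and the weighted-sum form (6.17) can be verified numerically. The series below is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, l0 = 0.3, 10.0
y = 10 + rng.normal(size=50)  # synthetic series, for illustration only

# Recursive form (6.16): l_t = (1 - alpha) l_{t-1} + alpha y_t
l = l0
for yt in y:
    l = (1 - alpha) * l + alpha * yt

# Weighted-sum form (6.17): exponentially decaying weights on past y
t = len(y)
weights = alpha * (1 - alpha) ** np.arange(t)  # weight of y_{t-j} is alpha (1-alpha)^j
l_direct = weights @ y[::-1] + (1 - alpha) ** t * l0

print(np.isclose(l, l_direct))  # → True
```

The two forms coincide up to floating-point error, which is the whole point of the recursion: the level is just an exponentially weighted average of past observations plus a discounted initial value.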
The *stability* condition for ETS(A,N,N) is that the discount factor \(\mathbf{D}=1-\alpha\) is less than one in absolute value. This way the weights decay over time because of the exponentiation in (6.17). This condition is satisfied when \(\alpha \in(0, 2)\). As for the *forecastability* condition, in this case it implies that \(\lim\limits_{t\rightarrow\infty}(1 -{\alpha})^t = \text{const}\). This is achievable, for example, when \(\alpha=0\), but is violated when \(\alpha<0\) or \(\alpha\geq 2\). So, the bounds for the smoothing parameter of the ETS(A,N,N) model, guaranteeing the forecastability of the model (i.e. making it useful), are:
\[\begin{equation}
\alpha \in [0, 2) .
\tag{6.18}
\end{equation}\]
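These two conditions for ETS(A,N,N) can be encoded in a few lines. The function name below is hypothetical, chosen for this sketch:

```python
def ets_ann_conditions(alpha):
    """Classify ETS(A,N,N) by its smoothing parameter.

    The discount factor is the scalar D = 1 - alpha."""
    stable = abs(1 - alpha) < 1       # weights decay: alpha in (0, 2)
    forecastable = 0 <= alpha < 2     # bounds (6.18)
    return stable, forecastable

print(ets_ann_conditions(0.0))  # (False, True): global level, forecastable only
print(ets_ann_conditions(0.3))  # (True, True)
print(ets_ann_conditions(2.0))  # (False, False)
```

Note that \(\alpha=0\) is the boundary case discussed above: the model is forecastable but not stable, since the weight of the initial level never decays.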

### 6.4.2 Coming back to the general case

In general, the logic is the same as with ETS(A,N,N), but it relies on linear algebra. Due to our lagged formulation, the recursion becomes more complicated:
\[\begin{equation}
\begin{aligned}
\mathbf{v}_{t} = & \mathbf{D}_{m_1}^{\lceil\frac{t}{m_1}\rceil} \mathbf{v}_{0} + \sum_{j=0}^{\lceil\frac{t}{m_1}\rceil-1} \mathbf{D}_{m_1}^{j} \mathbf{g}_{m_1} y_{t - j m_1} + \\
& \mathbf{D}_{m_2}^{\lceil\frac{t}{m_2}\rceil} \mathbf{v}_{0} + \sum_{j=0}^{\lceil\frac{t}{m_2}\rceil-1} \mathbf{D}_{m_2}^j \mathbf{g}_{m_2} y_{t - j m_2} + \\
& \dots + \\
& \mathbf{D}_{m_d}^{\lceil\frac{t}{m_d}\rceil} \mathbf{v}_{0} + \sum_{j=0}^{\lceil\frac{t}{m_d}\rceil-1} \mathbf{D}_{m_d}^j \mathbf{g}_{m_d} y_{t - j m_d}
\end{aligned},
\tag{6.9}
\end{equation}\]
where \(\mathbf{D}_{m_i} = \mathbf{F}_{m_i} - \mathbf{g}_{m_i} \mathbf{w}_{m_i}'\) is the discount matrix for each lagged part of the model. The stability condition in this case is that the absolute values of all the non-zero eigenvalues of the discount matrices \(\mathbf{D}_{m_i}\) are strictly less than one. This condition can be checked at the model construction stage, ensuring that the selected parameters guarantee the stability of the model. As for forecastability, the idea is that the initial value of the state vector should not have an increasing impact on the last observed value, which is analysed by inserting (6.9) into the measurement equation:
\[\begin{equation}
\begin{aligned}
y_t = & \mathbf{w}_{m_1}' \mathbf{D}_{m_1}^{\lceil\frac{t-1}{m_1}\rceil} \mathbf{v}_{0} + \mathbf{w}_{m_1}' \sum_{j=0}^{\lceil\frac{t-1}{m_1}\rceil-1} \mathbf{D}_{m_1}^{j} \mathbf{g}_{m_1} y_{t-1 - j m_1} + \\
& \mathbf{w}_{m_2}' \mathbf{D}_{m_2}^{\lceil\frac{t-1}{m_2}\rceil} \mathbf{v}_{0} + \mathbf{w}_{m_2}' \sum_{j=0}^{\lceil\frac{t-1}{m_2}\rceil-1} \mathbf{D}_{m_2}^j \mathbf{g}_{m_2} y_{t-1 - j m_2} + \\
& \dots + \\
& \mathbf{w}_{m_d}' \mathbf{D}_{m_d}^{\lceil\frac{t-1}{m_d}\rceil} \mathbf{v}_{0} + \mathbf{w}_{m_d}' \sum_{j=0}^{\lceil\frac{t-1}{m_d}\rceil-1} \mathbf{D}_{m_d}^j \mathbf{g}_{m_d} y_{t-1 - j m_d}
\end{aligned},
\tag{6.19}
\end{equation}\]
and analysing the impact of \(\mathbf{v}_0\) on the actual value \(y_t\). In our case the **forecastability** condition implies that:
\[\begin{equation}
\lim\limits_{t\rightarrow\infty}\left(\mathbf{w}_{m_i}'\mathbf{D}_{m_i}^{\lceil\frac{t-1}{m_i}\rceil} \mathbf{v}_{0}\right) = \text{const for all } i=1, \dots, d.
\tag{6.20}
\end{equation}\]
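Both conditions can be checked numerically. The sketch below assumes the standard single-frequency ETS(A,A,N) matrices for the stability check (an assumption, since this section does not list them) and uses the scalar ETS(A,N,N) case to illustrate forecastability:

```python
import numpy as np

def is_stable(F, g, w):
    """Stability check: all non-zero eigenvalues of D = F - g w'
    must lie strictly inside the unit circle."""
    D = F - g @ w.T
    eigenvalues = np.linalg.eigvals(D)
    nonzero = eigenvalues[np.abs(eigenvalues) > 1e-12]
    return bool(np.all(np.abs(nonzero) < 1))

# Standard ETS(A,A,N) matrices (level + trend), illustrative parameters
F = np.array([[1.0, 1.0], [0.0, 1.0]])
w = np.array([[1.0], [1.0]])
g_ok = np.array([[0.3], [0.1]])   # admissible smoothing parameters
g_bad = np.array([[2.5], [0.1]])  # alpha outside the admissible region

print(is_stable(F, g_ok, w))   # → True
print(is_stable(F, g_bad, w))  # → False

# Forecastability for the scalar ETS(A,N,N) case: the impact of the
# initial level, (1 - alpha)^t l_0, must tend to a constant.
l0 = 10.0
impacts = {alpha: (1 - alpha) ** 100 * l0 for alpha in (0.0, 0.3, 2.5)}
print(impacts)  # constant for 0.0, ~0 for 0.3, explodes for 2.5
```

The eigenvalue check is exactly what can be performed at the model construction stage: for \(\alpha=2.5\) one eigenvalue of \(\mathbf{D}\) lies outside the unit circle, so the weights of old observations grow instead of decaying.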