5.2 Recursive relation

One useful representation of the pure additive model (5.5) is its recursive form, which can be used for further inference.

First, when we produce a forecast for \(h\) steps ahead, it is important to understand what the actual value \(h\) steps ahead might be, given the information available at observation \(t\) (i.e. the in-sample values). To get to it, we first consider the model for the actual value \(y_{t+h}\): \[\begin{equation} \begin{aligned} & {y}_{t+h} = \mathbf{w}^\prime \mathbf{v}_{t+h-\boldsymbol{l}} + \epsilon_{t+h} \\ & \mathbf{v}_{t+h} = \mathbf{F} \mathbf{v}_{t+h-\boldsymbol{l}} + \mathbf{g} \epsilon_{t+h} \end{aligned}, \tag{5.7} \end{equation}\] where \(\mathbf{v}_{t+h-\boldsymbol{l}}\) is the vector of previous states, given the vector of lags \(\boldsymbol{l}\). Next, we need to split the measurement and persistence vectors, together with the transition matrix, into parts corresponding to the same lags of components, leading to the following equation: \[\begin{equation} \begin{aligned} & {y}_{t+h} = (\mathbf{w}_{m_1}^\prime + \mathbf{w}_{m_2}^\prime + \dots + \mathbf{w}_{m_d}^\prime) \mathbf{v}_{t+h-\boldsymbol{l}} + \epsilon_{t+h} \\ & \mathbf{v}_{t+h} = (\mathbf{F}_{m_1} + \mathbf{F}_{m_2} + \dots + \mathbf{F}_{m_d}) \mathbf{v}_{t+h-\boldsymbol{l}} + (\mathbf{g}_{m_1} + \mathbf{g}_{m_2} + \dots + \mathbf{g}_{m_d}) \epsilon_{t+h} \end{aligned}, \tag{5.8} \end{equation}\] where \(m_1, m_2, \dots, m_d\) are the distinct lags of the model. For example, in the case of the ETS(A,A,A) model on quarterly data (periodicity equal to four), \(m_1=1\) and \(m_2=4\), leading to \(\mathbf{F}_{1} = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}\) and \(\mathbf{F}_{4} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}\), where the split of the transition matrix is done column-wise. This split of matrices and vectors into distinct submatrices and subvectors is needed in order to get the correct recursion and to obtain the correct \(h\)-steps-ahead conditional expectation and variance.
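The column-wise split above can be sketched numerically. The following is a minimal illustration in Python (the book's derivation itself is language-agnostic) for the ETS(A,A,A) transition matrix on quarterly data, where the state vector holds the level, trend, and seasonal components:

```python
import numpy as np

# Transition matrix of ETS(A,A,A); the state vector is (level, trend, seasonal)
F = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])

# Column-wise split by lag: the first two columns belong to the lag-1
# components (level and trend, m1 = 1); the third column belongs to the
# lag-4 seasonal component (m2 = 4 on quarterly data)
F1 = F.copy()
F1[:, 2] = 0.      # zero out the lag-4 column
F4 = F.copy()
F4[:, :2] = 0.     # zero out the lag-1 columns

print(F1)                            # matches F_1 in the text
print(F4)                            # matches F_4 in the text
print(np.array_equal(F1 + F4, F))    # the split is exact: True
```

Because each column of \(\mathbf{F}\) goes to exactly one submatrix, the submatrices always sum back to the original transition matrix.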

Iteratively substituting the states in the transition equation of (5.8) with their preceding values until we reach observation \(t\), we get: \[\begin{equation} \begin{aligned} \mathbf{v}_{t+h-\boldsymbol{l}} = & \mathbf{F}_{m_1}^{\lceil\frac{h}{m_1}\rceil-1} \mathbf{v}_{t} + \sum_{j=1}^{\lceil\frac{h}{m_1}\rceil-1} \mathbf{F}_{m_1}^{j-1} \mathbf{g}_{m_1} \epsilon_{t+m_1\lceil\frac{h}{m_1}\rceil-j} + \\ & \mathbf{F}_{m_2}^{\lceil\frac{h}{m_2}\rceil-1} \mathbf{v}_{t} + \sum_{j=1}^{\lceil\frac{h}{m_2}\rceil-1} \mathbf{F}_{m_2}^{j-1} \mathbf{g}_{m_2} \epsilon_{t+m_2\lceil\frac{h}{m_2}\rceil-j} + \\ & \dots + \\ & \mathbf{F}_{m_d}^{\lceil\frac{h}{m_d}\rceil-1} \mathbf{v}_{t} + \sum_{j=1}^{\lceil\frac{h}{m_d}\rceil-1} \mathbf{F}_{m_d}^{j-1} \mathbf{g}_{m_d} \epsilon_{t+m_d\lceil\frac{h}{m_d}\rceil-j} . \end{aligned} \tag{5.9} \end{equation}\] Inserting (5.9) into the measurement equation of (5.8), we get: \[\begin{equation} \begin{aligned} y_{t+h} = & \mathbf{w}_{m_1}^\prime \mathbf{F}_{m_1}^{\lceil\frac{h}{m_1}\rceil-1} \mathbf{v}_{t} + \mathbf{w}_{m_1}^\prime \sum_{j=1}^{\lceil\frac{h}{m_1}\rceil-1} \mathbf{F}_{m_1}^{j-1} \mathbf{g}_{m_1} \epsilon_{t+m_1\lceil\frac{h}{m_1}\rceil-j} + \\ & \mathbf{w}_{m_2}^\prime \mathbf{F}_{m_2}^{\lceil\frac{h}{m_2}\rceil-1} \mathbf{v}_{t} + \mathbf{w}_{m_2}^\prime \sum_{j=1}^{\lceil\frac{h}{m_2}\rceil-1} \mathbf{F}_{m_2}^{j-1} \mathbf{g}_{m_2} \epsilon_{t+m_2\lceil\frac{h}{m_2}\rceil-j} + \\ & \dots + \\ & \mathbf{w}_{m_d}^\prime \mathbf{F}_{m_d}^{\lceil\frac{h}{m_d}\rceil-1} \mathbf{v}_{t} + \mathbf{w}_{m_d}^\prime \sum_{j=1}^{\lceil\frac{h}{m_d}\rceil-1} \mathbf{F}_{m_d}^{j-1} \mathbf{g}_{m_d} \epsilon_{t+m_d\lceil\frac{h}{m_d}\rceil-j} + \\ & \epsilon_{t+h} . \end{aligned} \tag{5.10} \end{equation}\] This recursion shows how the actual value is formed from the states at observation \(t\), the transition matrix, the measurement and persistence vectors, and the error terms in the holdout sample.
The latter is typically unknown, but we can usually estimate its moments (e.g. \(\mathrm{E}(\epsilon_t)=0\) and \(\mathrm{V}(\epsilon_t)=\sigma^2\)), which helps in obtaining the conditional moments of the actual value \(y_{t+h}\).
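To illustrate how the error moments feed into the conditional moments, here is a minimal sketch for the simplest case where all components have lag 1 (so the sums in (5.10) collapse to a single term). Under \(\mathrm{E}(\epsilon_t)=0\) and \(\mathrm{V}(\epsilon_t)=\sigma^2\) with independent errors, the conditional mean is \(\mathbf{w}^\prime \mathbf{F}^{h-1} \mathbf{v}_t\) and the conditional variance is \(\sigma^2 \left(1 + \sum_{j=1}^{h-1} (\mathbf{w}^\prime \mathbf{F}^{j-1} \mathbf{g})^2\right)\). The ETS(A,A,N) parameter values below are purely illustrative:

```python
import numpy as np

def conditional_moments(w, F, g, v_t, sigma2, h):
    """Conditional mean and variance of y_{t+h} from the recursion,
    assuming all components have lag 1, E(eps)=0 and V(eps)=sigma2."""
    Fj = np.eye(len(v_t))     # F^0, raised step by step to F^{h-1}
    var = sigma2              # contribution of eps_{t+h} itself
    for j in range(1, h):
        var += sigma2 * float(w @ Fj @ g) ** 2  # (w'F^{j-1}g)^2 terms
        Fj = Fj @ F
    mean = float(w @ Fj @ v_t)                  # w'F^{h-1}v_t
    return mean, var

# Illustrative ETS(A,A,N): level 100, trend 2, alpha=0.3, beta=0.1
w = np.array([1., 1.])
F = np.array([[1., 1.],
              [0., 1.]])
g = np.array([0.3, 0.1])
mean, var = conditional_moments(w, F, g, np.array([100., 2.]), sigma2=1.0, h=3)
print(mean, var)   # 106.0 1.41
```

For ETS(A,A,N) this reproduces the familiar results: the mean is \(l_t + h b_t\), and the variance grows with the squared multistep persistence terms \((\alpha + (j\!-\!1)\beta)^2\).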

Substituting the specific values of \(m_1, m_2, \dots, m_d\) in (5.10) simplifies the equation and makes it easier to understand. For example, ETS(A,N,N) has only one lag, \(m_1=1\), so the recursion (5.10) simplifies to: \[\begin{equation} y_{t+h} = \mathbf{w}_{1}^\prime \mathbf{F}_{1}^{h-1} \mathbf{v}_{t} + \mathbf{w}_{1}^\prime \sum_{j=1}^{h-1} \mathbf{F}_{1}^{j-1} \mathbf{g}_{1} \epsilon_{t+h-j} + \epsilon_{t+h} , \tag{5.11} \end{equation}\] which is the recursion obtained by Hyndman et al. (2008) on page 103.
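The recursion (5.11) can be checked numerically. In ETS(A,N,N) the measurement and persistence "vectors" and the transition "matrix" are scalars (\(\mathbf{w}_1 = 1\), \(\mathbf{F}_1 = 1\), \(\mathbf{g}_1 = \alpha\)), so (5.11) reduces to \(y_{t+h} = l_t + \alpha \sum_{j=1}^{h-1} \epsilon_{t+h-j} + \epsilon_{t+h}\). The sketch below, with illustrative values of \(\alpha\) and \(l_t\), confirms that this closed form coincides with iterating the state-space equations (5.7) step by step for the same sequence of errors:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, l_t, h = 0.3, 50.0, 6
eps = rng.normal(0., 1., h)          # eps_{t+1}, ..., eps_{t+h}

# Direct iteration of the ETS(A,N,N) state-space equations:
# l_{t+k} = l_{t+k-1} + alpha * eps_{t+k};  y_{t+h} = l_{t+h-1} + eps_{t+h}
level = l_t
for j in range(h - 1):
    level = level + alpha * eps[j]
y_direct = level + eps[h - 1]

# Closed-form recursion (5.11) with w=1, F=1, g=alpha:
y_recursive = l_t + alpha * eps[:h - 1].sum() + eps[h - 1]

print(np.isclose(y_direct, y_recursive))  # True
```

The agreement holds for any error sequence, since both expressions unravel the same substitution chain.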


• Hyndman, R.J., Koehler, A.B., Ord, J.K., Snyder, R.D., 2008. Forecasting with Exponential Smoothing: The State Space Approach. Springer Berlin Heidelberg.