
6.3 Conditional expectation and variance

Now, why is the recursion (6.8) important? Because we can take the expectation and variance of (6.8) conditional on the values of the state vector \(\mathbf{v}_{t}\) at observation \(t\) (assuming that the error term is homoscedastic, uncorrelated, and has zero expectation) in order to get: \[\begin{equation} \begin{aligned} \mu_{y,t+h} = \text{E}(y_{t+h}|t) = & \sum_{i=1}^d \left(\mathbf{w}_{m_i}' \mathbf{F}_{m_i}^{\lceil\frac{h}{m_i}\rceil-1} \right) \mathbf{v}_{t} \\ \text{V}(y_{t+h}|t) = & \left( \sum_{i=1}^d \left(\mathbf{w}_{m_i}' \sum_{j=1}^{\lceil\frac{h}{m_i}\rceil-1} \mathbf{F}_{m_i}^{j-1} \mathbf{g}_{m_i} \mathbf{g}'_{m_i} (\mathbf{F}_{m_i}')^{j-1} \mathbf{w}_{m_i} \right) + 1 \right) \sigma^2 \end{aligned}. \tag{6.10} \end{equation}\] These two formulae are cumbersome, but they give analytical solutions for the two statistics. Having obtained both of them, we can construct prediction intervals, assuming, for example, that the error term follows the normal distribution: \[\begin{equation} y_{t+h} \in \text{E}(y_{t+h}|t) \pm z_{\frac{\alpha}{2}} \sqrt{\text{V}(y_{t+h}|t)} , \tag{6.11} \end{equation}\]
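To make (6.10) more tangible, here is a minimal Python sketch of the two formulae, assuming each of the \(d\) components is supplied as its own tuple of measurement vector \(\mathbf{w}\), transition matrix \(\mathbf{F}\), persistence vector \(\mathbf{g}\), state vector, and seasonal lag \(m\). The function name `conditional_moments` and the tuple layout are illustrative choices, not from the book:

```python
import numpy as np
from math import ceil

def conditional_moments(components, h, sigma2):
    """h-step-ahead conditional mean and variance, following Eq. (6.10).

    components: list of (w, F, g, v, m) tuples, one per model component,
    where w, g, v are 1-D arrays, F is a square matrix and m is the lag.
    sigma2: variance of the error term.
    """
    mu, var_sum = 0.0, 0.0
    for w, F, g, v, m in components:
        steps = ceil(h / m) - 1
        # Mean: w' F^(ceil(h/m) - 1) v, summed over components
        mu += w @ np.linalg.matrix_power(F, steps) @ v
        # Variance: accumulate w' F^(j-1) g g' (F')^(j-1) w over j
        for j in range(1, steps + 1):
            Fj = np.linalg.matrix_power(F, j - 1)
            var_sum += w @ Fj @ np.outer(g, g) @ Fj.T @ w
    return mu, (var_sum + 1) * sigma2
```

For a single non-seasonal component with \(\mathbf{w}=\mathbf{F}=1\) and \(\mathbf{g}=\alpha\), this reduces to the ETS(A,N,N) case derived later in this section.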

where \(z_{\frac{\alpha}{2}}\) is the quantile of the standardised normal distribution for the level \(\alpha\).
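The interval (6.11) can be computed directly once the conditional mean and variance are known. Below is a small sketch using only the Python standard library; the function name `prediction_interval` is illustrative:

```python
from statistics import NormalDist

def prediction_interval(mu, variance, level=0.95):
    """Two-sided normal prediction interval, following Eq. (6.11)."""
    # z-quantile of the standard normal for the chosen confidence level
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    half_width = z * variance ** 0.5
    return mu - half_width, mu + half_width
```

For a 95% interval, `z` is the familiar 1.96, so the bounds are the conditional mean plus/minus 1.96 conditional standard deviations.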

6.3.1 Example with ETS(A,N,N)

For example, for the ETS(A,N,N) model discussed above, we get: \[\begin{equation} \begin{aligned} \text{E}(y_{t+h}|t) = & \mathbf{w}_{1}' \mathbf{F}_{1}^{h-1} \mathbf{v}_{t} \\ \text{V}(y_{t+h}|t) = & \left(\mathbf{w}_{1}' \sum_{j=1}^{h-1} \mathbf{F}_{1}^{j-1} \mathbf{g}_{1} \mathbf{g}'_{1} (\mathbf{F}_{1}')^{j-1} \mathbf{w}_{1} + 1 \right) \sigma^2 \end{aligned}, \tag{6.12} \end{equation}\] or, substituting \(\mathbf{F}=1\), \(\mathbf{w}=1\), \(\mathbf{g}=\alpha\), and \(\mathbf{v}_t=l_t\): \[\begin{equation} \begin{aligned} \mu_{y,t+h} = \text{E}(y_{t+h}|t) = & l_{t} \\ \text{V}(y_{t+h}|t) = & \left((h-1) \alpha^2 + 1 \right) \sigma^2 \end{aligned}, \tag{6.13} \end{equation}\]

which coincides with the conditional expectation and variance given in the ETS Taxonomy section and in the Hyndman et al. (2008) textbook.
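The equivalence of the matrix form (6.12) and the closed form (6.13) can be checked numerically. The sketch below, with illustrative parameter values, evaluates both for the ETS(A,N,N) case:

```python
import numpy as np

# Illustrative values for the smoothing parameter, error variance,
# last level and forecast horizon (not taken from the book)
alpha, sigma2, l_t, h = 0.3, 1.5, 50.0, 10

# Closed form (6.13)
mu = l_t
var_closed = ((h - 1) * alpha**2 + 1) * sigma2

# Matrix form (6.12), with F = w = 1 and g = alpha
w = F = np.array([[1.0]])
g = np.array([[alpha]])
var_matrix = (sum(
    (w @ np.linalg.matrix_power(F, j - 1) @ g @ g.T
       @ np.linalg.matrix_power(F, j - 1).T @ w.T).item()
    for j in range(1, h)
) + 1) * sigma2
```

Since \(\mathbf{F}=1\), every power of the transition matrix is 1 and each term of the sum in (6.12) collapses to \(\alpha^2\), which is exactly why the variance grows linearly in \(h\) in (6.13).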


Hyndman, Rob J., Anne B. Koehler, J. Keith Ord, and Ralph D. Snyder. 2008. Forecasting with Exponential Smoothing. Springer Berlin Heidelberg.