6.3 Conditional expectation and variance
Now, why is the recursion (6.9) important? Because we can take the expectation and variance of (6.9), conditional on the state vector \(\mathbf{v}_{t}\) at observation \(t\) (assuming that the error term is homoscedastic, uncorrelated, and has zero expectation), to obtain: \[\begin{equation} \begin{aligned} \mu_{y,t+h} = \text{E}(y_{t+h}|t) = & \sum_{i=1}^d \left(\mathbf{w}_{m_i}' \mathbf{F}_{m_i}^{\lceil\frac{h}{m_i}\rceil-1} \right) \mathbf{v}_{t} \\ \sigma^2_{h} = \text{V}(y_{t+h}|t) = & \left( \sum_{i=1}^d \left(\mathbf{w}_{m_i}' \sum_{j=1}^{\lceil\frac{h}{m_i}\rceil-1} \mathbf{F}_{m_i}^{j-1} \mathbf{g}_{m_i} \mathbf{g}'_{m_i} (\mathbf{F}_{m_i}')^{j-1} \mathbf{w}_{m_i} \right) + 1 \right) \sigma^2 \end{aligned}, \tag{6.11} \end{equation}\] where \(\sigma^2\) is the variance of the error term. These two formulae are cumbersome, but they provide analytical solutions for the two statistics. Having obtained both of them, we can construct prediction intervals, assuming, for example, that the error term follows the normal distribution: \[\begin{equation} y_{t+h} \in \text{E}(y_{t+h}|t) \pm z_{\frac{\alpha}{2}} \sqrt{\text{V}(y_{t+h}|t)} , \tag{6.12} \end{equation}\] where \(z_{\frac{\alpha}{2}}\) is the quantile of the standard normal distribution for the level \(\alpha\). For other distributions, in order to obtain the conditional h-steps-ahead scale parameter, we can first calculate the variance using (6.11) and then use the relation between the scale and the variance for the specific distribution to get the necessary value.
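To make the formulae above more concrete, here is a minimal sketch of (6.11) and (6.12) for the single-frequency case (\(d=1\)); the function and variable names are illustrative and not taken from any particular package:

```python
# A minimal sketch of (6.11) and (6.12) for a single frequency (d = 1);
# names here are illustrative, not from any specific package.
import math
import numpy as np

def conditional_moments(w, F, g, v, sigma2, h):
    """Conditional mean and variance of y_{t+h} given the state v at time t."""
    mu = w @ np.linalg.matrix_power(F, h - 1) @ v
    # Sum over j = 1..h-1 of (w' F^{j-1} g)^2, as in (6.11) with d = 1
    s = sum((w @ np.linalg.matrix_power(F, j - 1) @ g) ** 2 for j in range(1, h))
    return float(mu), float((s + 1.0) * sigma2)

# Illustration with ETS(A,A,N): state = (level, trend)',
# w = (1, 1)', F = [[1, 1], [0, 1]], g = (alpha, beta)'
w = np.array([1.0, 1.0])
F = np.array([[1.0, 1.0], [0.0, 1.0]])
g = np.array([0.3, 0.1])          # alpha, beta
v = np.array([100.0, 2.0])        # l_t, b_t
mu, var = conditional_moments(w, F, g, v, sigma2=4.0, h=3)
# 95% normal prediction interval (6.12)
z = 1.959964                      # quantile of N(0, 1) for alpha = 0.05
lower, upper = mu - z * math.sqrt(var), mu + z * math.sqrt(var)
```

For models with several seasonal frequencies, the same computation would be repeated per frequency and combined as in (6.11).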
6.3.1 Example with ETS(A,N,N)
For example, for the ETS(A,N,N) model discussed above, we get: \[\begin{equation} \begin{aligned} \text{E}(y_{t+h}|t) = & \mathbf{w}_{1}' \mathbf{F}_{1}^{h-1} \mathbf{v}_{t} \\ \text{V}(y_{t+h}|t) = & \left(\mathbf{w}_{1}' \sum_{j=1}^{h-1} \mathbf{F}_{1}^{j-1} \mathbf{g}_{1} \mathbf{g}'_{1} (\mathbf{F}_{1}')^{j-1} \mathbf{w}_{1} + 1 \right) \sigma^2 \end{aligned}, \tag{6.13} \end{equation}\] or, by substituting \(\mathbf{F}=1\), \(\mathbf{w}=1\), \(\mathbf{g}=\alpha\), and \(\mathbf{v}_t=l_t\): \[\begin{equation} \begin{aligned} \mu_{y,t+h} = & l_{t} \\ \sigma^2_{h} = & \left((h-1) \alpha^2 + 1 \right) \sigma^2 \end{aligned}, \tag{6.14} \end{equation}\] which is the same conditional expectation and variance as in the ETS Taxonomy section and in the Hyndman et al. (2008) textbook.
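As a quick sanity check of (6.14), we can simulate the ETS(A,N,N) recursion \(l_t = l_{t-1} + \alpha \epsilon_t\), \(y_t = l_{t-1} + \epsilon_t\) and compare the sample moments of \(y_{t+h}\) with the closed-form mean and variance. The sketch below uses only the standard library, and the parameter values are arbitrary:

```python
# Monte Carlo check (illustrative) that E(y_{t+h}|t) = l_t and
# V(y_{t+h}|t) = ((h-1) alpha^2 + 1) sigma^2 for ETS(A,N,N).
import random

random.seed(42)
alpha, l_t, sigma, h, n_sim = 0.3, 100.0, 2.0, 5, 200_000

ys = []
for _ in range(n_sim):
    level = l_t
    for _ in range(h - 1):                   # update the level h-1 times
        level += alpha * random.gauss(0.0, sigma)
    ys.append(level + random.gauss(0.0, sigma))  # observation at step h

mean_hat = sum(ys) / n_sim
var_hat = sum((y - mean_hat) ** 2 for y in ys) / (n_sim - 1)
# Theory from (6.14): mean = 100, variance = ((5-1) * 0.3^2 + 1) * 4 = 5.44
```

With these settings the sample mean and variance should land close to \(l_t = 100\) and \(((h-1)\alpha^2 + 1)\sigma^2 = 5.44\), up to simulation noise.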
References
Hyndman, Rob J., Anne B. Koehler, J. Keith Ord, and Ralph D. Snyder. 2008. Forecasting with Exponential Smoothing. Springer Berlin Heidelberg.