## 5.3 Conditional expectation and variance

Now, why is the recursion (5.9) important? This is because we can take the expectation and variance of (5.9), conditional on the values of the state vector $$\mathbf{v}_{t}$$ at observation $$t$$ (assuming that the error term is homoscedastic, uncorrelated, and has zero expectation), in order to get:

$$\begin{aligned} \mu_{y,t+h} = \text{E}(y_{t+h}|t) = & \sum_{i=1}^d \left(\mathbf{w}_{m_i}^\prime \mathbf{F}_{m_i}^{\lceil\frac{h}{m_i}\rceil-1} \right) \mathbf{v}_{t} \\ \sigma^2_{h} = \text{V}(y_{t+h}|t) = & \left( \sum_{i=1}^d \left(\mathbf{w}_{m_i}^\prime \sum_{j=1}^{\lceil\frac{h}{m_i}\rceil-1} \mathbf{F}_{m_i}^{j-1} \mathbf{g}_{m_i} \mathbf{g}^\prime_{m_i} (\mathbf{F}_{m_i}^\prime)^{j-1} \mathbf{w}_{m_i} \right) + 1 \right) \sigma^2 \end{aligned} \tag{5.11}$$

where $$\sigma^2$$ is the variance of the error term. These two formulae are cumbersome, but they give the analytical solutions for the two moments. Having obtained both of them, we can construct prediction intervals, assuming, for example, that the error term follows the normal distribution (see Section 18.3 for details):

$$y_{t+h} \in \left( \text{E}(y_{t+h}|t) + z_{\frac{\alpha}{2}} \sqrt{\text{V}(y_{t+h}|t)}, \text{E}(y_{t+h}|t) + z_{1-\frac{\alpha}{2}} \sqrt{\text{V}(y_{t+h}|t)} \right), \tag{5.12}$$

where $$z_{\frac{\alpha}{2}}$$ is the $$\frac{\alpha}{2}$$ quantile of the standardised normal distribution. When it comes to other distributions (see Section 5.5), in order to get the conditional h-steps-ahead scale parameter, we can first calculate the variance using (5.11) and then use the relation between the scale and the variance for the specific distribution (see the discussion in Chapter 3 of I. Svetunkov, 2022a) to obtain the necessary value.
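To make these formulae more tangible, here is a minimal R sketch that evaluates (5.11) and (5.12) directly from the system matrices. The function `adamMoments` and all of its argument names are illustrative assumptions rather than part of any package: it expects one measurement vector, transition matrix, and persistence vector per frequency $$m_i$$ (supplied as lists), together with the last state vector and an estimate of $$\sigma^2$$.

```r
# Sketch of the conditional moments (5.11) and the normal interval (5.12).
# Each element of wList, FList, gList, vList corresponds to one frequency
# m[i]; every F must be supplied as a matrix (use matrix() for 1x1 cases).
adamMoments <- function(h, m, wList, FList, gList, vList, sigma2, level = 0.95){
  mu <- 0
  varSum <- 0
  for(i in seq_along(m)){
    hm <- ceiling(h / m[i])
    # Conditional mean term: w' F^{ceiling(h/m_i)-1} v_t
    Fpow <- diag(nrow(FList[[i]]))
    if(hm > 1){
      for(j in 1:(hm - 1)){
        Fpow <- Fpow %*% FList[[i]]
      }
    }
    mu <- mu + as.numeric(t(wList[[i]]) %*% Fpow %*% vList[[i]])
    # Variance term: sum over j of w' F^{j-1} g g' (F')^{j-1} w
    if(hm > 1){
      Fpow <- diag(nrow(FList[[i]]))
      for(j in 1:(hm - 1)){
        varSum <- varSum +
          as.numeric(t(wList[[i]]) %*% Fpow %*% gList[[i]] %*%
                     t(gList[[i]]) %*% t(Fpow) %*% wList[[i]])
        Fpow <- Fpow %*% FList[[i]]
      }
    }
  }
  variance <- (varSum + 1) * sigma2
  # Normal prediction interval (5.12)
  alpha <- 1 - level
  bounds <- mu + qnorm(c(alpha / 2, 1 - alpha / 2)) * sqrt(variance)
  list(mean = mu, variance = variance, lower = bounds[1], upper = bounds[2])
}

# Example call for a single non-seasonal component (all numbers illustrative)
adamMoments(h = 5, m = 1,
            wList = list(1), FList = list(matrix(1)), gList = list(0.3),
            vList = list(100), sigma2 = 4)
```

With these single-component scalar inputs, the function reduces to the ETS(A,N,N) case discussed in the next subsection.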

### 5.3.1 Example with ETS(A,N,N)

For example, for the ETS(A,N,N) model discussed above, we get:

$$\begin{aligned} \text{E}(y_{t+h}|t) = & \mathbf{w}_{1}^\prime \mathbf{F}_{1}^{h-1} \mathbf{v}_{t} \\ \text{V}(y_{t+h}|t) = & \left(\mathbf{w}_{1}^\prime \sum_{j=1}^{h-1} \mathbf{F}_{1}^{j-1} \mathbf{g}_{1} \mathbf{g}^\prime_{1} (\mathbf{F}_{1}^\prime)^{j-1} \mathbf{w}_{1} + 1 \right) \sigma^2 \end{aligned} \tag{5.13}$$

or, by substituting $$\mathbf{F}_{1}=1$$, $$\mathbf{w}_{1}=1$$, $$\mathbf{g}_{1}=\alpha$$, and $$\mathbf{v}_t=l_t$$:

$$\begin{aligned} \mu_{y,t+h} = & l_{t} \\ \sigma^2_{h} = & \left((h-1) \alpha^2 + 1 \right) \sigma^2 \end{aligned} \tag{5.14}$$

which are the same conditional expectation and variance as in Section 4.2 and in the Hyndman et al. (2008) monograph.
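As a sanity check, the analytical moments in (5.14) can be compared with simulated ones. The sketch below (with arbitrarily chosen parameter values) runs the ETS(A,N,N) recursion $$h$$ steps ahead many times and compares the sample mean and variance of $$y_{t+h}$$ with $$l_t$$ and $$\left((h-1)\alpha^2+1\right)\sigma^2$$.

```r
# Monte Carlo check of (5.14). All parameter values are illustrative
# assumptions, not estimates from any real data.
set.seed(41)
alpha <- 0.3; lt <- 100; sigma <- 2; h <- 5; nsim <- 10000
yh <- replicate(nsim, {
  l <- lt
  # Update the level h-1 times: l_{t+j} = l_{t+j-1} + alpha * e_{t+j}
  for(j in 1:(h - 1)){
    l <- l + alpha * rnorm(1, 0, sigma)
  }
  # The final observation: y_{t+h} = l_{t+h-1} + e_{t+h}
  l + rnorm(1, 0, sigma)
})
c(simulatedMean = mean(yh), analyticalMean = lt,
  simulatedVariance = var(yh),
  analyticalVariance = ((h - 1) * alpha^2 + 1) * sigma^2)
# The simulated mean and variance should be close to 100 and 5.44
```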

### References

• Hyndman, R.J., Koehler, A.B., Ord, J.K., Snyder, R.D., 2008. Forecasting with Exponential Smoothing: The State Space Approach. Springer Berlin Heidelberg.

• Svetunkov, I., 2022a. Statistics for business analytics. https://openforecast.org/sba/ (version: 31.03.2022)