## 6.3 The problem with moments in pure multiplicative ETS

The recursion (6.7) obtained in the previous subsection shows how the previous values influence the logarithms of states. While it is possible to calculate the expectation of the logarithm of the variable $$y_{t+h}$$, in general this does not allow deriving the expectation of the variable in the original scale. This is because of the convolution of the terms $$\log(\mathbf{1}_k + \mathbf{g}_{m_i} \epsilon_{t+j})$$ for different $$j$$. To better understand this issue, consider the persistence part of the equation for the ETS(M,N,N) model: $\begin{equation} \log(1+\alpha\epsilon_t) = \log(1-\alpha + \alpha(1+\epsilon_t)). \tag{6.10} \end{equation}$ Whatever we assume about the distribution of the variable $$(1+\epsilon_t)$$, the distribution of (6.10) will be more complicated. For example, if we assume that $$(1+\epsilon_t)\sim\mathrm{log}\mathcal{N}(0,\sigma^2)$$, then the distribution of (6.10) is something like an exponent of the three-parameter Log-Normal distribution (Sangal and Biswas, 1970). The convolution of (6.10) for different $$t$$ does not follow any known distribution, so it is not possible to calculate the conditional expectation and variance based on (6.7). Similar issues arise if we assume any other distribution. The problem is worsened in the case of models with multiplicative trend and/or multiplicative seasonality, because then the recursion (6.7) contains several errors on the same observation (e.g. $$\log(1+\alpha\epsilon_t)$$ and $$\log(1+\beta\epsilon_t)$$).
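The effect described above can be seen numerically. The following sketch (a hypothetical Python illustration; the values of $$\alpha$$ and $$\sigma$$ are arbitrary) simulates one persistence term under the log-normal assumption: $$\log(1+\epsilon_t)$$ is exactly normal by construction, while $$\log(1+\alpha\epsilon_t)$$ is visibly skewed, i.e. no longer normal:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, sigma = 0.3, 0.1   # illustrative smoothing parameter and error scale
n = 500_000

# Assume (1 + eps_t) ~ logN(0, sigma^2), so log(1 + eps_t) is exactly normal
eps = np.exp(sigma * rng.standard_normal(n)) - 1.0

# One persistence term: log(1 + alpha*eps_t) = log(1 - alpha + alpha*(1 + eps_t))
term = np.log1p(alpha * eps)

def skewness(x):
    """Sample skewness: third central moment over the cubed standard deviation."""
    return np.mean((x - x.mean()) ** 3) / np.std(x) ** 3

# log(1 + eps) has zero skewness (normal), but log(1 + alpha*eps) does not
print(skewness(np.log1p(eps)), skewness(term))
```

Because the term is no longer normal (or of any other standard form), sums of such terms over $$t$$ do not belong to a known family, which is the root of the problem with the convolution in (6.7).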

The only way to derive the conditional expectation and variance for the pure multiplicative models is to use the formulae from Tables 4.1 and 4.2 in Section 4.2 and derive the values in the original scale manually. This works well only for the ETS(M,N,N) model, for which it is possible to take the conditional expectation and variance of the recursion (6.9) to obtain: \begin{equation} \begin{aligned} \mu_{y,t+h} = \mathrm{E}(y_{t+h}|t) = & l_{t} \\ \mathrm{V}(y_{t+h}|t) = & l_{t}^2 \left( \left(1+ \alpha^2 \sigma^2 \right)^{h-1} (1 + \sigma^2) -1 \right), \end{aligned} \tag{6.11} \end{equation} where $$\sigma^2$$ is the variance of the error term. For the other models, the conditional moments do not have general closed forms because of the products of terms such as $$(1+\alpha\epsilon_t)$$, $$(1+\beta\epsilon_t)$$ and $$(1+\gamma\epsilon_t)$$, which involve the same error term. It is still possible to derive the moments for special cases of $$h$$, but this is a tedious process. To see why, here is what the recursion looks like for the ETS(M,Md,M) model: \begin{equation} \begin{aligned} & y_{t+h} = l_{t+h-1} b_{t+h-1}^\phi s_{t+h-m} \left(1 + \epsilon_{t+h} \right) = \\ & l_{t} b_{t}^{\sum_{j=1}^h{\phi^j}} s_{t+h-m\lceil\frac{h}{m}\rceil} \prod_{j=1}^{h-1} \left( (1 + \alpha \epsilon_{t+j}) \prod_{i=1}^{j} (1 + \beta \epsilon_{t+i})^{\phi^{j-i}} \right) \prod_{j=1}^{\lceil\frac{h}{m}\rceil} \left(1 + \gamma \epsilon_{t+j}\right) \left(1 + \epsilon_{t+h} \right) . \end{aligned} \tag{6.12} \end{equation} The conditional expectation of the recursion (6.12) does not have a simple form because of the difficulties in calculating the expectation of $$(1 + \alpha \epsilon_{t+j})(1 + \beta \epsilon_{t+i})^{\phi^{j-i}}(1 + \gamma \epsilon_{t+j})$$.
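The moments in (6.11) can be verified by simulating the ETS(M,N,N) recursion directly (a minimal sketch; the Gaussian error term and all parameter values are illustrative assumptions, not part of the model definition):

```python
import numpy as np

rng = np.random.default_rng(0)
l_t, alpha, sigma, h = 100.0, 0.3, 0.05, 5   # illustrative values
n = 200_000                                   # number of simulated paths

# epsilon_t with zero mean and variance sigma^2 (normal here for illustration)
eps = sigma * rng.standard_normal((n, h))

# ETS(M,N,N): y_{t+h} = l_t * prod_{j=1}^{h-1}(1 + alpha*eps_{t+j}) * (1 + eps_{t+h})
paths = l_t * np.prod(1 + alpha * eps[:, :-1], axis=1) * (1 + eps[:, -1])

# Theoretical moments from (6.11)
mu_theory = l_t
var_theory = l_t**2 * ((1 + alpha**2 * sigma**2) ** (h - 1) * (1 + sigma**2) - 1)

print(paths.mean(), mu_theory)   # the two should be close
print(paths.var(), var_theory)
```

The simulated mean and variance agree with (6.11) up to sampling error, which is exactly what is not achievable in closed form once trend or seasonality enters the model.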
In the simple case of $$h=2$$ and $$m>h$$, the conditional expectation based on (6.12) simplifies to: $\begin{equation} \mu_{y,t+2} = l_{t} b_{t}^{\phi+\phi^2} \left(1 + \alpha \beta \sigma^2 \right), \tag{6.13} \end{equation}$ introducing the second moment, the variance of the error term $$\sigma^2$$. The case of $$h=3$$ introduces the third moment, $$h=4$$ the fourth, and so on. This is why there are no closed forms for the conditional moments of the pure multiplicative models with trend and/or seasonality. In some special cases, when the smoothing parameters and the variance of the error term are all small, it is possible to use approximate formulae for some of the multiplicative models; these are discussed in Chapter 6 of Hyndman et al. (2008). In the special case when all smoothing parameters are equal to zero, or when $$h=1$$, the conditional expectation coincides with the point forecast from Tables 4.1 and 4.2 in Section 4.2. But in general, the best that can be done is to simulate a large number of possible future paths (using the formulae from the tables mentioned above) and then calculate the mean and variance based on them. Finally, it can be shown for the pure multiplicative models (Svetunkov and Boylan, 2022) that: $\begin{equation} \hat{y}_{t+h} \leq \check{\mu}_{t+h} \leq \mu_{y,t+h} , \tag{6.14} \end{equation}$ where $$\mu_{y,t+h}$$ is the conditional $$h$$ steps ahead expectation, $$\check{\mu}_{t+h}$$ is the conditional $$h$$ steps ahead geometric expectation (the exponentiated expectation in logarithms), and $$\hat{y}_{t+h}$$ is the point forecast.
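The simulation-based approach, together with the ordering in (6.14), can be sketched for a two-steps-ahead ETS(M,M,N) forecast (a special case of (6.12) with $$\phi=1$$ and no seasonality; the parameter values and the assumption $$(1+\epsilon_t)\sim\mathrm{log}\mathcal{N}(0,\sigma^2)$$, which fixes the geometric mean of $$(1+\epsilon_t)$$ at one, are choices made purely for this illustration):

```python
import numpy as np

rng = np.random.default_rng(123)
l_t, b_t = 100.0, 1.02          # last level and multiplicative trend (illustrative)
alpha, beta, sigma = 0.3, 0.2, 0.2
n = 200_000                      # number of simulated future paths

# Assume (1 + eps_t) ~ logN(0, sigma^2), so its geometric mean equals one
e1 = np.exp(sigma * rng.standard_normal(n))   # 1 + eps_{t+1}
e2 = np.exp(sigma * rng.standard_normal(n))   # 1 + eps_{t+2}

# Two steps ahead for ETS(M,M,N):
# y_{t+2} = l_t * b_t^2 * (1 + alpha*eps_{t+1}) * (1 + beta*eps_{t+1}) * (1 + eps_{t+2})
y = l_t * b_t**2 * (1 + alpha * (e1 - 1)) * (1 + beta * (e1 - 1)) * e2

y_hat = l_t * b_t**2                  # point forecast (errors set to zero)
mu_geo = np.exp(np.mean(np.log(y)))   # geometric (log-scale) expectation
mu = y.mean()                         # arithmetic conditional expectation

print(y_hat, mu_geo, mu)              # ordering as in (6.14): y_hat <= mu_geo <= mu
```

Note that the same errors $$\epsilon_{t+1}$$ enter both the level and the trend updates, which is the cross-product that makes closed forms unavailable; the simulated mean and the gap between the three quantities come out of the simulation for free.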

### References

• Hyndman, R.J., Koehler, A.B., Ord, J.K., Snyder, R.D., 2008. Forecasting with Exponential Smoothing. Springer Berlin Heidelberg.
• Sangal, B.P., Biswas, A.K., 1970. The 3-Parameter Lognormal Distribution and Its Applications in Hydrology. Water Resources Research. 6, 505–515. https://doi.org/10.1029/WR006i002p00505
• Svetunkov, I., Boylan, J.E., 2022. Dealing with Positive Data Using Pure Multiplicative ETS Models.