
## 6.2 The problem with moments in pure multiplicative ETS

The recursion (6.7) obtained in the previous subsection shows how the logarithms of states are influenced by the previous values. While it is possible to calculate the expectation of the logarithm of the variable \(y_{t+h}\), in general this does not allow deriving the expectation of the variable in the original scale. This is because of the convolution of the terms \(\log(\mathbf{1}_k + \mathbf{g} \epsilon_{t+j})\) for different \(j\). To understand this issue better, consider this element for the ETS(M,N,N) model: \[\begin{equation} \log(1+\alpha\epsilon_t) = \log(1-\alpha + \alpha(1+\epsilon_t)). \tag{6.10} \end{equation}\] Whatever we assume about the distribution of the variable \((1+\epsilon_t)\), the distribution of (6.10) will be more complicated than necessary. For example, if we assume that \((1+\epsilon_t)\sim\mathrm{log}\mathcal{N}(0,\sigma^2)\), then (6.10) follows something resembling a three-parameter Log-Normal distribution (Sangal and Biswas, 1970). The convolution of (6.10) for different \(t\) does not follow any known distribution, so it is not possible to calculate the conditional expectation and variance based on (6.7). Similar issues arise if we assume any other distribution. The problem worsens in the case of models with multiplicative trend and/or multiplicative seasonality, because then the recursion (6.7) contains several errors on the same observation (e.g. \(\log(1+\alpha\epsilon_t)\) and \(\log(1+\beta\epsilon_t)\)).
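To make the inconvenience of (6.10) tangible, the following sketch (with illustrative values of \(\alpha\) and \(\sigma\) that are not from the text) simulates \((1+\epsilon_t)\sim\mathrm{log}\mathcal{N}(0,\sigma^2)\) and checks the shape of \(\log(1+\alpha\epsilon_t)\). If the level stayed log-normally distributed, this quantity would be Normal with zero skewness; the simulation shows it is skewed instead:

```python
# Sketch: why log(1 + alpha*eps) in (6.10) is not Normal even when
# (1 + eps) ~ logN(0, sigma^2). alpha = 0.3 and sigma = 0.5 are
# illustrative choices, not values taken from the text.
import numpy as np

rng = np.random.default_rng(42)
alpha, sigma, n = 0.3, 0.5, 100_000

one_plus_eps = rng.lognormal(mean=0.0, sigma=sigma, size=n)  # (1 + eps_t)
x = np.log(1 - alpha + alpha * one_plus_eps)                 # equation (6.10)

# For a Normal random variable the sample skewness would be close to zero;
# here the left tail is compressed (x is bounded below by log(1 - alpha)),
# so the skewness comes out clearly positive.
skew = np.mean(((x - x.mean()) / x.std()) ** 3)
print(f"sample skewness of log(1 + alpha*eps): {skew:.3f}")
```

The lower bound \(\log(1-\alpha)\) on the simulated quantity is exactly the asymmetry that rules out a Normal (and hence a convenient log-normal) characterisation of the states.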

The only way to derive the conditional expectation and variance for the pure multiplicative models is to use the formulae in the ETS Taxonomy and manually derive the values in the original scale. This works well only for the ETS(M,N,N) model, for which it is possible to take the conditional expectation and variance of the recursion (6.9) in order to obtain: \[\begin{equation} \begin{aligned} \mu_{y,t+h} = \mathrm{E}(y_{t+h}|t) = & l_{t} \\ \mathrm{V}(y_{t+h}|t) = & l_{t}^2 \left( \left(1+ \alpha^2 \sigma^2 \right)^{h-1} (1 + \sigma^2) - 1 \right), \end{aligned} \tag{6.11} \end{equation}\] where \(\sigma^2\) is the variance of the error term. For the other models, the conditional moments do not have general closed forms because of the products of the terms \((1+\alpha\epsilon_t)\), \((1+\beta\epsilon_t)\), and \((1+\gamma\epsilon_t)\). It is still possible to derive the moments for special cases of \(h\), but this is a tedious process. To see why, consider how the recursion looks for the ETS(M,Md,M) model: \[\begin{equation} \begin{aligned} & y_{t+h} = l_{t+h-1} b_{t+h-1}^\phi s_{t+h-m} \left(1 + \epsilon_{t+h} \right) = \\ & l_{t} b_{t}^{\sum_{j=1}^h{\phi^j}} s_{t+h-m\lceil\frac{h}{m}\rceil} \prod_{j=1}^{h-1} \left( (1 + \alpha \epsilon_{t+j}) \prod_{i=1}^{j} (1 + \beta \epsilon_{t+i})^{\phi^{j-i}} \right) \prod_{j=1}^{\lceil\frac{h}{m}\rceil} \left(1 + \gamma \epsilon_{t+j}\right) \left(1 + \epsilon_{t+h} \right) . \end{aligned} \tag{6.12} \end{equation}\] In general, the conditional expectation of the recursion (6.12) does not have a simple form, because of the difficulties in calculating the expectation of \((1 + \alpha \epsilon_{t+j})(1 + \beta \epsilon_{t+i})^{\phi^{j-i}}(1 + \gamma \epsilon_{t+j})\). 
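The closed forms in (6.11) can be checked numerically. The sketch below assumes \(\epsilon_t\sim\mathcal{N}(0,\sigma^2)\), so that \(\mathrm{E}(1+\epsilon_t)=1\); the values of \(\alpha\), \(\sigma\), \(l_t\), and \(h\) are illustrative, not taken from the text. It simulates \(y_{t+h}\) directly from the ETS(M,N,N) recursion and compares the sample moments with (6.11):

```python
# Sketch: verifying the ETS(M,N,N) conditional moments (6.11) by simulation.
# Assumption: eps ~ N(0, sigma^2), so E(1 + eps) = 1; all numeric values
# below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
alpha, sigma, l_t, h, n = 0.3, 0.05, 100.0, 5, 200_000

eps = rng.normal(0.0, sigma, size=(n, h))
# y_{t+h} = l_t * prod_{j=1}^{h-1} (1 + alpha*eps_{t+j}) * (1 + eps_{t+h})
y = l_t * np.prod(1 + alpha * eps[:, :-1], axis=1) * (1 + eps[:, -1])

mu_theory = l_t
var_theory = l_t**2 * ((1 + alpha**2 * sigma**2) ** (h - 1) * (1 + sigma**2) - 1)

print(f"mean:     simulated {y.mean():.3f} vs closed form {mu_theory:.3f}")
print(f"variance: simulated {y.var():.3f} vs closed form {var_theory:.3f}")
```

With a large number of simulated paths, both sample moments land on the closed forms up to Monte Carlo error, which is exactly why ETS(M,N,N) is the benign special case among the pure multiplicative models.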
In the simple case of \(h=2\) and \(m>h\), the conditional expectation simplifies to: \[\begin{equation} \mu_{y,t+2} = l_{t} b_{t}^{\phi+\phi^2} \left(1 + \alpha \beta \sigma^2 \right), \tag{6.13} \end{equation}\] which introduces the second moment, the variance of the error term \(\sigma^2\). The case of \(h=3\) implies the appearance of the third moment, \(h=4\) the fourth, and so on. This is why there are no closed forms for the conditional moments of the pure multiplicative models with trend and/or seasonality. In some special cases, when the smoothing parameters and the variance of the error term are small, it is possible to use the approximate formulae proposed by Hyndman et al. (2008), and in the special case when all smoothing parameters are equal to zero, or when \(h=1\), it is also possible to use the point forecast formulae from the ETS Taxonomy. But in general, the best that can be done is to simulate possible future paths from the respective ETS model (using the formulae from the taxonomy) and then calculate the mean and variance of those paths. In general, it can be shown that: \[\begin{equation} \hat{y}_{t+h} \leq \check{y}_{t+h} \leq \mu_{y,t+h} , \tag{6.14} \end{equation}\] where \(\mu_{y,t+h}\) is the conditional h-steps-ahead expectation, \(\check{y}_{t+h}\) is the conditional h-steps-ahead geometric expectation (the expectation in logarithms), and \(\hat{y}_{t+h}\) is the point forecast (Svetunkov and Boylan, 2020a).
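The simulation approach also makes the inequality (6.14) tangible. The sketch below assumes, as in (6.10), that \((1+\epsilon_t)\sim\mathrm{log}\mathcal{N}(0,\sigma^2)\), so the geometric expectation of \((1+\epsilon_t)\) is one (note that this implies \(\mathrm{E}(1+\epsilon_t)>1\), unlike the derivation of (6.11), which assumed a unit arithmetic mean). All numeric values are illustrative. It estimates the three quantities in (6.14) for ETS(M,N,N) from simulated paths:

```python
# Sketch: illustrating the ordering (6.14) for ETS(M,N,N) by simulation.
# Assumption: (1 + eps) ~ logN(0, sigma^2), so its geometric mean is 1;
# alpha, sigma, l_t, h are illustrative values.
import numpy as np

rng = np.random.default_rng(1)
alpha, sigma, l_t, h, n = 0.3, 0.3, 100.0, 10, 200_000

one_plus_eps = rng.lognormal(0.0, sigma, size=(n, h))
# Using (6.10): 1 + alpha*eps = 1 - alpha + alpha*(1 + eps).
paths = (l_t
         * np.prod(1 - alpha + alpha * one_plus_eps[:, :-1], axis=1)
         * one_plus_eps[:, -1])

point = l_t                                 # point forecast  \hat{y}_{t+h}
geometric = np.exp(np.log(paths).mean())    # geometric expectation \check{y}_{t+h}
arithmetic = paths.mean()                   # expectation \mu_{y,t+h}

print(f"point {point:.2f} <= geometric {geometric:.2f}"
      f" <= mean {arithmetic:.2f}")
```

The gap between the three values grows with the horizon and with \(\sigma^2\), which is why reporting only the point forecast understates the expected value for multiplicative models.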