
## 6.1 Model formulation

The pure multiplicative ETS implemented in the ADAM framework can be formulated in logarithms, similarly to how the pure additive ADAM ETS is formulated in (5.4): \begin{equation} \begin{aligned} \log y_t = & \mathbf{w}' \log(\mathbf{v}_{t-\mathbf{l}}) + \log(1 + \epsilon_{t}) \\ \log \mathbf{v}_{t} = & \mathbf{F} \log \mathbf{v}_{t-\mathbf{l}} + \log(\mathbf{1}_k + \mathbf{g} \epsilon_t) \end{aligned}, \tag{6.1} \end{equation} where $$\mathbf{1}_k$$ is a vector of ones containing $$k$$ elements (the number of components in the model), $$\log$$ is the natural logarithm applied element-wise to vectors, and all the other values have been discussed in the previous sections. An example of a pure multiplicative model is ETS(M,M,M), for which we have the following values: \begin{equation} \begin{aligned} \mathbf{w} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, & \mathbf{F} = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, & \mathbf{g} = \begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix}, \\ \mathbf{v}_{t} = \begin{pmatrix} l_t \\ b_t \\ s_t \end{pmatrix}, & \mathbf{l} = \begin{pmatrix} 1 \\ 1 \\ m \end{pmatrix}, & \mathbf{1}_k = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \end{aligned}. 
\tag{6.2} \end{equation} Inserting these values into equation (6.1), we obtain the model in logarithms: \begin{equation} \begin{aligned} \log y_t = & \log l_{t-1} + \log b_{t-1} + \log s_{t-m} + \log \left(1 + \epsilon_{t} \right) \\ \log l_{t} = & \log l_{t-1} + \log b_{t-1} + \log( 1 + \alpha \epsilon_{t}) \\ \log b_{t} = & \log b_{t-1} + \log( 1 + \beta \epsilon_{t}) \\ \log s_{t} = & \log s_{t-m} + \log( 1 + \gamma \epsilon_{t}) \\ \end{aligned} , \tag{6.3} \end{equation} which after exponentiation becomes the model discussed in Section 3.5: \begin{equation} \begin{aligned} y_{t} = & l_{t-1} b_{t-1} s_{t-m} (1 + \epsilon_t) \\ l_t = & l_{t-1} b_{t-1} (1 + \alpha \epsilon_t) \\ b_t = & b_{t-1} (1 + \beta \epsilon_t) \\ s_t = & s_{t-m} (1 + \gamma \epsilon_t) \end{aligned}. \tag{6.4} \end{equation} An interesting observation is that when the smoothing parameters are close to zero, the model (6.3) produces values similar to those of ETS(A,A,A) applied to the data in logarithms. This becomes apparent when recalling that for small $x$: \begin{equation} \log(1+x) \approx x . \tag{6.5} \end{equation} Based on that, for small values of smoothing parameters the model becomes close to the following one: \begin{equation} \begin{aligned} \log y_t = & \log l_{t-1} + \log b_{t-1} + \log s_{t-m} + \epsilon_{t} \\ \log l_{t} = & \log l_{t-1} + \log b_{t-1} + \alpha \epsilon_{t} \\ \log b_{t} = & \log b_{t-1} + \beta \epsilon_{t} \\ \log s_{t} = & \log s_{t-m} + \gamma \epsilon_{t} \\ \end{aligned} , \tag{6.6} \end{equation} which is ETS(A,A,A) applied to the data in logarithms (note that the first equation also relies on the variance of $\epsilon_t$ being small). In many cases the smoothing parameters will be small enough for the approximation (6.5) to hold, so the two models will produce similar forecasts. 
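To illustrate this similarity numerically, here is a small simulation sketch in Python (purely illustrative; the parameter values, initial states, and variable names are assumptions, not taken from the book). It drives the ETS(M,M,M) recursion (6.4) and the additive-in-logarithms recursion (6.6) with the same errors and small smoothing parameters, then compares the generated series:

```python
import numpy as np

rng = np.random.default_rng(42)

# Small smoothing parameters, so that log(1 + g * eps) ~ g * eps (assumed values)
alpha, beta, gamma = 0.05, 0.01, 0.02
m = 4    # seasonal frequency (assumed for illustration)
T = 200

# Initial states (assumed values)
l, b = 100.0, 1.01
s = np.array([0.9, 1.1, 0.95, 1.05])

# Parallel states for the additive model in logarithms (6.6)
log_l, log_b = np.log(l), np.log(b)
log_s = np.log(s.copy())

y_mult = np.empty(T)
y_logadd = np.empty(T)
for t in range(T):
    eps = rng.normal(0, 0.01)  # small error variance (assumed)
    i = t % m
    # ETS(M,M,M) recursion (6.4)
    y_mult[t] = l * b * s[i] * (1 + eps)
    l, b = l * b * (1 + alpha * eps), b * (1 + beta * eps)
    s[i] = s[i] * (1 + gamma * eps)
    # ETS(A,A,A) in logarithms (6.6), driven by the same eps
    y_logadd[t] = np.exp(log_l + log_b + log_s[i] + eps)
    log_l, log_b = log_l + log_b + alpha * eps, log_b + beta * eps
    log_s[i] = log_s[i] + gamma * eps

# Maximum relative difference between the two generated series
print(np.max(np.abs(y_mult - y_logadd) / y_mult))
```

With these small smoothing parameters and error variance, the relative difference between the two series stays tiny; increasing either makes the approximation (6.5) deteriorate and the series diverge.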
The main benefit of (6.6) is that it has closed forms for the conditional mean and variance, so it can be used instead of (6.3) to obtain conditional moments and quantiles of the distribution when the smoothing parameters are close to zero and the variance of the error term is small. However, the form (6.6) does not permit mixed components: it only supports the multiplicative ones, which makes it detached from the other ETS models.
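As a sketch of how those closed forms can be used (assuming a Gaussian $\epsilon_t$, so that $y_t$ is conditionally log-normal; all numeric values below are assumptions for illustration), the one-step-ahead mean, variance, and a 95% interval follow directly from the measurement equation of (6.6):

```python
import numpy as np
from statistics import NormalDist

# Current states and error std. deviation (assumed values)
l_t, b_t, s_tm = 100.0, 1.01, 1.05
sigma = 0.05

# One step ahead, (6.6) implies log y_{t+1} ~ N(mu, sigma^2) with
mu = np.log(l_t) + np.log(b_t) + np.log(s_tm)

# Log-normal conditional moments and quantiles
mean_y = np.exp(mu + sigma**2 / 2)
var_y = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
z = NormalDist().inv_cdf(0.975)
interval = (np.exp(mu - z * sigma), np.exp(mu + z * sigma))

# Sanity check of the closed-form mean against simulation
rng = np.random.default_rng(0)
sim = np.exp(mu + rng.normal(0, sigma, 100_000))
print(mean_y, sim.mean(), interval)
```

Note the $\sigma^2/2$ correction: the conditional mean of $y$ exceeds $\exp(\mu)$ (the conditional median), which is the usual log-normal adjustment when moving from moments in logarithms back to the original scale.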