\( \newcommand{\mathbbm}[1]{\boldsymbol{\mathbf{#1}}} \)

6.1 Model formulation

The pure multiplicative ETS model implemented in the ADAM framework can be formulated in logarithms in the following way: \[\begin{equation} \begin{aligned} \log y_t = & \mathbf{w}^\prime \log(\mathbf{v}_{t-\boldsymbol{l}}) + \log(1 + \epsilon_{t}) \\ \log \mathbf{v}_{t} = & \mathbf{F} \log \mathbf{v}_{t-\boldsymbol{l}} + \log(\mathbf{1}_k + \mathbf{g} \epsilon_t) \end{aligned}, \tag{6.1} \end{equation}\] where \(\mathbf{1}_k\) is a vector of ones containing \(k\) elements (the number of components in the model), \(\log\) is the natural logarithm, applied element-wise to the vectors, and all the other objects correspond to the ones discussed in Section 5.1. An example of a pure multiplicative model is ETS(M,M,M), for which we have the following: \[\begin{equation} \begin{aligned} \mathbf{w} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, & \mathbf{F} = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, & \mathbf{g} = \begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix}, \\ \mathbf{v}_{t} = \begin{pmatrix} l_t \\ b_t \\ s_t \end{pmatrix}, & \boldsymbol{l} = \begin{pmatrix} 1 \\ 1 \\ m \end{pmatrix}, & \mathbf{1}_k = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \end{aligned}. 
\tag{6.2} \end{equation}\] By inserting these values in equation (6.1), we obtain the model in logarithms: \[\begin{equation} \begin{aligned} \log y_t = & \log l_{t-1} + \log b_{t-1} + \log s_{t-m} + \log \left(1 + \epsilon_{t} \right) \\ \log l_{t} = & \log l_{t-1} + \log b_{t-1} + \log( 1 + \alpha \epsilon_{t}) \\ \log b_{t} = & \log b_{t-1} + \log( 1 + \beta \epsilon_{t}) \\ \log s_{t} = & \log s_{t-m} + \log( 1 + \gamma \epsilon_{t}) \\ \end{aligned} , \tag{6.3} \end{equation}\] which, after exponentiation, is equal to the model discussed in Section 4.2: \[\begin{equation} \begin{aligned} y_{t} = & l_{t-1} b_{t-1} s_{t-m} (1 + \epsilon_t) \\ l_t = & l_{t-1} b_{t-1} (1 + \alpha \epsilon_t) \\ b_t = & b_{t-1} (1 + \beta \epsilon_t) \\ s_t = & s_{t-m} (1 + \gamma \epsilon_t) \end{aligned}. \tag{6.4} \end{equation}\] This example demonstrates that the model (6.1) underlies the other pure multiplicative ETS models. While it can be used for some inference, it has limitations due to the \(\log(\mathbf{1}_k + \mathbf{g} \epsilon_t)\) term, which introduces a non-linear transformation of the smoothing parameters and the error term (this will be discussed in more detail in the next sections).
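The recursion (6.4) can be sketched directly in code. The snippet below is a minimal illustration, not part of the ADAM implementation: the seasonal period, initial components, smoothing parameters, error variance, and sample size are all arbitrary assumptions chosen for demonstration, with Gaussian \(\epsilon_t\).

```python
import numpy as np

rng = np.random.default_rng(42)
m = 4                                   # seasonal period (assumed)
alpha, beta, gamma = 0.2, 0.05, 0.1     # smoothing parameters (assumed)
l, b = 100.0, 1.02                      # initial level and trend
s = np.array([0.9, 1.1, 1.05, 0.95])    # initial seasonal indices

n_obs = 40
y = np.empty(n_obs)
for t in range(n_obs):
    eps = rng.normal(0, 0.03)           # error term of the (1 + eps_t) form
    s_tm = s[t % m]                     # seasonal component lagged by m
    y[t] = l * b * s_tm * (1 + eps)     # measurement equation of (6.4)
    l = l * b * (1 + alpha * eps)       # level update
    b = b * (1 + beta * eps)            # trend update
    s[t % m] = s_tm * (1 + gamma * eps) # seasonal update
```

Because every component enters multiplicatively and the error terms are small relative to one, all generated values stay positive, which is one of the attractive properties of the pure multiplicative models.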

An interesting observation is that the model (6.3) will produce values similar to those of the ETS(A,A,A) model applied to the data in logarithms, when the values of the smoothing parameters are close to zero. This becomes apparent when recalling the first-order approximation of the logarithm: \[\begin{equation} \log(1+x) \approx x \text{ for } x \to 0 . \tag{6.5} \end{equation}\] Based on this, in the case of small values of the smoothing parameters, the model becomes close to the following one: \[\begin{equation} \begin{aligned} \log y_t = & \log l_{t-1} + \log b_{t-1} + \log s_{t-m} + \epsilon_{t} \\ \log l_{t} = & \log l_{t-1} + \log b_{t-1} + \alpha \epsilon_{t} \\ \log b_{t} = & \log b_{t-1} + \beta \epsilon_{t} \\ \log s_{t} = & \log s_{t-m} + \gamma \epsilon_{t} \\ \end{aligned} , \tag{6.6} \end{equation}\] which is the ETS(A,A,A) model applied to the data in logarithms. In many cases, the smoothing parameters will be small enough for the approximation (6.5) to hold, so the two models will produce similar forecasts. The main benefit of (6.6) is that it has closed forms for the conditional mean and variance. However, the form (6.6) does not permit mixed components: it only supports the multiplicative ones, making it detached from the other ETS models.
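The closeness of the two formulations for small smoothing parameters can be checked numerically. The sketch below restricts attention to a level-only model for simplicity (the ETS(M,N,N) special case of (6.4)); the value of \(\alpha\), the error variance, and the sample size are arbitrary assumptions. It runs the exact multiplicative level update side by side with its log-additive counterpart from (6.6), feeding both the same errors:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05                        # small smoothing parameter (assumed)
eps = rng.normal(0, 0.02, 100)      # same error series for both recursions

l_mult = 100.0                      # level of the multiplicative model
log_l_add = np.log(100.0)           # log-level of the additive-in-logs model
for e in eps:
    l_mult *= (1 + alpha * e)       # exact: log l_t = log l_{t-1} + log(1 + alpha*e)
    log_l_add += alpha * e          # approximation (6.5): log(1 + x) ~ x

# Discrepancy accumulates at order (alpha*eps)^2 per step, so it stays tiny
diff = abs(np.log(l_mult) - log_l_add)
```

Since each step contributes an error of order \((\alpha \epsilon_t)^2/2\), the two trajectories remain nearly indistinguishable here, consistent with the claim that the models produce similar forecasts when the smoothing parameters are small.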