
Chapter 7 ADAM: Pure multiplicative ETS

The pure multiplicative ETS models implemented in the ADAM framework can be formulated using logarithms, similarly to how the pure additive ADAM ETS was formulated in (6.1): \[\begin{equation} \begin{aligned} \log y_t = & \mathbf{w}' \log(\mathbf{v}_{t-\mathbf{l}}) + \log(1 + \epsilon_{t}) \\ \log \mathbf{v}_{t} = & \mathbf{F} \log \mathbf{v}_{t-\mathbf{l}} + \log(\mathbf{1}_k + \mathbf{g} \epsilon_t) \end{aligned}, \tag{7.1} \end{equation}\] where \(\mathbf{1}_k\) is a vector of ones containing \(k\) elements (the number of components in the model), \(\log\) is the natural logarithm applied element-wise to the vectors, and all the other values have been discussed in the previous sections. An example of a pure multiplicative model is ETS(M,M,M), for which we have the following values: \[\begin{equation} \begin{aligned} \mathbf{w} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} & \mathbf{F} = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} & \mathbf{g} = \begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix} \\ \mathbf{v}_{t} = \begin{pmatrix} l_t \\ b_t \\ s_t \end{pmatrix} & \mathbf{l} = \begin{pmatrix} 1 \\ 1 \\ m \end{pmatrix} & \mathbf{1}_k = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \end{aligned}. \tag{7.2} \end{equation}\] Inserting these values into equation (7.1) gives the model in logarithms: \[\begin{equation} \begin{aligned} \log y_t = & \log l_{t-1} + \log b_{t-1} + \log s_{t-m} + \log \left(1 + \epsilon_{t} \right) \\ \log l_{t} = & \log l_{t-1} + \log b_{t-1} + \log( 1 + \alpha \epsilon_{t}) \\ \log b_{t} = & \log b_{t-1} + \log( 1 + \beta \epsilon_{t}) \\ \log s_{t} = & \log s_{t-m} + \log( 1 + \gamma \epsilon_{t}) \\ \end{aligned} , \tag{7.3} \end{equation}\] which after exponentiation becomes the model discussed in the ETS Taxonomy section: \[\begin{equation} \begin{aligned} y_{t} = & l_{t-1} b_{t-1} s_{t-m} (1 + \epsilon_t) \\ l_t = & l_{t-1} b_{t-1} (1 + \alpha \epsilon_t) \\ b_t = & b_{t-1} (1 + \beta \epsilon_t) \\ s_t = & s_{t-m} (1 + \gamma \epsilon_t) \end{aligned}. \tag{7.4} \end{equation}\] An interesting observation is that model (7.3) will produce values close to those of the ETS(A,A,A) model applied to the data in logarithms when the smoothing parameters are close to zero. This becomes apparent when recalling that: \[\begin{equation} \lim\limits_{x \to 0} \frac{\log(1+x)}{x} = 1 , \end{equation}\] i.e. that \(\log(1+x) \approx x\) for small \(x\). Based on that, when the smoothing parameters take small values, the model becomes close to the following one: \[\begin{equation} \begin{aligned} \log y_t = & \log l_{t-1} + \log b_{t-1} + \log s_{t-m} + \epsilon_{t} \\ \log l_{t} = & \log l_{t-1} + \log b_{t-1} + \alpha \epsilon_{t} \\ \log b_{t} = & \log b_{t-1} + \beta \epsilon_{t} \\ \log s_{t} = & \log s_{t-m} + \gamma \epsilon_{t} \\ \end{aligned} , \tag{7.5} \end{equation}\]

which is ETS(A,A,A) applied to the data in logarithms. In many cases the smoothing parameters will be small enough for this approximation to hold, so the two models will produce similar forecasts.
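To see how the matrix form (7.1) relates to the component form (7.4), the two recursions can be run side by side on the same error series. The following Python sketch is purely illustrative (it is not the book's own code; the parameter values, initial states, and seasonal indices are arbitrary assumptions) and checks that the log-matrix recursion and the multiplicative component recursion produce identical values:

```python
import numpy as np

rng = np.random.default_rng(42)
m = 4                                   # seasonal period (assumed for illustration)
alpha, beta, gamma = 0.3, 0.1, 0.2      # arbitrary smoothing parameters

# Matrix form of equation (7.1) for ETS(M,M,M), equation (7.2)
w = np.ones(3)
F = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
g = np.array([alpha, beta, gamma])

# Initial states: level, multiplicative trend, and m seasonal indices
l_t, b_t = 100.0, 1.02
s = np.array([0.9, 1.1, 1.05, 0.95])    # circular buffer: s[t % m] holds s_{t-m}

y_matrix, y_component = [], []
for t in range(40):
    eps = rng.normal(0, 0.02)           # relative error term
    # lagged state vector v_{t-l}: level and trend lagged by 1, seasonal by m
    v_lag = np.array([l_t, b_t, s[t % m]])
    # measurement and transition in logarithms, equation (7.1)
    y_matrix.append(np.exp(w @ np.log(v_lag) + np.log(1 + eps)))
    v_new = np.exp(F @ np.log(v_lag) + np.log(1 + g * eps))
    # component form, equation (7.4), using the same lagged states and error
    y_component.append(l_t * b_t * s[t % m] * (1 + eps))
    l_t = l_t * b_t * (1 + alpha * eps)
    b_t = b_t * (1 + beta * eps)
    s[t % m] = s[t % m] * (1 + gamma * eps)
    # both routes give the same new state vector
    assert np.allclose(v_new, [l_t, b_t, s[t % m]])

assert np.allclose(y_matrix, y_component)
```

The equivalence holds exactly for any error series, because exponentiating (7.3) term by term recovers (7.4); the simulation merely makes that algebra tangible.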
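The closeness of the two models for small smoothing parameters can also be checked numerically. The sketch below (again an illustration with assumed, deliberately small parameter values, not code from the book) runs the multiplicative recursion (7.4) and the additive-in-logarithms recursion (7.5) on the same errors and tracks the gap between the logarithm of the multiplicative level and the additive level:

```python
import math
import random

random.seed(1)
alpha, beta, gamma = 0.02, 0.01, 0.01   # deliberately small smoothing parameters
m = 4

# ETS(M,M,M) states, equation (7.4)
l_m, b_m = 100.0, 1.01
s_m = [0.9, 1.1, 1.05, 0.95]
# ETS(A,A,A) applied to logarithms, equation (7.5): states start as logs of the above
l_a, b_a = math.log(l_m), math.log(b_m)
s_a = [math.log(v) for v in s_m]

max_gap = 0.0
for t in range(200):
    eps = random.gauss(0, 0.05)
    i = t % m
    # multiplicative recursion: states multiplied by (1 + parameter * error)
    l_m, b_m = l_m * b_m * (1 + alpha * eps), b_m * (1 + beta * eps)
    s_m[i] = s_m[i] * (1 + gamma * eps)
    # additive recursion in logs: log(1 + x) replaced by x in the state updates
    l_a, b_a = l_a + b_a + alpha * eps, b_a + beta * eps
    s_a[i] = s_a[i] + gamma * eps
    max_gap = max(max_gap, abs(math.log(l_m) - l_a))

# with small smoothing parameters, log of the multiplicative level stays close
# to the additive-in-logs level over the whole sample
assert max_gap < 0.01
```

The discrepancy per step is of order \((\alpha \epsilon_t)^2 / 2\) (the second-order term of \(\log(1+x)\)), so it stays negligible while the parameters are small; with larger smoothing parameters the two recursions drift apart and the models cease to be interchangeable.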