
7.2 Mixed ADAM ETS models

Hyndman et al. (2008) proposed five classes of ETS models, based on the types of their components:

  1. ANN; AAN; AAdN; ANA; AAA; AAdA;
  2. MNN; MAN; MAdN; MNA; MAA; MAdA;
  3. MNM; MAM; MAdM;
  4. MMN; MMdN; MMM; MMdM;
  5. the remaining mixed models: ANM; AAM; AAdM; MMA; MMdA; AMN; AMdN; AMA; AMdA; AMM; AMdM.

The idea behind this split is to distinguish the models by their complexity and the availability of analytical expressions for conditional moments. Class 1 models were discussed in Chapter 5.1. They have analytical expressions for conditional mean and variance; they can be applied to any data; they have simple formulae for prediction intervals.

Hyndman et al. (2008) demonstrate that models from Class 2 have closed forms for the conditional expectation and variance, with the former corresponding to the point forecasts. However, the conditional distribution from these models is not Gaussian, so there are no exact formulae for their prediction intervals. In some cases, the Normal distribution might serve as a satisfactory approximation of the real one, but simulations should generally be preferred.

Class 3 models suffer from similar issues, but the situation worsens: there are no analytical solutions for the conditional mean and variance, and only approximations of these statistics are available.

Class 4 models were discussed in Chapter 6.1. They do not have analytical expressions for the moments, and their conditional \(h\) steps ahead distributions represent complex convolutions of products of the basic ones. Nonetheless, they are appropriate for positive data and become especially valuable when the level of a series is low, as already discussed in Chapter 6.1.

Finally, Class 5 models might have infinite variances, especially on long horizons and when the data has low values. Indeed, when the level in one of these models becomes close to zero, there is an increased chance of breaking the model via the appearance of negative values. Consider the example of the ETS(A,A,M) model, which might have a negative trend, leading to negative values, which are then multiplied by the positive seasonal indices. This would result in unreasonable values of the states. That is why, in practice, these models should only be used when the level of the series is high. Furthermore, some Class 5 models are very difficult to estimate and are very sensitive to the smoothing parameter values; this mainly applies to the multiplicative trend models.
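This mechanism can be sketched in a few lines of R. The snippet below is a hypothetical numerical illustration, not a call to any package: the values of level, trend, and seasonal indices are made up purely to show how the multiplication of a negative level by positive seasonal indices produces meaningless values.

```r
# Hypothetical illustration of how ETS(A,A,M) can break: an additive
# negative trend drags the level below zero, after which multiplication
# by the positive seasonal indices yields meaningless negative values
level <- 5
trend <- -2
seasonal <- c(1.2, 0.8)    # multiplicative seasonal indices
for (t in 1:4) {
  level <- level + trend   # level becomes 3, 1, -1, -3
  cat("t =", t, "point value:", level * seasonal[(t - 1) %% 2 + 1], "\n")
}
```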

The ets() function from the forecast package by default supports only Classes 1 – 4 for the reasons explained above, although it is possible to switch on the Class 5 models by setting the parameter restrict to FALSE.
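For example (a sketch, assuming the forecast package is installed; the AirPassengers series is used purely for illustration):

```r
library(forecast)
# By default, ets() considers only the models from Classes 1 - 4
fitRestricted <- ets(AirPassengers)
# Setting restrict=FALSE allows Class 5 models into the selection pool
fitFull <- ets(AirPassengers, restrict=FALSE)
# A specific Class 5 model, such as ETS(A,A,M), can also be requested:
fitAAM <- ets(AirPassengers, model="AAM", restrict=FALSE)
```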

To be fair, any mixed model can potentially break when the level of the series is close to zero. For example, ETS(M,A,N) can have a negative trend, which might lead to a negative level and, as a result, to the multiplication of the purely positive error term by the negative components. Estimating such a model on real data becomes a non-trivial task.

In addition, as discussed above, simulations are typically needed to produce prediction intervals for models of Classes 2 – 5 and the conditional mean and variance for models of Classes 4 – 5. All of this, in my opinion, means that a more useful classification of ETS models is the following one (first proposed by Akram et al., 2009):

  1. Pure additive models (Chapter 5.1): ANN; AAN; AAdN; ANA; AAA; AAdA;
  2. Pure multiplicative models (Chapter 6.1): MNN; MMN; MMdN; MNM; MMM; MMdM;
  3. Mixed models with non-multiplicative trend (Section 7.3): MAN; MAdN; MNA; MAA; MAdA; MAM; MAdM; ANM; AAM; AAdM;
  4. Mixed models with multiplicative trend (Section 7.4): MMA; MMdA; AMN; AMdN; AMA; AMdA; AMM; AMdM.

The main idea behind the split into (3) and (4) is that the multiplicative trend makes it almost impossible to derive formulae for the conditional moments of the distribution. This class of models can therefore be considered the most challenging one.

The adam() function supports all 30 ETS models, but you should keep in mind the limitations of some of them, discussed in this section. The es() function from the smooth package is a wrapper of adam() and, as a result, supports the same pool of models.
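To illustrate (a sketch, assuming the smooth package is installed; the model and data are chosen purely for demonstration):

```r
library(smooth)
# adam() will estimate any of the 30 ETS models, including mixed ones
fit <- adam(AirPassengers, model="MAM")
# For non-additive models, simulation-based intervals are preferable
forecast(fit, h=12, interval="simulated")
# es() is a wrapper of adam() and accepts the same model specifications
fitEs <- es(AirPassengers, model="MAM")
```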


• Akram, M., Hyndman, R.J., Ord, J.K., 2009. Exponential Smoothing and Non-negative Data. Australian & New Zealand Journal of Statistics. 51, 415–432. https://doi.org/10.1111/j.1467-842X.2009.00555.x
• Hyndman, R.J., Koehler, A.B., Ord, J.K., Snyder, R.D., 2008. Forecasting with Exponential Smoothing: The State Space Approach. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-71918-2