
7.2 Mixed ADAM ETS models

Hyndman et al. (2008) proposed five classes of ETS models, based on the types of their components:

  1. ANN; AAN; AAdN; ANA; AAA; AAdA;
  2. MNN; MAN; MAdN; MNA; MAA; MAdA;
  3. MNM; MAM; MAdM;
  4. MMN; MMdN; MMM; MMdM;
  5. ANM; AAM; AAdM; MMA; MMdA; AMN; AMdN; AMA; AMdA; AMM; AMdM.

The idea behind this split is to distinguish the models by their complexity and by the availability of analytical expressions for conditional moments. Class 1 models have been discussed in Chapter 5.1. They have analytical expressions for the conditional mean and variance, they can be applied to any type of data, and there are simple formulae for their prediction intervals.

Hyndman et al. (2008) demonstrate that models from Class 2 have closed forms for the conditional expectation and variance, with the former corresponding to the point forecasts. However, the conditional distribution of these models is not Gaussian, so there are no formulae for their prediction intervals. In some cases the Normal distribution might serve as a reasonable approximation to the true one, but in general simulations should be preferred.

Class 3 models suffer from similar issues, but the situation is worse: there are no analytical solutions for the conditional mean and variance, only approximations of these statistics.

Class 4 models have been discussed in Chapter 6.1. They do not have analytical expressions for the moments, and their conditional h steps ahead distributions represent a complex convolution of products of the underlying distribution. Nonetheless, they are appropriate for positive data and become especially useful when the level of the series is low, as already discussed in Chapter 6.1.

Finally, Class 5 models might have infinite variances, especially on long horizons and when the data has low values. Indeed, when the level in one of these models comes close to zero, there is an increased chance of breaking the model due to the appearance of negative values (consider the example of the ETS(A,A,M) model, which might have a negative trend, leading to negative values that are then multiplied by the positive seasonal indices). That is why in practice these models should only be used when the level of the series is high. Furthermore, some of the models from Class 5 are very difficult to estimate and are very sensitive to the values of the smoothing parameters.
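The ETS(A,A,M) failure mode described above can be sketched numerically. The following minimal base R illustration uses made-up values for the initial level, trend, and seasonal indices (all hypothetical) and computes the conditional mean of the model, roughly (l + i*b)*s, with the error term switched off:

```r
# Hypothetical starting states for an ETS(A,A,M) model:
level <- 100           # initial level
trend <- -15           # negative additive trend
seasonal <- c(0.9, 1.1)  # multiplicative seasonal indices, m = 2

h <- 10
forecasts <- numeric(h)
for (i in 1:h) {
  # Approximate conditional mean of ETS(A,A,M): (l + i*b) * s
  forecasts[i] <- (level + i * trend) * seasonal[(i - 1) %% 2 + 1]
}
round(forecasts, 1)
```

As soon as level + i*trend drops below zero, the positive seasonal indices are multiplied by a negative value, producing negative "forecasts" that make no sense for positive data.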

The ets() function from the forecast package supports only Classes 1–4, for the reasons explained above.
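As an illustration, here is a sketch of how this restriction manifests in the forecast package. The restrict argument of ets() is real, though its exact behaviour may differ across package versions:

```r
library(forecast)

# ETS(M,A,M) belongs to Class 3 and is accepted by default:
fit <- ets(AirPassengers, model = "MAM")

# ETS(A,M,M) lies outside Classes 1-4, so by default ets() rejects it.
# Passing restrict = FALSE lifts the restriction, at the user's own risk:
# fit2 <- ets(AirPassengers, model = "AMM", restrict = FALSE)
```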

To be fair, any mixed model can potentially break when the level of the series is close to zero. For example, ETS(M,A,N) can have a negative trend, which might lead to a negative level and, as a result, to the multiplication of the purely positive error term by negative components. Estimating such a model on real data becomes a non-trivial task.

In addition, as discussed above, simulations are typically needed in order to produce prediction intervals for models of Classes 2–5 and the conditional mean and variance for models of Classes 4–5. All of this, in my opinion, means that a more useful classification of ETS models is the following (first proposed by Akram et al., 2009):

  A. Pure additive models (Chapter 5.1): ANN; AAN; AAdN; ANA; AAA; AAdA;
  B. Pure multiplicative models (Chapter 6.1): MNN; MMN; MMdN; MNM; MMM; MMdM;
  C. Mixed models with non-multiplicative trend (Section 7.3): MAN; MAdN; MNA; MAA; MAdA; MAM; MAdM; ANM; AAM; AAdM;
  D. Mixed models with multiplicative trend (Section 7.4): MMA; MMdA; AMN; AMdN; AMA; AMdA; AMM; AMdM.

The main reason for the split into (C) and (D) is that the presence of a multiplicative trend makes it almost impossible to derive formulae for the conditional moments of the distribution. This class of models can thus be considered the most challenging one.

The adam() function supports all 30 ETS models, but you should keep in mind the limitations of some of them, discussed in this section.
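For example, a session with the smooth package might look as follows (the model codes follow the classification above; the estimation results will of course depend on the data):

```r
library(smooth)

# A mixed model from class (C); adam() estimates it, but keep in mind
# that its forecasts may misbehave when the level of the series is low:
fitC <- adam(AirPassengers, model = "MAM")

# A model from class (D), with multiplicative trend; the moments of its
# conditional distribution need to be obtained via simulation:
fitD <- adam(AirPassengers, model = "MMdM")
```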


• Akram, M., Hyndman, R.J., Ord, J.K., 2009. Exponential Smoothing and Non-negative Data. Australian & New Zealand Journal of Statistics. 51, 415–432.
• Hyndman, R.J., Koehler, A.B., Ord, J.K., Snyder, R.D., 2008. Forecasting with Exponential Smoothing. Springer Berlin Heidelberg.