# Chapter 20 What’s next?

Now that we have reached the final pages of this monograph, I want to pause and look back at what we have discussed and what remains untold in the story of the Augmented Dynamic Adaptive Model.

The reason I did not call this monograph “Forecasting with ETS” or “Forecasting with State Space Models” is that the framework proposed here is not the same as ETS, and it does not rely on the standard state space model. The combination of ETS, ARIMA, and Regression in one unified model has not been discussed in the literature before this book. But I did not stop at that: I extended the model with a variety of distributions. Dynamic models typically rely on Normality, which is often unrealistic in practice, while ADAM supports several real-valued and several positive distributions. Furthermore, a model that can be applied to both regular and intermittent demand has so far been developed only by Svetunkov and Boylan (2023a), in the ADAM framework. In addition, ADAM can be extended with multiple seasonal components, making it applicable to high-frequency data. Moreover, ADAM supports not only the location but also the scale model, allowing us to model and predict the scale of the distribution explicitly (giving it a connection with GARCH models). All of the aspects mentioned above are united in one approach, giving immense flexibility to an analyst.

But what’s next? While we have discussed the important aspects of ADAM, there are still several things that I have not yet managed to make work.

The first one is ADAM with the Asymmetric Laplace and similar distributions with a non-zero mean (related to this is the estimation of ADAM with non-standard loss functions, such as the pinball loss). While it is possible to use such distributions in theory, they do not work as intended in dynamic models, because the latter rely on the assumption that the mean of the error term is zero. They work perfectly in the case of the regression model (e.g. see how the `alm()` function from the `greybox` package works with the Asymmetric Laplace) but fail when a model has MA-related terms. This is because the model becomes more adaptive to changes, pulls towards the centre, and cannot maintain the desired quantile. An introduction of such distributions would imply changing the architecture of the state space model (this was briefly discussed in Section 14.7).
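To illustrate the regression case, here is a minimal sketch of fitting a quantile regression via the Asymmetric Laplace distribution in `alm()`. This is an illustration under my reading of the `greybox` interface (in particular, passing the quantile via the `alpha` argument), so it is worth cross-checking against the package documentation:

```r
library(greybox)

# Generate simple regression data
set.seed(41)
x <- rnorm(100, 10, 2)
y <- 5 + 2 * x + rnorm(100, 0, 3)

# Fit a regression assuming the Asymmetric Laplace distribution,
# targeting the 0.95 quantile of the response
almModel <- alm(y ~ x, data = data.frame(y = y, x = x),
                distribution = "dalaplace", alpha = 0.95)
summary(almModel)
```

In a static regression like this, the fitted line settles at the requested quantile. As discussed above, once MA-related terms enter a dynamic model, the same distribution pulls the fit back towards the centre and the desired quantile is no longer maintained.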

Second, the literature on model combination and selection has seen several bright additions, such as the stellar paper by Kourentzes et al. (2019a) on pooling. This is neither implemented in the `adam()` function nor discussed in the monograph. Yes, one can use ADAM to do pooling, but it would make sense to introduce it as a part of the ADAM framework.

Third, related to the previous point, is the selection and combination of models based on cross-validation techniques (specifically, using the rolling origin discussed in Section 2.4). The models selected and combined this way would differ from the AIC-based ones, hopefully doing better in terms of long-term forecasts.

Fourth, we have not discussed multiple frequency models in the detail that they deserve. For example, we have not mentioned how to diagnose such models when the sample includes thousands of observations. The classical statistical approaches discussed in Section 14 typically fail in this situation, and other tools should be used in this context.

Fifth, `adam()` has a built-in approach to missing values that relies on interpolation and the intermittent state space model (from Section 13). While this already works in practice, there are some aspects of it worth discussing that have been left outside this monograph. Most importantly, it is not very clear how to interpolate the components of ADAM in these cases.

Sixth, when discussing pure multiplicative ETS models (Chapter 6), we mentioned that it is possible to apply the pure additive ETS to the data in logarithms to achieve results similar to those of the conventional pure multiplicative model. Akram et al. (2009) have investigated this direction, but such models are not yet supported by ADAM. They can be constructed manually, but their implementation in R functions and a detailed explanation of how they work can be considered another potential direction for future work.

Finally, while I have tried to introduce examples of the application of ADAM, case studies for several contexts would be helpful. These would show how ADAM can be used for decision making in inventory management (we touched on this topic in Subsection 18.4.4), scheduling, staff allocation, etc.

All of this will hopefully come in the next editions of this monograph.