The discussion of multicollinearity and how to diagnose it was carried out in Section ??. When it comes to dynamic models, the situation differs.
In the case of the conventional ARIMA model, multicollinearity is inevitable by construction because of the autocorrelation between the actual values. This is why Heteroskedasticity- and Autocorrelation-Consistent (HAC) estimators of the covariance matrix of parameters (see Section 15.4 of Hanck et al., 2020) are sometimes used instead of the standard ones. They are designed to mitigate the issue and produce standard errors of parameters close to those that would be obtained in its absence.
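To make the idea of a HAC estimator more concrete, here is a minimal sketch of the Newey-West flavour of it in base R. The function name `neweyWest`, the fixed Bartlett-kernel lag window, and the simulated data are all illustrative assumptions, not part of the original text; in practice one would rely on a dedicated package rather than this sketch.

```r
# Sketch of a Newey-West HAC covariance matrix for an lm() fit.
# Assumption: a small, fixed lag window with Bartlett kernel weights.
neweyWest <- function(fit, lags = 2) {
  X <- model.matrix(fit)
  u <- residuals(fit)
  n <- nrow(X)
  Xu <- X * u                          # scores: each row of X times its residual
  # "Meat": contemporaneous term plus weighted autocovariances of the scores
  S <- crossprod(Xu) / n
  for (l in 1:lags) {
    w <- 1 - l / (lags + 1)            # Bartlett kernel weight
    G <- crossprod(Xu[1:(n - l), , drop = FALSE],
                   Xu[(1 + l):n, , drop = FALSE]) / n
    S <- S + w * (G + t(G))
  }
  # "Bread": inverse of the scaled cross-product of regressors
  bread <- solve(crossprod(X) / n)
  (bread %*% S %*% bread) / n          # sandwich estimator of Var(b)
}

# Usage on data with autocorrelated residuals (simulated for illustration)
set.seed(41)
y <- arima.sim(list(ar = 0.7), 100)
x <- rnorm(100)
fit <- lm(y ~ x)
sqrt(diag(neweyWest(fit)))             # HAC standard errors of the parameters
```

The HAC standard errors are typically larger than the conventional OLS ones here, because the latter ignore the positive autocorrelation in the residuals.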
Finally, in the case of state space models, and specifically ETS, multicollinearity might not cause issues as serious as in the case of regression. For example, it is possible to use all the values of a categorical variable, avoiding the dummy variables trap. The values of the categorical variable are in this case considered as changes relative to the baseline. The classical example of this is a seasonal model, for example, ETS(A,A,A), where the seasonal components can be considered as a set of parameters for dummy variables expanded from the seasonal categorical variable (e.g. the month of year variable). If we set \(\gamma=0\), thus making the seasonality deterministic, the ETS model can still be estimated even though all values of the variable are used. This becomes apparent with the conventional ETS model, for example, from the
forecast package for R:
```r
etsModel <- forecast::ets(AirPassengers, "AAA")
# Calculate determination coefficients for seasonal states
determ(etsModel$states[,-c(1:2)])
```
```
##        s1        s2        s3        s4        s5        s6        s7        s8
## 0.9999992 0.9999992 0.9999991 0.9999991 0.9999992 0.9999992 0.9999992 0.9999991
##        s9       s10       s11       s12
## 0.9999991 0.9999991 0.9999992 0.9999992
```
As we see, the states of the model are almost perfectly correlated, but the model still works and does not have the issues that the classical linear regression would have.
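For contrast, the dummy variables trap can be demonstrated directly on the same data in a regression setting. The construction below (extracting monthly dummies via `cycle()` and fitting them together with an intercept) is an illustrative sketch, not something done in the original text:

```r
# Illustration: an intercept plus all 12 monthly dummies is perfectly
# multicollinear, so OLS cannot identify one of the coefficients.
month <- factor(cycle(AirPassengers))      # month of year, 1..12
X <- model.matrix(~ 0 + month)             # all 12 dummy variables
fit <- lm(log(AirPassengers) ~ X)          # intercept + all dummies -> trap
tail(coef(fit), 1)                         # last coefficient is NA
```

Where the linear regression has to drop one of the dummies (R reports its coefficient as `NA`), the ETS model above retains all 12 seasonal states and remains estimable.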