
5.5 State space form of ETS

One of the main advantages of the ETS model is its state space form, which gives it flexibility. In this section, we will need to rely on linear algebra in order to understand how any ETS model can be presented in a compact state space form.

Hyndman et al. (2008) use the following general formulation of the model with the first equation called "measurement equation" and the second one "transition equation": \[\begin{equation} \begin{aligned} {y}_{t} = &w(\mathbf{v}_{t-1}) + r(\mathbf{v}_{t-1}) \epsilon_t \\ \mathbf{v}_{t} = &f(\mathbf{v}_{t-1}) + g(\mathbf{v}_{t-1}) \epsilon_t \end{aligned}, \tag{5.21} \end{equation}\]

where \(\mathbf{v}_t\) is the state vector, containing the components of the series (level, trend and seasonal), \(w(\cdot)\) is the measurement function, \(r(\cdot)\) is the error function, \(f(\cdot)\) is the transition function and \(g(\cdot)\) is the persistence function. Depending on the types of components, these functions take different forms:

  1. Depending on the types of trend and seasonality, \(w(\mathbf{v}_{t-1})\) will be equal either to the addition or to the multiplication of components. The special cases were presented in Tables 4.1 and 4.2 in the ETS Taxonomy section. For example, in the case of ETS(M,M,M) it is: \(w(\mathbf{v}_{t-1}) = l_{t-1} b_{t-1} s_{t-m}\);
  2. If the error is additive, then \(r(\mathbf{v}_{t-1})=1\); otherwise (in the case of multiplicative error) it is \(r(\mathbf{v}_{t-1})=w(\mathbf{v}_{t-1})\);
  3. The transition function produces values that depend on the types of trend and seasonality and corresponds to the first parts of the transition equations in Tables 4.1 and 4.2 (dropping the error term). This function captures how the components interact with each other and how they change from one observation to another (hence the term "transition"). For example, in the ETS(M,M,M) model the transition function produces three values: \(l_{t-1}b_{t-1}\), \(b_{t-1}\) and \(s_{t-m}\), respectively for the level, trend and seasonal components. So, if we drop the persistence function \(g(\cdot)\) and the error term \(\epsilon_t\) for a moment, the second equation in (5.21) in this case becomes: \[\begin{equation} \begin{aligned} {l}_{t} = &l_{t-1}b_{t-1} \\ b_t = &b_{t-1} \\ s_t = &s_{t-m} \end{aligned}, \tag{5.22} \end{equation}\]
  4. Finally, the persistence function differs from one model to another, but in some special cases it is either \(g(\mathbf{v}_{t-1})=\mathbf{g}\), if the error term is additive, or \(g(\mathbf{v}_{t-1})=f(\mathbf{v}_{t-1})\mathbf{g}\), if it is multiplicative. Here \(\mathbf{g}\) is the vector of smoothing parameters, called the "persistence vector" in the ETS context. For example, in the ETS(M,M,M) model the persistence function produces: \(l_{t-1}b_{t-1}\alpha\), \(b_{t-1}\beta\) and \(s_{t-m}\gamma\), respectively for the level, trend and seasonal components. Uniting this with the transition function from point (3), we get the equations from Table 4.2: \[\begin{equation} \begin{aligned} {l}_{t} = &l_{t-1}b_{t-1} (1+\alpha\epsilon_t)\\ b_t = &b_{t-1} (1+\beta\epsilon_t)\\ s_t = &s_{t-m} (1+\gamma\epsilon_t) \end{aligned}, \tag{5.23} \end{equation}\]
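The four points above can be sketched as a simulation. The following is a minimal illustration (not the book's implementation) of the general recursion (5.21), instantiated for ETS(M,M,M): the functions `w`, `r`, `f` and `g` mirror the measurement, error, transition and persistence functions, and the seasonal period and smoothing parameter values are illustrative assumptions.

```python
import numpy as np

m = 4                                   # seasonal period (assumed)
alpha, beta, gamma = 0.3, 0.1, 0.2      # persistence vector g (assumed values)

def w(v):
    # measurement function: l_{t-1} * b_{t-1} * s_{t-m}
    return v[0] * v[1] * v[-1]          # v[-1] holds the oldest seasonal state

def r(v):
    # error function: equals w(v) for the multiplicative error
    return w(v)

def f(v):
    # transition function: (l_{t-1} b_{t-1}, b_{t-1}, s_{t-m})
    return np.array([v[0] * v[1], v[1], v[-1]])

def g(v):
    # persistence function: f(v) elementwise with (alpha, beta, gamma)
    return f(v) * np.array([alpha, beta, gamma])

def simulate(v0, seasonal0, epsilon):
    """Iterate (5.21): y_t = w(v) + r(v) e_t; v_t = f(v) + g(v) e_t."""
    level, trend = v0
    seasonal = list(seasonal0)          # s_{t-1}, ..., s_{t-m}
    y = []
    for e in epsilon:
        v = np.array([level, trend] + seasonal)
        y.append(w(v) + r(v) * e)
        new = f(v) + g(v) * e
        level, trend = new[0], new[1]
        seasonal = [new[2]] + seasonal[:-1]   # newest seasonal state in front
    return np.array(y)

y = simulate((100.0, 1.01), [1.05, 0.95, 1.1, 0.9],
             np.random.default_rng(41).normal(0, 0.05, 20))
```

With all errors set to zero, the recursion reproduces the deterministic path (5.22): the level grows by the trend factor and the seasonal states rotate with period \(m\).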

The compact form (5.21) is thus convenient to work with and underlies all the 30 ETS models discussed in Sections 5.1 and 5.2. Unfortunately, it cannot be used directly for the derivation of conditional values, so it serves mainly for the general understanding of ETS.
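To see how the same general form covers an additive model as well, here is a one-step sketch for ETS(A,A,A): per points (2) and (4) above, \(w(\cdot)\) becomes a sum, \(r(\cdot)=1\) and \(g(\cdot)\) is just the persistence vector. The parameter values are, again, illustrative assumptions.

```python
alpha, beta, gamma = 0.3, 0.1, 0.2   # persistence vector g (assumed values)

def step(level, trend, s_lag, eps):
    """One iteration of (5.21) for ETS(A,A,A); s_lag plays the role of s_{t-m}."""
    w = level + trend + s_lag            # measurement function: a sum
    y = w + 1.0 * eps                    # r(v) = 1 for the additive error
    f = (level + trend, trend, s_lag)    # transition function
    # g(v) = g: states move by the persistence vector times the error
    new_level = f[0] + alpha * eps
    new_trend = f[1] + beta * eps
    new_season = f[2] + gamma * eps
    return y, new_level, new_trend, new_season

y, l, b, s = step(100.0, 2.0, -5.0, 1.0)
```

The only changes relative to the multiplicative sketch are the forms of the four functions; the recursion itself is identical, which is what makes (5.21) general.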

Several special cases of ETS models and the respective forms of the functions will be discussed in the next chapters in the context of ADAM ETS. The most useful and important cases are the pure additive and pure multiplicative ETS models, which can then be formulated in a form that allows deriving the conditional expectation and variance.


Hyndman, Rob J., Anne B. Koehler, J. Keith Ord, and Ralph D. Snyder. 2008. Forecasting with Exponential Smoothing. Springer Berlin Heidelberg.