There is no such thing as an “assumption-free approach”

Spherical unicorn in a vacuum

One thing that bothers me when I read posts on social media or papers in peer-reviewed journals is the claim that a proposed approach is “assumption-free.” In forecasting, this is never true. Such an approach is like a spherical unicorn in a vacuum (see image above). Here’s why.

Every model is a simplification of reality, meaning that it captures only a part of it. Simplifying implies that certain aspects of reality are deemed irrelevant and can be ignored. For example, in forecasting, regardless of the approach used, we typically assume that the model captures the structure correctly, i.e. it neither omits important elements nor overfits the data. Different approaches address this differently: statistical models state this assumption explicitly, while a good ML approach seeks a balance between underfitting and overfitting, often in a non-linear way. When the structure is captured correctly, the forecast reflects the essential part of reality while ignoring small random fluctuations (see a post on structure vs. noise).

Depending on the assumptions we make, we can classify approaches as parametric, semiparametric, or nonparametric.

Parametric approaches assume that the model is correctly specified, its parameters are accurately estimated, and the chosen distribution is appropriate (often the normal one, though others can be used). In this case, we fully rely on the model. A classical example is the construction of conventional prediction intervals: the conditional expectation and variance are calculated and plugged into the normal distribution to derive the necessary quantiles for a specified confidence level. Specifically, in this case we assume that the model is correct, that the errors are uncorrelated and homoscedastic, and that they follow a normal distribution.
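
To make this concrete, here is a minimal sketch of the parametric case in Python. The numbers and the one-step-ahead setting are purely illustrative assumptions on my part, not taken from any specific model; the point is simply that the conditional mean and variance are plugged straight into the normal quantiles.

```python
# Parametric prediction interval: plug the conditional mean and variance
# into the normal distribution. All values here are illustrative assumptions.
import numpy as np
from scipy import stats

point_forecast = 105.0   # conditional expectation from some fitted model
sigma2 = 4.0             # estimated error variance, errors assumed i.i.d. normal

level = 0.95
z = stats.norm.ppf(1 - (1 - level) / 2)   # quantile of the standard normal

lower = point_forecast - z * np.sqrt(sigma2)
upper = point_forecast + z * np.sqrt(sigma2)
print(f"{level:.0%} prediction interval: [{lower:.2f}, {upper:.2f}]")
```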

Semiparametric approaches relax some of these assumptions: we might calculate statistics in a more robust manner or drop the assumption of a specific distribution. For example, instead of relying on textbook formulae, we could use in-sample multistep forecast errors to calculate conditional variances. This eliminates the need to assume uncorrelated and homoscedastic errors and allows for some flexibility in the model structure. However, in this example, we still rely on normality.
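
As a rough illustration of this idea, the sketch below collects in-sample h-step-ahead errors over rolling origins and uses their standard deviation per horizon, still plugging the result into normal quantiles. The naive forecast, the rolling-origin scheme and the simulated series are my own illustrative assumptions, not part of the approach itself.

```python
# Semiparametric sketch: estimate the h-step error variance from in-sample
# multistep forecast errors instead of a textbook formula; keep normality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
y = np.cumsum(rng.normal(size=200)) + 100   # illustrative series

h_max = 5
errors = {h: [] for h in range(1, h_max + 1)}

# Collect in-sample multistep errors of a naive (last observed value) forecast
for origin in range(50, len(y) - h_max):
    forecast = y[origin]
    for h in range(1, h_max + 1):
        errors[h].append(y[origin + h] - forecast)

level = 0.95
z = stats.norm.ppf(1 - (1 - level) / 2)
last_value = y[-1]
for h in range(1, h_max + 1):
    sigma_h = np.std(errors[h], ddof=1)      # empirical h-step error std
    print(f"h={h}: [{last_value - z * sigma_h:.2f}, {last_value + z * sigma_h:.2f}]")
```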

Nonparametric approaches avoid most of the above assumptions but come with their own hidden ones. For instance, the method proposed by Taylor & Bunn (1999) for constructing prediction intervals fits quantile regressions to in-sample multistep forecast errors. This method does not assume a correct model, well-behaved residuals, or normality. However, it does assume the appropriateness of the chosen quantile regression function (Spoiler: they used polynomial regression, but my experiments suggest that a power function is a more robust alternative).
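
For illustration, here is a rough sketch in the spirit of that approach: quantile regressions of in-sample multistep errors on the forecast horizon. I use a simple polynomial in the horizon via statsmodels; the simulated errors and the exact specification are my assumptions for demonstration, not a faithful reproduction of Taylor & Bunn (1999).

```python
# Nonparametric sketch: quantile regression of multistep forecast errors on
# the horizon. The simulated errors and polynomial form are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Illustrative h-step forecast errors whose spread grows with the horizon
rows = [{"h": h, "error": e}
        for h in range(1, 11)
        for e in rng.normal(scale=np.sqrt(h), size=100)]
errors = pd.DataFrame(rows)

# Quantile regressions of the errors on a polynomial in the horizon
model = smf.quantreg("error ~ h + I(h ** 2)", errors)
new_h = pd.DataFrame({"h": np.arange(1, 11)})
lower = np.asarray(model.fit(q=0.025).predict(new_h))
upper = np.asarray(model.fit(q=0.975).predict(new_h))

# These error quantiles would be added to the point forecast for each horizon
for h in range(1, 11):
    print(f"h={h}: error bounds [{lower[h - 1]:.2f}, {upper[h - 1]:.2f}]")
```

The resulting error quantiles are then added to the point forecasts to form the bounds of the interval for each horizon, with no normality or correct-model assumption, but with an implicit assumption about the shape of the quantile regression function.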

You might think that nonparametric approaches, with fewer assumptions, should always be preferred. But that’s not necessarily the case. It is “horses for courses”: you should select the approach that best fits your specific situation. For example, when working with small samples, introducing some assumptions might be necessary to get meaningful estimates. A nonparametric approach, while powerful, might require more data than you have available.

Finally, there is no such thing as a “best” method for every situation. As is often the case in forecasting, you need to try different approaches and choose the one that works best. Even then, remember that forecasting always rests on a fundamental assumption: the future will resemble the past. And no fancy method can guarantee that this assumption will hold.
