Not every pattern that appears seasonal is genuinely seasonal, which means you don’t always need a seasonal model when you see a repetitive pattern with fixed periodicity. How come? First things first: in forecasting, the term “seasonality” refers to any natural pattern repeating with some periodicity. For example, if you work in a hospital with A&E […]
Don’t forget about bias!
So far, we’ve discussed forecast evaluation, focusing on the precision of point forecasts. However, there are other dimensions of evaluation that can provide useful information about your model’s performance. One of them is bias, which we’ll explore today. But before that, why should we bother with bias at all? Research suggests that bias is […]
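As context for the excerpt above: the most common way to quantify the bias of point forecasts is the Mean Error over the forecast horizon (the post itself may use a different or scaled variant):

\[
\text{ME} = \frac{1}{h} \sum_{j=1}^{h} \left( y_{t+j} - \hat{y}_{t+j} \right)
\]

With this sign convention, positive values indicate systematic under-forecasting and negative values indicate systematic over-forecasting.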
What is “forecasting”?
What is “forecasting”? Many people will have a ready answer to this question, but I would argue that few have spent enough time actually thinking about it. Shall we spend a couple of minutes today doing exactly that? Straight to the point: my answer comes down to the following definition: […]
Best practice for forecasts evaluation for business
One question I received from my LinkedIn followers was how to evaluate forecast accuracy in practice. MAPE is flawed, but it is easy to use. In practice, we want something simple, informative and straightforward, yet not all error measures are easy to calculate and interpret. What should we do? Here is my subjective view. Step […]
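The steps themselves are truncated above, but purely as an illustration of the kind of evaluation the post discusses (a sketch, not necessarily the exact procedure recommended there), here is a minimal holdout comparison against a naive benchmark in Python:

```python
import numpy as np

def naive_forecast(train, horizon):
    # Simplest possible benchmark: repeat the last observed value.
    return np.full(horizon, train[-1])

def mase(actual, forecast, train, m=1):
    # Mean Absolute Scaled Error (Hyndman & Koehler, 2006):
    # absolute forecast errors scaled by the mean in-sample
    # error of the (seasonal) naive method with lag m.
    scale = np.mean(np.abs(train[m:] - train[:-m]))
    return np.mean(np.abs(actual - forecast)) / scale

# Toy data: a hypothetical random-walk-like series, 12-step holdout.
rng = np.random.default_rng(42)
y = 100 + np.cumsum(rng.normal(0, 5, size=120))
train, test = y[:-12], y[-12:]

benchmark = naive_forecast(train, horizon=12)
print(f"MASE of the naive benchmark: {mase(test, benchmark, train):.3f}")
```

A MASE below 1 means the forecasts beat the average in-sample naive error; if your sophisticated method cannot do noticeably better than this benchmark, that is a warning sign.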
Avoid using MAPE!
Frankly speaking, I didn’t see the point in discussing MAPE when I wrote my recent posts on error measures. However, I’ve received several comments and messages from data scientists and demand planners asking for clarification. So, here it is. TL;DR: Avoid using MAPE! MAPE, or Mean Absolute Percentage Error, is an error measure that is still very popular in practice, which is […]
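As a quick reminder before reading on, MAPE is conventionally defined over a forecast horizon h as

\[
\text{MAPE} = \frac{100}{h} \sum_{j=1}^{h} \frac{\left| y_{t+j} - \hat{y}_{t+j} \right|}{\left| y_{t+j} \right|}
\]

Its best-known flaws follow directly from the formula: it is undefined (or explodes) when actual values are zero or near zero, and it penalises over-forecasting more heavily than under-forecasting, since the percentage error of an under-forecast is bounded by 100% while that of an over-forecast is unbounded.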
Detecting patterns in white noise
Back in 2015, when I was working on my paper on Complex Exponential Smoothing, I conducted a simple simulation experiment to check how ARIMA and ETS select components/orders in time series. And I found something interesting… One of the important steps in forecasting with statistical models is identifying the existing structure. In the case of […]
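The original experiment used ARIMA and ETS; as a rough re-creation of the idea (a sketch assuming AIC-based order selection, not the author’s actual code), one can check how often an ARMA order other than (0, 0) gets selected on pure white noise:

```python
import warnings
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

warnings.filterwarnings("ignore")  # ARMA fits on noise often emit warnings

rng = np.random.default_rng(123)
n_series, false_structure = 20, 0

for _ in range(n_series):
    y = rng.normal(0, 1, size=100)  # pure white noise: nothing to find
    best_order, best_aic = (0, 0), np.inf
    for p in range(3):
        for q in range(3):
            aic = ARIMA(y, order=(p, 0, q)).fit().aic
            if aic < best_aic:
                best_order, best_aic = (p, q), aic
    if best_order != (0, 0):
        false_structure += 1  # the selector "detected" a pattern

print(f"Spurious structure selected in {false_structure}/{n_series} series")
```

Any non-zero count here is structure found where, by construction, there is none.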
Stop reporting several error measures just for the sake of them!
We continue our discussion of error measures (if you don’t mind). Another thing you often encounter in forecasting experiments is a table containing several error measures (MASE, RMSSE, MAPE, etc.). Have you seen something like this? Well, this does not make sense, and here is why. The idea of reporting several error measures comes from […]
What does “lower error measure” really mean?
“My amazing forecasting method has a lower MASE than any other method!” You’ve probably seen claims like this on social media or in papers. But have you ever thought about what it really means? Many forecasting experiments boil down to applying several approaches to a dataset, calculating error measures for each method on each time series, and […]
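The excerpt is cut off at the aggregation step, but a tiny numeric example shows why a single “lower MASE” headline needs unpacking (hypothetical numbers, purely for illustration):

```python
import numpy as np

# Hypothetical per-series MASE values for two methods on five series.
mase_a = np.array([0.1, 1.1, 1.1, 1.1, 1.1])  # one lucky series
mase_b = np.array([1.0, 1.0, 1.0, 1.0, 1.0])  # consistently average

print("mean:  ", mase_a.mean(), "vs", mase_b.mean())          # 0.90 vs 1.00
print("median:", np.median(mase_a), "vs", np.median(mase_b))  # 1.10 vs 1.00
```

Method A has the lower mean, driven entirely by one series, while Method B is better on the typical series; which one is “more accurate” depends on how the per-series errors are aggregated.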
What’s wrong with ARIMA?
Have you heard of ARIMA? It is one of the benchmark forecasting models used in many academic experiments, although it is not always popular among practitioners. But why? What’s wrong with ARIMA? ARIMA has been a standard forecasting model in statistics for ages. It gained popularity with the famous Box & Jenkins (1970) book and […]
The role of M competitions in forecasting
If you are interested in forecasting, you might have heard of the M-competitions. They played a pivotal role in developing forecasting principles, yet also sparked controversy. In this short post, I’ll briefly explain their historical significance and discuss their main findings. Before the M-competitions, only a few papers properly evaluated forecasting approaches. Statisticians assumed that if a model […]