Another question my students sometimes ask is how to choose the sizes of the training and test sets in a forecasting experiment. If you’ve done data mining or machine learning, you’re likely familiar with this concept. But when it comes to forecasting, there are a few nuances. Let’s discuss. First and foremost, in forecasting, the […]
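To make the main nuance concrete, here is a minimal sketch (my own toy illustration, not a recipe from the post): forecasting data are ordered in time, so the test set has to be the last chunk of the series rather than a random sample, and its length is usually tied to the forecast horizon (the h = 12 below is an arbitrary choice).

```python
# A minimal sketch of a forecasting train/test split: hold out the LAST
# h observations, never a random shuffle, to preserve temporal order.
import numpy as np

# A toy time series of 120 observations
y = np.sin(np.arange(120) / 6) + np.random.default_rng(1).normal(0, 0.1, 120)

h = 12                         # the horizon you actually forecast at
train, test = y[:-h], y[-h:]   # test set = the final h observations

print(len(train), len(test))   # 108 12
```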
Straight line is just fine
Look at the image above. Which forecast seems more appropriate: the red straight line (1) or the purple wavy line (2)? Many demand planners might choose option 2, thinking that it better captures the ups and downs. But in many cases, the straight line is just fine. Here’s why. In a previous post on Structure vs. […]
Point Forecast Evaluation: State of the Art
I have summarised several posts on point forecast evaluation in an article for the Foresight journal. Mike Gilliland, the Editor-in-Chief of the journal, contributed a great deal to the paper, making it read much more smoothly, but preferred not to be listed as a co-author. The article was recently published in issue 74 (Q3 2024). […]
Don’t forget about bias!
So far, we’ve discussed forecast evaluation, focusing on the precision of point forecasts. However, there are many other dimensions of evaluation that can provide useful information about your model’s performance. One of them is bias, which we’ll explore today. But before that, why should we bother with bias? Research suggests that bias is […]
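As a minimal sketch of why bias deserves its own measure (toy numbers, my own illustration): two forecasts can have the same MAE, yet only the Mean Error (ME) reveals that one of them systematically under-forecasts.

```python
# Two forecasts with the SAME MAE; only ME exposes the systematic one.
import numpy as np

actuals  = np.array([100, 105,  98, 110, 102], dtype=float)
biased   = np.array([ 95, 100,  94, 104,  97], dtype=float)  # always too low
balanced = np.array([105, 100, 103, 105,  97], dtype=float)  # errors cancel

for name, f in [("biased", biased), ("balanced", balanced)]:
    e = actuals - f
    print(f"{name}: ME = {e.mean():+.1f}, MAE = {np.abs(e).mean():.1f}")
# biased:   ME = +5.0, MAE = 5.0   -> systematic under-forecast
# balanced: ME = +1.0, MAE = 5.0   -> same precision, almost no bias
```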
Best practice for forecast evaluation for business
One question I received from my LinkedIn followers was how to evaluate forecast accuracy in practice. MAPE is wrong, but it is easy to use. In practice, we want something simple, informative and straightforward, but not all error measures are easy to calculate and interpret. What should we do? Here is my subjective view. Step […]
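The post’s own steps are cut off here, but as one hedged illustration of a measure that is both simple and informative, here is a sketch of relative MAE: the MAE of your method divided by the MAE of a naive benchmark. All numbers are invented, and this is not necessarily the exact recipe the post recommends.

```python
# Relative MAE: values below 1 mean the method beats the naive forecast.
import numpy as np

actuals = np.array([120, 115, 130, 125, 140], dtype=float)
method  = np.array([118, 117, 128, 127, 136], dtype=float)
naive   = np.full(5, 122.0)   # e.g. the last in-sample observation

rel_mae = np.abs(actuals - method).mean() / np.abs(actuals - naive).mean()
print(f"rMAE = {rel_mae:.2f}")  # ~0.32, i.e. clearly better than naive
```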
Avoid using MAPE!
Frankly speaking, I didn’t see the point in discussing MAPE when I wrote my recent posts on error measures. However, I’ve received several comments and messages from data scientists and demand planners asking for clarification. So, here it is. TL;DR: Avoid using MAPE! MAPE, or Mean Absolute Percentage Error, is a still-very-popular-in-practice error measure, which is […]
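To see the problem in numbers, here is a minimal sketch (toy data, my own illustration) of one of MAPE’s well-known failure modes: a single actual value close to zero dominates the whole measure.

```python
# MAPE blows up when actuals are near zero.
import numpy as np

def mape(actuals, forecasts):
    return np.mean(np.abs((actuals - forecasts) / actuals)) * 100

a = np.array([100.0, 100.0, 2.0])   # note the near-zero actual
f = np.array([ 90.0, 110.0, 12.0])  # every error has |error| = 10

print(f"MAPE = {mape(a, f):.0f}%")  # ~173%, dominated by the last point,
                                    # which alone contributes 500%
```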
Stop reporting several error measures just for the sake of them!
We continue our discussion of error measures (if you don’t mind). Another thing you encounter in forecasting experiments is a table containing several error measures (MASE, RMSSE, MAPE, etc.). Have you seen something like this? Well, it does not make sense, and here is why. The idea of reporting several error measures comes from […]
What does “lower error measure” really mean?
“My amazing forecasting method has a lower MASE than any other method!” You’ve probably seen claims like this on social media or in papers. But have you ever thought about what it really means? Many forecasting experiments come down to applying several approaches to a dataset, calculating error measures for each method per time series and […]
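The step usually hidden in such claims is the aggregation across series. Here is a minimal sketch (the MASE values are invented) showing that the “winner” can flip depending on whether you aggregate with the mean or the median.

```python
# "Lower MASE" depends on how per-series values are aggregated.
import numpy as np

mase_A = np.array([0.80, 0.90, 0.85, 5.00])  # great on most, one blow-up
mase_B = np.array([1.10, 1.10, 1.10, 1.10])  # consistently mediocre

for agg in (np.mean, np.median):
    print(f"{agg.__name__}: A = {agg(mase_A):.3f}, B = {agg(mase_B):.3f}")
# mean:   A = 1.888, B = 1.100  -> B "wins"
# median: A = 0.875, B = 1.100  -> A "wins"
```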
Error Measures Flow Chart
In order to help master’s students of the Lancaster University Management Science department, I have developed a flow chart that acts as a basic guide on which error measures to use in different circumstances. The flow chart is not complete and is far from perfect, and it assumes that the decision maker knows what intermittent demand […]
Accuracy of forecasting methods: Can you tell the difference?
Previously, we discussed how to measure the accuracy of point forecasts and the performance of prediction intervals in different cases. Now we look into the question of how to tell the difference between competing forecasting approaches. Let’s imagine a situation where we have four forecasting methods applied to 100 time series, with accuracy measured in terms of RMSSE: […]
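As a rough sketch of the kind of check this calls for (using a Friedman test as a stand-in; the post may well recommend a different procedure), here is how one could test whether RMSSE differences across four methods on 100 series are statistically distinguishable at all. The RMSSE values below are simulated.

```python
# Before declaring a "winner", check whether the methods differ at all.
# The Friedman test ranks methods within each series and asks whether
# the rank differences could plausibly be due to chance.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(42)
rmsse = rng.gamma(shape=4.0, scale=0.25, size=(100, 4))  # 100 series x 4 methods
rmsse += np.array([0.00, 0.02, 0.05, 0.10])              # small built-in differences

stat, p = friedmanchisquare(*rmsse.T)  # one array of 100 values per method
print(f"Friedman chi2 = {stat:.2f}, p-value = {p:.3f}")
# A large p-value would mean the methods are statistically indistinguishable.
```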