We continue our discussion of error measures (if you don’t mind). Another thing that you encounter in forecasting experiments is tables containing several error measures (MASE, RMSSE, MAPE, etc.). Have you seen something like this? Well, this does not make sense, and here is why. The idea of reporting several error measures comes from […]
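For reference, the two scaled measures mentioned here are conventionally defined as follows (the standard definitions, quoted for convenience rather than taken from the post itself), where \(y_t\) is the actual value, \(\hat{y}_{T+j}\) is the \(j\)-steps-ahead forecast, \(h\) is the forecast horizon and \(T\) is the in-sample size:

\[ \mathrm{MASE} = \frac{\frac{1}{h}\sum_{j=1}^{h} |y_{T+j} - \hat{y}_{T+j}|}{\frac{1}{T-1}\sum_{t=2}^{T} |y_t - y_{t-1}|}, \qquad \mathrm{RMSSE} = \sqrt{\frac{\frac{1}{h}\sum_{j=1}^{h} (y_{T+j} - \hat{y}_{T+j})^2}{\frac{1}{T-1}\sum_{t=2}^{T} (y_t - y_{t-1})^2}}. \]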
What does “lower error measure” really mean?
“My amazing forecasting method has a lower MASE than any other method!” You’ve probably seen claims like this on social media or in papers. But have you ever thought about what it really means? Many forecasting experiments come down to applying several approaches to a dataset, calculating error measures for each method per time series and […]
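To make the setup concrete, here is a minimal sketch in Python (my own illustration under simplified assumptions, not code from the post): two naive benchmark methods are applied to 100 synthetic series, MASE is computed per series, and the per-series values are then aggregated. Note how the choice of the aggregate statistic (mean vs median) is part of the claim being made.

```python
import numpy as np

rng = np.random.default_rng(1)

def mase(insample, actual, forecast):
    # Scale the out-of-sample MAE by the in-sample MAE of the naive method
    scale = np.mean(np.abs(np.diff(insample)))
    return np.mean(np.abs(actual - forecast)) / scale

per_series = {"Naive": [], "Mean": []}
for _ in range(100):
    y = rng.normal(loc=100, scale=10, size=60)      # a synthetic series
    insample, holdout = y[:50], y[50:]
    per_series["Naive"].append(mase(insample, holdout, np.repeat(insample[-1], 10)))
    per_series["Mean"].append(mase(insample, holdout, np.repeat(insample.mean(), 10)))

# "Lower MASE" only means something relative to a chosen aggregate statistic
for name, values in per_series.items():
    print(f"{name}: mean MASE = {np.mean(values):.3f}, median MASE = {np.median(values):.3f}")
```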
The first draft of “Forecasting and Analytics with ADAM”
After working on this for more than a year, I have finally prepared the first draft of my online monograph “Forecasting and Analytics with ADAM”. This is a monograph on the model that unites ETS, ARIMA and regression, and introduces advanced features in univariate modelling, including: ETS in a new State Space form; ARIMA in […]
Error Measures Flow Chart
In order to help Master’s students of the Management Science department at Lancaster University, I have developed a flow chart that acts as a basic guide on which error measures to use in different circumstances. The flow chart is neither complete nor perfect, and it assumes that the decision maker knows what intermittent demand […]
Accuracy of forecasting methods: Can you tell the difference?
Previously we discussed how to measure the accuracy of point forecasts and the performance of prediction intervals in different cases. Now we look into the question of how to tell the difference between competing forecasting approaches. Let’s imagine a situation where we have four forecasting methods applied to 100 time series, with accuracy measured in terms of RMSSE: […]
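As a rough sketch of where that question leads (a hypothetical setup of my own, not the post’s example), one standard starting point is to rank the methods within each series and check whether the differences in mean ranks could have arisen by chance, for instance via the Friedman test:

```python
import numpy as np
from scipy import stats

# Hypothetical RMSSE values: 100 series (rows) x 4 methods (columns)
rng = np.random.default_rng(7)
rmsse = rng.lognormal(mean=0.0, sigma=0.3, size=(100, 4))
rmsse[:, 0] *= 0.95  # make the first method slightly better on average

# Rank methods within each series (1 = lowest error) and average the ranks
ranks = stats.rankdata(rmsse, axis=1)
print("Mean ranks:", ranks.mean(axis=0))

# Friedman test: can we tell the methods apart at all?
statistic, pvalue = stats.friedmanchisquare(*rmsse.T)
print(f"Friedman chi-squared = {statistic:.2f}, p-value = {pvalue:.3f}")
```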
Forecasting method vs forecasting model: what’s the difference?
If you work in the field of statistics, analytics, data science or forecasting, then you have probably already noticed that some of the instruments used in your field are called “methods”, while others are called “models”. The issue here is that the people using these terms usually know the distinction between them, […]
M-competitions, from M4 to M5: reservations and expectations
UPDATE: I have also written a short post on “The role of M competitions in forecasting”, which gives a historical perspective and a brief overview of the main findings of the previous competitions. Some of you might have noticed that the guidelines for the M5 competition have finally been released. Those of you who have previously […]
What about all those zeroes? Measuring performance of models on intermittent demand
In one of the previous posts, we discussed how to measure the accuracy of forecasting methods on continuous data. All these MAE, RMSE, MASE, RMSSE, rMAE, rRMSE and other measures can give you information about the mean or median performance of forecasting methods. We have also discussed how to measure the performance […]
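A tiny Python sketch (my own illustration, not taken from the post) of the core problem with zeroes: any measure that divides by the actual values, such as MAPE, becomes undefined on intermittent demand, while scaled measures such as MASE remain computable:

```python
import numpy as np

actual = np.array([0, 3, 0, 0, 1, 0, 2, 0])   # intermittent demand
forecast = np.full(actual.shape, 0.75)        # a flat demand-rate forecast

# MAPE divides by each actual value, so zero demands make it infinite
with np.errstate(divide="ignore"):
    mape = np.mean(np.abs((actual - forecast) / actual)) * 100
print(mape)  # inf

# MASE divides once by an in-sample scale instead (here, for brevity,
# the same series plays the role of the in-sample data)
scale = np.mean(np.abs(np.diff(actual)))
mase = np.mean(np.abs(actual - forecast)) / scale
print(round(mase, 3))  # a finite number
```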
How confident are you? Assessing the uncertainty in forecasting
Some people think that the main idea of forecasting is to predict the future as accurately as possible. I have bad news for them. The main idea of forecasting is to decrease the uncertainty. Think about it: any event that we want to predict has some systematic component \(\mu_t\), which could potentially be captured […]
Are you sure you’re precise? Measuring accuracy of point forecasts
Two years ago I wrote the post “Naughty APEs and the quest for the holy grail”, where I discussed why percentage-based error measures (such as MPE, MAPE, sMAPE) are not good for the task of forecasting performance evaluation. However, it seems to me that I did not explain the topic to the full […]
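A quick worked example of the asymmetry that post refers to (my illustration, not from the original text): the same absolute miss of 50 units produces a different absolute percentage error depending on which side of the actual value the forecast falls.

```python
def ape(actual, forecast):
    # Absolute percentage error for a single observation
    return abs(actual - forecast) / abs(actual) * 100

print(ape(actual=50, forecast=100))   # 100.0 -- over-forecasting by 50
print(ape(actual=100, forecast=50))   # 50.0  -- under-forecasting by 50
```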