We continue our discussion of error measures (if you don’t mind). One other thing that you encounter in forecasting experiments is tables containing several error measures (MASE, RMSSE, MAPE, etc.). Have you seen something like this? Well, this does not make sense, and here is why. The idea of reporting several error measures comes from […]
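As a quick taste of the issue, here is a minimal sketch in base R (an artificial series and a naive forecast, purely my illustration, not the post's code) computing three such measures side by side. The catch previewed above: MASE is minimised by the median, RMSSE by the mean, so averaging conclusions across a table of them mixes incompatible targets.

```r
# A minimal sketch: three popular error measures for one naive forecast.
# The series and the forecast are artificial, just for illustration.
set.seed(41)
y <- 100 + cumsum(rnorm(120))        # an artificial series
train <- y[1:100]; test <- y[101:120]
fc <- rep(tail(train, 1), 20)        # naive forecast, 20 steps ahead

e <- test - fc                       # forecast errors
scaleMAE <- mean(abs(diff(train)))   # in-sample one-step naive MAE
scaleMSE <- mean(diff(train)^2)      # in-sample one-step naive MSE

MAPE  <- mean(abs(e / test)) * 100
MASE  <- mean(abs(e)) / scaleMAE
RMSSE <- sqrt(mean(e^2) / scaleMSE)
c(MAPE = MAPE, MASE = MASE, RMSSE = RMSSE)
```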
Multi-step Estimators and Shrinkage Effect in Time Series Models
Authors: Ivan Svetunkov, Nikos Kourentzes, Rebecca Killick
Journal: Computational Statistics
Abstract: Many modern statistical models are used for both insight and prediction when applied to data. When models are used for prediction one should optimise parameters through a prediction error loss function. Estimation methods based on multiple steps ahead forecast errors have been shown to […]
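To make the idea of a multi-step loss concrete, here is a hedged toy sketch in R (not the paper's code, and AR(1) is my simplification): the h-step point forecast of an AR(1) from origin t is \(\phi^h y_t\), so one can estimate \(\phi\) by minimising the squared errors over all horizons up to H rather than only one step ahead.

```r
# A toy multi-step estimator for an AR(1) coefficient (my illustration):
# minimise the sum of squared 1..H-step-ahead in-sample forecast errors.
set.seed(42)
y <- as.numeric(arima.sim(list(ar = 0.7), n = 300))
H <- 5

multistepSSE <- function(phi, y, H) {
  n <- length(y)
  sse <- 0
  for (h in 1:H) {
    e <- y[(1 + h):n] - phi^h * y[1:(n - h)]  # h-step errors from each origin
    sse <- sse + sum(e^2)
  }
  sse
}

phiMulti <- optimize(multistepSSE, c(-0.99, 0.99), y = y, H = H)$minimum
phiOne   <- cor(y[-1], y[-length(y)])  # rough one-step (LS-style) estimate
c(multistep = phiMulti, onestep = phiOne)
```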
Accuracy of forecasting methods: Can you tell the difference?
Previously we discussed how to measure the accuracy of point forecasts and the performance of prediction intervals in different cases. Now we look into the question of how to tell the difference between competing forecasting approaches. Let’s imagine a situation where we have four forecasting methods applied to 100 time series, with accuracy measured in terms of RMSSE: […]
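A minimal sketch of that setup in base R (the RMSSE values below are simulated, purely to show the mechanics): rank the methods within each series, look at mean ranks, and test whether the methods differ at all before drawing conclusions.

```r
# Simulated RMSSE values for four methods on 100 series (fabricated data).
set.seed(7)
rmsse <- cbind(A = rexp(100, 1),
               B = rexp(100, 0.9),
               C = rexp(100, 1.1),
               D = rexp(100, 1))
ranks <- t(apply(rmsse, 1, rank))  # rank the methods within each series
colMeans(ranks)                    # mean rank per method (lower is better)
# An overall test of whether the methods differ; post-hoc procedures
# (e.g. the Nemenyi test) can then show which pairs actually differ.
friedman.test(rmsse)
```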
Forecasting method vs forecasting model: what’s the difference?
If you work in the field of statistics, analytics, data science or forecasting, then you have probably already noticed that some of the instruments used in your field are called “methods”, while others are called “models”. The issue here is that the people using these terms usually know the distinction between them, […]
What about all those zeroes? Measuring performance of models on intermittent demand
In one of the previous posts, we discussed how to measure the accuracy of forecasting methods on continuous data. All these MAE, RMSE, MASE, RMSSE, rMAE, rRMSE and other measures can give you information about the mean or median performance of forecasting methods. We have also discussed how to measure the performance […]
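The core of the problem can be shown in a few lines of R (an artificial intermittent series, my illustration rather than the post's code): MAE is minimised by the median, which is zero for intermittent data, so a flat zero forecast "wins" under MAE even though it is useless, while MSE is minimised by the mean.

```r
# Why point-error measures mislead on intermittent demand (toy example).
set.seed(5)
demand <- rbinom(1000, 1, 0.3) * rpois(1000, 3)  # ~70% zeroes
maeOf <- function(f) mean(abs(demand - f))
mseOf <- function(f) mean((demand - f)^2)
c(maeZero = maeOf(0), maeMean = maeOf(mean(demand)))  # zero "wins" on MAE
c(mseZero = mseOf(0), mseMean = mseOf(mean(demand)))  # the mean wins on MSE
```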
Are you sure you’re precise? Measuring accuracy of point forecasts
Two years ago I wrote a post, “Naughty APEs and the quest for the holy grail”, where I discussed why percentage-based error measures (such as MPE, MAPE and sMAPE) are not good for the task of forecasting performance evaluation. However, it seems to me that I did not explain the topic to the full […]
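One facet of the problem in a few lines of R (numbers are mine, not the post's): because the actual value sits in the denominator, the same absolute miss produces wildly different APEs, and an under-forecast can never score worse than 100%, while an over-forecast is unbounded.

```r
# Absolute Percentage Error for the same absolute miss of 50 units.
ape <- function(actual, fc) abs(actual - fc) / actual
ape(100, 50)   # under-forecast by 50: APE = 0.5
ape(100, 150)  # over-forecast by 50:  APE = 0.5
ape(10, 60)    # the same miss on a small actual: APE = 5
ape(10, 0)     # the worst possible under-forecast: APE capped at 1
```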
Comparing additive and multiplicative regressions using AIC in R
One of the basic things students are taught in statistics classes is that models can only be compared using information criteria when they have the same response variable. This means, for example, that when you model \(\log(y_t)\) and calculate AIC, this value is not comparable with the AIC from a […]
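A hedged sketch of the underlying trick (the post derives it properly; the data below are simulated): the log-model's likelihood can be translated back to the scale of \(y_t\) via the Jacobian of the transformation, which amounts to adding \(2 \sum_t \log(y_t)\) to its AIC, after which the two values are comparable.

```r
# Comparing an additive and a multiplicative (log) regression via AIC.
set.seed(3)
x <- runif(100, 1, 10)
y <- exp(1 + 0.3 * x + rnorm(100, 0, 0.2))  # truly multiplicative data

additive       <- lm(y ~ x)
multiplicative <- lm(log(y) ~ x)

AIC(additive)
AIC(multiplicative) + 2 * sum(log(y))  # Jacobian-adjusted, now comparable
```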
Lecture at HSE, Saint Petersburg
Yesterday I gave a lecture to the master students of the Higher School of Economics, Saint Petersburg (“Marketing Analytics” programme). This was a very general lecture on “Modern Forecasting”, covering forecasting problems in practice, solutions to these problems and modern research directions in the field. It seems that the lecture was well received and brought up […]
Naughty APEs and the quest for the holy grail
Today I want to tell you a story of naughty APEs and the quest for the holy grail in forecasting. The topic has been known in academia for a while, but is widely ignored by practitioners. APE stands for Absolute Percentage Error and is one of the simplest error measures, which is supposed to […]
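A toy simulation previewing where the story goes (the numbers and the lognormal assumption are mine): the forecast that minimises MAPE on skewed positive data sits well below both the median and the mean, i.e. optimising MAPE rewards systematic under-forecasting.

```r
# Which forecast minimises MAPE on skewed positive data?
set.seed(1)
y <- rlnorm(10000, meanlog = 2, sdlog = 1)   # artificial skewed demand
mapeOf <- function(f) mean(abs(y - f) / y)
candidates <- seq(1, 30, by = 0.1)
best <- candidates[which.min(sapply(candidates, mapeOf))]
c(mapeOptimal = best, median = median(y), mean = mean(y))
# The MAPE-optimal value lands far below the median and the mean.
```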
19th IIF Workshop presentation
The IIF workshop “Supply Chain Forecasting for Operations” took place at Lancaster University on the 28th and 29th of June. I gave a presentation on a topic that John Boylan and I are currently working on. We suggest a universal statistical model that allows uniting standard forecasting methods (for example, for fast-moving products) […]