Since the previous post on “The Creation of ADAM”, I had difficulties finding time to code anything, but I still managed to fix some bugs, implement a couple of features and make changes important enough to call the next version of the smooth package “3.1.0”. Here is what’s new: A new algorithm for ARIMA order selection […]

# The creation of ADAM – next step in statistical forecasting

Good news everyone! The future of statistical forecasting is finally here :). Have you ever struggled with ETS and needed explanatory variables? Have you ever needed to unite ARIMA and ETS? Have you ever needed to deal with all those zeroes in the data? What about the data with multiple seasonalities? All of this and […]

# Accuracy of forecasting methods: Can you tell the difference?

Previously we discussed how to measure the accuracy of point forecasts and the performance of prediction intervals in different cases. Now we look into the question of how to tell the difference between competing forecasting approaches. Let’s imagine a situation where we have four forecasting methods applied to 100 time series, with accuracy measured in terms of RMSSE: […]
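To make the setup concrete, here is a minimal sketch (in Python rather than R, with simulated RMSSE values standing in for real ones) of ranking four hypothetical methods on each of 100 series and summarising their mean ranks — the kind of summary such a comparison typically starts from:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical RMSSE values: 100 series x 4 methods (made-up numbers,
# purely for illustration; lower RMSSE is better).
rmsse = np.abs(rng.normal(loc=[1.0, 1.05, 1.1, 1.2], scale=0.2, size=(100, 4)))

# Rank the methods on each series (1 = best, i.e. lowest RMSSE).
# Double argsort turns values into ranks; ties are practically
# impossible with continuous values.
ranks = rmsse.argsort(axis=1).argsort(axis=1) + 1

# Mean rank of each method across the 100 series.
mean_ranks = ranks.mean(axis=0)
print(mean_ranks)
```

Mean ranks alone do not tell you whether the differences are significant — that is exactly the question the post goes on to address.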

# Forecasting method vs forecasting model: what’s the difference?

If you work in the field of statistics, analytics, data science or forecasting, then you have probably already noticed that some of the instruments used in your field are called “methods”, while others are called “models”. The issue here is that the people using these terms usually know the distinction between them, […]

# Forecasting for the sake of forecasting

You have probably already noticed that we are in the middle of the COVID-19 pandemic these days (breaking news: the UK has just announced a lockdown due to the virus). The amount of news, memes and noise on the topic coming from around the world is astonishing! What is also astonishing is the number of posts on […]

# M-competitions, from M4 to M5: reservations and expectations

UPDATE: I have also written a short post on “The role of M competitions in forecasting”, which gives a historical perspective and a brief overview of the main findings of the previous competitions. Some of you might have noticed that the guidelines for the M5 competition have finally been released. Those of you who have previously […]

# Multiplicative State-Space Models for Intermittent Time Series, 2019

More than two years ago I published on this website a working paper entitled “Multiplicative State-Space Models for Intermittent Time Series”, written by John Boylan and me. This was an early version of the paper, which we submitted to the International Journal of Forecasting on 31st January 2017. More than two years later (on 11th July […]

# What about all those zeroes? Measuring performance of models on intermittent demand

In one of the previous posts, we discussed how to measure the accuracy of forecasting methods on continuous data. All these MAE, RMSE, MASE, RMSSE, rMAE, rRMSE and other measures can give you information about the mean or median performance of forecasting methods. We have also discussed how to measure the performance […]
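For reference, two of the scaled measures named above can be sketched as follows — a minimal Python illustration with made-up numbers, where the scaling uses the in-sample one-step naive forecast error, following the standard definitions of MASE and RMSSE:

```python
import numpy as np

def mase(actual, forecast, insample):
    # Mean Absolute Scaled Error: out-of-sample MAE divided by the
    # MAE of the one-step naive forecast on the in-sample data.
    scale = np.mean(np.abs(np.diff(insample)))
    return np.mean(np.abs(actual - forecast)) / scale

def rmsse(actual, forecast, insample):
    # Root Mean Squared Scaled Error: out-of-sample MSE divided by the
    # MSE of the one-step naive forecast in-sample, then square-rooted.
    scale = np.mean(np.diff(insample) ** 2)
    return np.sqrt(np.mean((actual - forecast) ** 2) / scale)

# Toy example (made-up numbers)
insample = np.array([10., 12., 11., 13., 12., 14.])
actual = np.array([13., 15., 14.])
forecast = np.array([12., 14., 15.])
print(mase(actual, forecast, insample))   # 0.625
print(rmsse(actual, forecast, insample))  # ~0.598
```

A value below 1 means the method beat the in-sample naive benchmark; because both measures are scale-free, they can be aggregated across series of different magnitudes.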

# How confident are you? Assessing the uncertainty in forecasting

Introduction Some people think that the main idea of forecasting is to predict the future as accurately as possible. I have bad news for them. The main idea of forecasting is to decrease the uncertainty. Think about it: any event that we want to predict has some systematic components \(\mu_t\), which could potentially be captured […]

# Are you sure you’re precise? Measuring accuracy of point forecasts

Two years ago I wrote a post, “Naughty APEs and the quest for the holy grail”, where I discussed why percentage-based error measures (such as MPE, MAPE, sMAPE) are not good for the task of forecasting performance evaluation. However, it seems to me that I did not explain the topic to the full […]
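A quick numerical illustration (a Python sketch with made-up numbers) of one of the issues with percentage-based measures: the same absolute error of 50 units produces different APE values depending on which side of the actual it falls, because the denominator changes with the actual value:

```python
import numpy as np

def mape(actual, forecast):
    # Mean Absolute Percentage Error, in percent.
    return np.mean(np.abs((actual - forecast) / actual)) * 100

# Over-forecast by 50 units when the actual is 100:
print(mape(np.array([100.]), np.array([150.])))  # 50.0
# Under-forecast by 50 units when the actual is 150:
print(mape(np.array([150.]), np.array([100.])))  # ~33.33
```

The asymmetry only worsens as actual values approach zero, where the percentage error blows up entirely — one of the reasons the post argues against this family of measures.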