Everything is random! Your data, your model, its parameter estimates, the forecasts it produces, and even the minimum of the loss function you used. There is no such thing as a “deterministic” forecast – everything is stochastic!
Whenever you work with data, you are working with a sample from a population. In some cases, this is more apparent than in others. In my statistics lectures, I typically give the following example. Suppose we are interested in the average height of students at the university. I could ask every student at the lecture to tell me their height, take the average, and get a number. Is this number random? Yes, indeed. Why? Because if a student who was late for the lecture came in, I would need to recalculate the average, and the number would change. The average that I get depends on who specifically is in the sample and how many observations I have. It will vary more in smaller samples and become more stable in larger ones. But this example gives you an idea about the inherent uncertainty of any estimate we deal with.
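The lecture example can be sketched in a few lines of Python. The heights below are simulated, so the specific numbers are made up for illustration; the point is only that adding one late student to the sample shifts the average:

```python
import random

random.seed(42)

# Hypothetical heights (in cm) of the students present at the lecture
heights = [random.gauss(172, 8) for _ in range(30)]
mean_before = sum(heights) / len(heights)

# A latecomer arrives: the sample changes, and so does the estimate
heights.append(random.gauss(172, 8))
mean_after = sum(heights) / len(heights)

print(f"Average over 30 students: {mean_before:.2f} cm")
print(f"Average over 31 students: {mean_after:.2f} cm")
```

The two averages will differ, and the gap shrinks as the sample grows, which is exactly the "more stable in larger samples" behaviour described above.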
In time series, the situation is somewhat similar: you are dealing with a sample of values that you have observed up until a specific moment. If, for example, you want to forecast daily admissions in the emergency department of a hospital and apply a model, its forecast will change when a new day comes and a new cohort of patients arrives. This is because your sample changes, and you receive new information about the demand.
So, the parameter estimates of a model you use will change when you get a new observation (e.g., a new record of product sales). Yes, if you estimate the model properly (e.g., using least squares), the parameter estimates won't change substantially, but they will change nonetheless. And this will affect point forecasts and any other statistics produced by your model. Your standard errors, p-values, conditional means, prediction intervals, error measures, model ranking – everything will change with a new observation. In fact, if you do model selection, the structure of the model might change as well. For example, in the case of ETS, you might switch from a model without a trend to one with a trend. So, every time you estimate anything on a sample of data, you should keep in mind that it is random and will change if your sample changes or gets updated.
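To see the parameter estimates move, here is a minimal sketch with simulated "sales" data (the series and its trend are invented for illustration). It fits a linear trend by closed-form least squares, then refits after one new observation arrives:

```python
import random

def ols_trend(y):
    """Closed-form least squares estimates for y_t = a + b*t."""
    n = len(y)
    t = list(range(1, n + 1))
    t_mean = sum(t) / n
    y_mean = sum(y) / n
    b = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y)) / \
        sum((ti - t_mean) ** 2 for ti in t)
    a = y_mean - b * t_mean
    return a, b

random.seed(1)
# Hypothetical daily sales: level 100, slope 0.5, plus noise
sales = [100 + 0.5 * t + random.gauss(0, 5) for t in range(1, 51)]

a1, b1 = ols_trend(sales)                       # estimates on 50 observations
sales.append(100 + 0.5 * 51 + random.gauss(0, 5))
a2, b2 = ols_trend(sales)                       # estimates on 51 observations

print(f"Before: intercept = {a1:.3f}, slope = {b1:.3f}")
print(f"After:  intercept = {a2:.3f}, slope = {b2:.3f}")
```

The estimates stay close to the true values, but they are never identical before and after the update, and every forecast built on them inherits that variability.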
Why is that important? Because we need to understand this inherent uncertainty, and ideally, we should somehow take it into account. In forecasting, this means you should not draw conclusions based on one application of a model to a dataset. At the very least, you should perform a rolling origin evaluation. As Leonidas Tsaprounis says, “if you don’t roll the origin, you roll the dice”.
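A rolling origin evaluation can be sketched as follows. This is a simplified illustration on simulated data, using a naive forecast (last observed value) as the method; in practice you would plug in your own model:

```python
import random

random.seed(0)
series = [50 + random.gauss(0, 3) for _ in range(40)]

h = 3            # forecast horizon
origins = 10     # number of forecast origins to roll over

errors = []
for o in range(origins):
    # Expanding window: each origin adds one more observation to the training set
    split = len(series) - h - (origins - 1 - o)
    train, test = series[:split], series[split:split + h]
    forecast = [train[-1]] * h   # naive forecast: repeat the last observed value
    errors.extend(abs(f - a) for f, a in zip(forecast, test))

mae = sum(errors) / len(errors)
print(f"Rolling-origin MAE over {origins} origins: {mae:.3f}")
```

Averaging the errors over many origins gives a far more reliable picture of a method's accuracy than a single train/test split, which is the whole point of rolling the origin rather than the dice.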
So, embrace the uncertainty and learn how to deal with it.
By the way, Kandrika Pritularga and I are holding a course on Demand Forecasting starting on 6th May. There is still time to sign up for it here.