14.1 Model specification: Omitted variables

We start with one of the most critical assumptions for a model: that it has not omitted important variables. If it has, then the point forecasts might not be as accurate as we would expect and, in some serious cases, might exhibit substantial bias.

This issue is difficult to diagnose because it is typically challenging to identify what is missing if we do not have it in front of us. The best thing one can do is a mental experiment: try to compile a list of all theoretically possible variables that would impact the variable of interest. If you manage to come up with such a list and realise that some of the variables are missing, the next step is to collect those variables or use their proxies. Proxies are variables that are expected to be correlated with the missing ones and can partially substitute for them. One way or another, we need to add the missing information to the model.
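To make this more concrete, here is a minimal sketch of how a proxy could be introduced into a model like the ones used in this chapter. The proxy fuelSold is entirely hypothetical and is simulated here only to keep the example self-contained; in practice it would be a real series believed to be correlated with the omitted driver (for example, traffic volume):

library(smooth)
# Purely illustrative proxy: in reality this would be collected data,
# not a simulated series
set.seed(41)
fuelSold <- Seatbelts[,"kms"]/1000 + rnorm(nrow(Seatbelts), 0, 1)
# Append the proxy to the data and include it in the model formula
SeatbeltsExtended <- cbind(Seatbelts, fuelSold=fuelSold)
adamSeatProxy <- adam(SeatbeltsExtended, "NNN",
                      formula=drivers~PetrolPrice+kms+fuelSold)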

In some cases, we might be able to diagnose this. For example, with our regression model estimated in the previous section, we have a set of variables that were not included in the model. A simple thing to do is to check whether the residuals of our model are correlated with any of the omitted variables. We can either produce scatterplots or calculate measures of association (see Section 2.2 and Chapter 6 of Svetunkov, 2022a) to see whether there are relations in the residuals. I will use the assoc() and spread() functions from the greybox package for this:

# Create a new matrix, removing the variables that are already
# in the model
SeatbeltsWithResiduals <-
  cbind(as.data.frame(residuals(adamSeat01)),
        Seatbelts[,-c(2,5,6)])
colnames(SeatbeltsWithResiduals)[1] <- "residuals"
# Spread plot
greybox::spread(SeatbeltsWithResiduals)
Figure 14.2: Spread plot of the model's residuals vs the variables not included in it.

The spread() function automatically detects the type of each variable and, based on that, produces a scatterplot, boxplot, or tableplot for each pair, making the final plot more readable. The plot above tells us that the residuals are correlated with DriversKilled, front, rear, and law, so some of these variables could be added to the model to improve it. VanKilled might have a weak relation with drivers, but judging by its description, it does not make sense in the model (it is a part of the drivers variable). I would also not add DriversKilled: based on our understanding of the problem, it does not drive the number of deaths and injuries, but is merely correlated with it for an obvious reason (DriversKilled is included in drivers). The variables front and rear should not be included either, because they do not explain the injuries and deaths of drivers; they are impacted by similar factors and can be considered output variables themselves. So only law can be safely added to the model, because it makes sense. We can also calculate measures of association between the variables:

greybox::assoc(SeatbeltsWithResiduals)
## Associations: 
## values:
##               residuals DriversKilled  front   rear VanKilled    law
## residuals        1.0000        0.7826 0.6121 0.4811    0.2751 0.1892
## DriversKilled    0.7826        1.0000 0.7068 0.3534    0.4070 0.3285
## front            0.6121        0.7068 1.0000 0.6202    0.4724 0.5624
## rear             0.4811        0.3534 0.6202 1.0000    0.1218 0.0291
## VanKilled        0.2751        0.4070 0.4724 0.1218    1.0000 0.3949
## law              0.1892        0.3285 0.5624 0.0291    0.3949 1.0000
## 
## p-values:
##               residuals DriversKilled front   rear VanKilled    law
## residuals        0.0000             0     0 0.0000    0.0001 0.0086
## DriversKilled    0.0000             0     0 0.0000    0.0000 0.0000
## front            0.0000             0     0 0.0000    0.0000 0.0000
## rear             0.0000             0     0 0.0000    0.0925 0.6890
## VanKilled        0.0001             0     0 0.0925    0.0000 0.0000
## law              0.0086             0     0 0.6890    0.0000 0.0000
## 
## types:
##               residuals DriversKilled front     rear      VanKilled law   
## residuals     "none"    "pearson"     "pearson" "pearson" "pearson" "mcor"
## DriversKilled "pearson" "none"        "pearson" "pearson" "pearson" "mcor"
## front         "pearson" "pearson"     "none"    "pearson" "pearson" "mcor"
## rear          "pearson" "pearson"     "pearson" "none"    "pearson" "mcor"
## VanKilled     "pearson" "pearson"     "pearson" "pearson" "none"    "mcor"
## law           "mcor"    "mcor"        "mcor"    "mcor"    "mcor"    "none"
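With a larger set of candidate variables, it might be more convenient to extract the p-values programmatically instead of reading them off the table. Here is a sketch of this, assuming that the object returned by assoc() stores them in a p.value matrix:

# Which variables are associated with the residuals at the 1% level?
# (the first column is the residuals themselves, so it is dropped)
assocValues <- greybox::assoc(SeatbeltsWithResiduals)
assocValues$p.value["residuals",-1] < 0.01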

Technically speaking, the output of this function tells us that all variables are correlated with the residuals and could be considered in the model. This is because the p-values are lower than my favourite significance level of 1%, so we can reject the null hypothesis for each of the tests (which is that the respective parameter is equal to zero in the population). Still, I would prefer not to add the DriversKilled, VanKilled, front, and rear variables to the model for the reasons explained earlier. We can construct a new model in the following way:

adamSeat02 <- adam(Seatbelts, "NNN",
                   formula=drivers~PetrolPrice+kms+law)
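Beyond the visual comparison, we could also check whether adding law improves the model in terms of an information criterion. A minimal sketch, assuming that adamSeat01 from the previous section is still in the workspace:

# Compare the initial regression and the model with the law dummy via AICc
# (lower is better); both model the same response via likelihood
c(adamSeat01=AICc(adamSeat01), adamSeat02=AICc(adamSeat02))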

The model now fits the data differently (Figure 14.3):

plot(adamSeat02, 7, main="")
Figure 14.3: The data and the fitted values for the second model.
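Before moving on, we could also glance at the estimated parameters to see how the newly added variable contributes to the model; a minimal sketch using the standard summary() method:

# Estimated parameters of the model with the law dummy
summary(adamSeat02)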

How can we know that we have not omitted any important variables in our new model? Unfortunately, there is no reliable way of knowing that. In general, we should use judgment to decide whether anything else is needed. But given that we deal with time series, we can analyse the residuals over time and see whether there is any structure left (Figure 14.4):

plot(adamSeat02, 8, main="")
Figure 14.4: Standardised residuals vs time plot.

The plot in Figure 14.4 shows that the model has not captured the seasonality correctly and that there is still some structure left in the residuals. In order to address this, we will add an ETS(A,N,A) element to the model, estimating an ETSX model instead of a pure regression:

adamSeat03 <- adam(Seatbelts, "ANA",
                   formula=drivers~PetrolPrice+kms+law)
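As an additional check (using the acf() function from base R rather than the plots shown in this chapter), we could compare the autocorrelation functions of the residuals of the two models: if the ETS(A,N,A) part has captured the seasonality, the autocorrelation around lag 12 that we would expect for adamSeat02 should largely disappear for adamSeat03. A minimal sketch:

# Residual ACFs without and with the seasonal ETS component;
# leftover monthly seasonality shows up as spikes around lag 12
par(mfcol=c(1,2), mar=c(4,4,2,1))
acf(residuals(adamSeat02), main="adamSeat02")
acf(residuals(adamSeat03), main="adamSeat03")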

We can produce similar plots for model diagnostics (Figure 14.5):

par(mfcol=c(1,2), mar=c(4,4,2,1))
plot(adamSeat03,7:8)
Figure 14.5: Diagnostic plots for Model 3.

In Figure 14.5, we do not see any apparent missing structure in the data or any obvious omitted variables. We can now move to the next steps of diagnostics.

References

• Svetunkov, I., 2022a. Statistics for business analytics. https://openforecast.org/sba/ (version: 31.03.2022)