
2.3 Theory of distributions

There are several probability distributions that will be helpful in the further chapters of this textbook. Here, I want to briefly discuss those that will be used. While this might not seem important at this point, we will refer back to this section of the textbook in the next chapters.

2.3.1 Normal distribution

Every statistical textbook has the Normal distribution. It is the famous bell-shaped distribution that every statistician likes because it is easy to work with and because, under certain conditions, means of many other well-behaved distributions converge to it asymptotically (the so-called "Central Limit Theorem"). Here is the probability density function (PDF) of this distribution: \[\begin{equation} f(y_t) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp \left( -\frac{\left(y_t - \mu_t \right)^2}{2 \sigma^2} \right) , \tag{2.3} \end{equation}\] where \(y_t\) is the value of the response variable, \(\mu_t\) is the mean at observation \(t\) and \(\sigma^2\) is the variance of the error term. The maximum likelihood estimate of \(\sigma^2\) is: \[\begin{equation} \hat{\sigma}^2 = \frac{1}{T} \sum_{t=1}^T \left(y_t - \mu_t \right)^2 , \tag{2.4} \end{equation}\]

where \(T\) is the sample size. The formula (2.4) coincides with the Mean Squared Error (MSE), discussed in Section 1.

And here is how this distribution looks:
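As an illustration, here is a minimal sketch of how such a figure could be reproduced in R using the dnorm() function from the stats package (the specific parameter values are chosen arbitrarily for the example):

```r
# Plot the Normal PDF (2.3) for two arbitrary values of the variance
y <- seq(-5, 5, length.out = 501)
plot(y, dnorm(y, mean = 0, sd = 1), type = "l",
     xlab = "y", ylab = "Density", main = "Normal distribution")
lines(y, dnorm(y, mean = 0, sd = 2), lty = 2)
legend("topright", legend = c("sigma^2 = 1", "sigma^2 = 4"), lty = c(1, 2))
```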

What we typically assume in the basic time series models is that a variable is random and follows the Normal distribution, meaning that there is a central tendency (in our case the mean \(\mu\)), around which the concentration of values is the highest, and that other values are possible, but their probability of appearance reduces with the distance from the centre.

The Normal distribution has skewness of zero and kurtosis of 3 (and excess kurtosis, being kurtosis minus three, of 0).

Additionally, if the Normal distribution is used for the maximum likelihood estimation of a model, it gives the same parameter estimates as the minimisation of MSE would give.

The log-likelihood based on the Normal distribution is derived by taking the sum of logarithms of the PDF of the Normal distribution (2.3): \[\begin{equation} \ell(\mathbf{Y}| \theta, \sigma^2) = - \frac{T}{2} \log \left(2 \pi \sigma^2\right) - \sum_{t=1}^T \frac{\left(y_t - \mu_t \right)^2}{2 \sigma^2} , \tag{2.5} \end{equation}\] where \(\theta\) is the vector of all the estimated parameters in the model, and \(\log\) is the natural logarithm. If one takes the derivative of (2.5) with respect to \(\sigma^2\) and equates it to zero, then the formula (2.4) is obtained. Another useful thing to note is the concentrated log-likelihood, which is obtained by inserting the estimated variance (2.4) in (2.5): \[\begin{equation} \ell(\mathbf{Y}| \theta, \hat{\sigma}^2) = - \frac{T}{2} \log \left( 2 \pi e \hat{\sigma}^2 \right) , \tag{2.6} \end{equation}\]

where \(e\) is Euler's number. The concentrated log-likelihood is handy when estimating the model and calculating information criteria. Sometimes statisticians drop the \(2 \pi e\) part from (2.6), because it does not affect any inferences as long as one works only with the Normal distribution (for example, this is what is done in the ets() function from the forecast package in R). However, this is not advised, because it makes the comparison with other distributions impossible.

The Normal distribution is implemented in the stats package in R via the dnorm(), qnorm(), pnorm() and rnorm() functions.
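To illustrate the connection between (2.4), (2.5) and (2.6), here is a small sketch (the data and the fixed location are made up for the example):

```r
# Hypothetical example: a sample with a known (fixed) location mu
set.seed(41)
y <- rnorm(100, mean = 10, sd = 2)
mu <- 10

# MLE of the variance, formula (2.4)
sigma2 <- mean((y - mu)^2)

# Log-likelihood (2.5), calculated via dnorm(log = TRUE)
logLik1 <- sum(dnorm(y, mean = mu, sd = sqrt(sigma2), log = TRUE))

# Concentrated log-likelihood (2.6)
logLik2 <- -length(y) / 2 * log(2 * pi * exp(1) * sigma2)

c(logLik1, logLik2)   # the two values coincide
```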

2.3.2 Laplace distribution

A more exotic distribution is the Laplace distribution, which has some similarities with the Normal one, but has higher excess kurtosis. It has the following PDF:

\[\begin{equation} f(y_t) = \frac{1}{2 s} \exp \left( -\frac{\left| y_t - \mu_t \right|}{s} \right) , \tag{2.7} \end{equation}\] where \(s\) is the scale parameter, which, when estimated using likelihood, is equal to the Mean Absolute Error (MAE): \[\begin{equation} \hat{s} = \frac{1}{T} \sum_{t=1}^T \left| y_t - \mu_t \right| . \tag{2.8} \end{equation}\]

It has the following shape:

Similar to the Normal distribution, the skewness of Laplace is equal to zero. However, it has fatter tails - its kurtosis is equal to 6 instead of 3.

The variance of the random variable following Laplace distribution is equal to: \[\begin{equation} \sigma^2 = 2 s^2. \tag{2.9} \end{equation}\]

The dlaplace, qlaplace, plaplace and rlaplace functions from the greybox package implement the density, quantile, cumulative distribution and random number generation functions for the Laplace distribution in R.
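As a sanity check of (2.8) and (2.9), here is a hedged sketch that simulates Laplace random variables via the inverse CDF (so that it does not depend on a specific package) and compares the sample quantities with the theoretical ones (the values of mu and s are arbitrary):

```r
# Simulate from Laplace(mu, s) via the inverse CDF method
set.seed(41)
mu <- 0; s <- 2
u <- runif(1e5) - 0.5
y <- mu - s * sign(u) * log(1 - 2 * abs(u))

# MLE of the scale (2.8) is the Mean Absolute Error; should be close to s = 2
mean(abs(y - mu))
# Variance (2.9) is 2 s^2; should be close to 8
var(y)
# Kurtosis; should be close to 6
mean((y - mean(y))^4) / var(y)^2
```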

2.3.3 S distribution

This is something relatively new, although not ground breaking. I derived the S distribution a few years ago, but have never written a paper on it. It has the following density function: \[\begin{equation} f(y_t) = \frac{1}{4 s^2} \exp \left( -\frac{\sqrt{|y_t - \mu_t|}}{s} \right) , \tag{2.10} \end{equation}\] where \(s\) is the scale parameter. If estimated via maximum likelihood, the scale parameter is equal to: \[\begin{equation} \hat{s} = \frac{1}{2T} \sum_{t=1}^T \sqrt{\left| y_t - \mu_t \right|} , \tag{2.11} \end{equation}\]

which corresponds to the minimisation of a half of the "Mean Root Absolute Error" or "Half Absolute Moment" (HAM). This is a more exotic type of scale, but the main benefit of this distribution is its severe heavy tails - it has a kurtosis of 25.2. It might be useful in cases of randomly occurring incidents and extreme values (Black Swans?).

The variance of the random variable following S distribution is equal to: \[\begin{equation} \sigma^2 = 120 s^4. \tag{2.12} \end{equation}\]

The ds, qs, ps and rs functions from the greybox package implement the density, quantile, cumulative distribution and random number generation functions for the S distribution.
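The claims about the variance and the kurtosis of the S distribution can be verified by numerically integrating the density (2.10). A self-contained sketch (not relying on the greybox functions), assuming \(\mu_t = 0\) and an arbitrary scale:

```r
# Density (2.10) of the S distribution with mu = 0
dsManual <- function(y, s) 1 / (4 * s^2) * exp(-sqrt(abs(y)) / s)

s <- 1.5
# The density integrates to one (integrating over the positive half and doubling)
2 * integrate(dsManual, 0, Inf, s = s)$value
# Variance (2.12): should be close to 120 * s^4
m2 <- 2 * integrate(function(y) y^2 * dsManual(y, s), 0, Inf)$value
c(m2, 120 * s^4)
# Kurtosis: fourth moment over squared variance, should be close to 25.2
m4 <- 2 * integrate(function(y) y^4 * dsManual(y, s), 0, Inf)$value
m4 / m2^2
```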

2.3.4 Generalised Normal distribution

Generalised Normal (\(\mathcal{GN}\)) distribution is, as the name suggests, a generalisation of the Normal distribution, which also includes Laplace and S as special cases (Nadarajah 2005). There are two versions of this distribution: one with a shape parameter and another with a skewness parameter. We are mainly interested in the first one, which has the following PDF: \[\begin{equation} f(y_t) = \frac{\beta}{2 s \Gamma(\beta^{-1})} \exp \left( -\left(\frac{|y_t - \mu_t|}{s}\right)^{\beta} \right), \tag{2.13} \end{equation}\] where \(\beta\) is the shape parameter, and \(s\) is the scale of the distribution, which, when estimated via MLE, is equal to: \[\begin{equation} \hat{s} = \sqrt[^{\beta}]{\frac{\beta}{T} \sum_{t=1}^T\left| y_t - \mu_t \right|^{\beta}}, \tag{2.14} \end{equation}\]

which has MSE, MAE and HAM as special cases, when \(\beta\) is equal to 2, 1 and 0.5 respectively. The parameter \(\beta\) influences the kurtosis directly; it can be calculated for each special case as \(\frac{\Gamma(5/\beta)\Gamma(1/\beta)}{\Gamma(3/\beta)^2}\). The higher \(\beta\) is, the lower the kurtosis is.
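For example, evaluating this kurtosis formula for the three special cases mentioned above recovers the values discussed earlier in this section:

```r
# Kurtosis of the Generalised Normal distribution as a function of beta
gnormKurtosis <- function(beta) gamma(5 / beta) * gamma(1 / beta) / gamma(3 / beta)^2
gnormKurtosis(c(0.5, 1, 2))
# 25.2 (S), 6 (Laplace), 3 (Normal)
```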

The advantage of the \(\mathcal{GN}\) distribution is its flexibility. In theory, it is possible to model extremely rare events with this distribution, if the shape parameter \(\beta\) is fractional and close to zero. Alternatively, when \(\beta \rightarrow \infty\), the distribution converges point-wise to the uniform distribution on \((\mu_t - s, \mu_t + s)\).

Note that the estimation of \(\beta\) is a difficult task, especially when it is less than 2 - the MLE of it loses the properties of consistency and asymptotic normality.

Depending on the value of \(\beta\), the distribution can have different shapes:
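A hedged sketch of how such a figure could be produced, implementing the density (2.13) directly so that it does not depend on a specific package:

```r
# Density (2.13) of the Generalised Normal distribution
dgnormManual <- function(y, mu = 0, s = 1, beta = 2) {
  beta / (2 * s * gamma(1 / beta)) * exp(-(abs(y - mu) / s)^beta)
}

y <- seq(-4, 4, length.out = 501)
plot(y, dgnormManual(y, beta = 2), type = "l",
     xlab = "y", ylab = "Density", main = "Generalised Normal distribution")
lines(y, dgnormManual(y, beta = 1), lty = 2)
lines(y, dgnormManual(y, beta = 10), lty = 3)
legend("topright", legend = c("beta = 2", "beta = 1", "beta = 10"), lty = 1:3)
```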

Typically, estimating \(\beta\) consistently is a tricky thing to do, especially if it is less than one. Still, it is possible to do that by maximising the likelihood based on (2.13).

The variance of the random variable following the Generalised Normal distribution is equal to: \[\begin{equation} \sigma^2 = s^2\frac{\Gamma(3/\beta)}{\Gamma(1/\beta)}. \tag{2.15} \end{equation}\]

The working functions for the Generalised Normal distribution are implemented in the gnorm package for R, but the smooth and greybox packages use their own, faster implementations.

2.3.5 Asymmetric Laplace distribution

Asymmetric Laplace distribution (\(\mathcal{AL}\)) can be considered as two Laplace distributions with different scale parameters \(s\) to the left and to the right of the location \(\mu_t\). There are several ways to write the probability density function; the neater one relies on the asymmetry parameter \(\alpha\) (Yu and Zhang 2005): \[\begin{equation} f(y_t) = \frac{\alpha (1- \alpha)}{s} \exp \left( -\frac{y_t - \mu_t}{s} (\alpha - I(y_t \leq \mu_t)) \right) , \tag{2.16} \end{equation}\] where \(s\) is the scale parameter, \(\alpha\) is the skewness parameter and \(I(y_t \leq \mu_t)\) is the indicator function, which is equal to one when the condition is satisfied and to zero otherwise. The scale parameter \(s\), estimated using likelihood, is equal to the quantile loss: \[\begin{equation} \hat{s} = \frac{1}{T} \sum_{t=1}^T \left(y_t - \mu_t \right)(\alpha - I(y_t \leq \mu_t)) . \tag{2.17} \end{equation}\]

Thus, maximising the likelihood based on (2.16) is equivalent to estimating the model via the minimisation of the quantile loss (2.17) for the quantile \(\alpha\), making this equivalent to the quantile regression approach. So quantile regression models indirectly assume that the error term in the model is \(\epsilon_t \sim \mathcal{AL}(0, s, \alpha)\) (Geraci and Bottai 2007).
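To illustrate this connection, here is a small sketch: minimising the loss (2.17) with respect to a scalar location returns (approximately) the sample \(\alpha\)-quantile of the data (the data is made up for the example):

```r
# Made-up data and the quantile of interest
set.seed(41)
y <- rnorm(1000, mean = 10, sd = 2)
alpha <- 0.75

# The quantile (pinball) loss from (2.17) for a scalar location mu
quantileLoss <- function(mu, y, alpha) {
  mean((y - mu) * (alpha - (y <= mu)))
}

# Minimising the loss in mu...
optimise(quantileLoss, interval = range(y), y = y, alpha = alpha)$minimum
# ... gives a value close to the sample quantile
quantile(y, alpha)
```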

Depending on the value of \(\alpha\), the distribution can have different shapes:
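A sketch of how such a figure could be reproduced, implementing the density (2.16) directly:

```r
# Density (2.16) of the Asymmetric Laplace distribution
dalaplaceManual <- function(y, mu = 0, s = 1, alpha = 0.5) {
  alpha * (1 - alpha) / s * exp(-(y - mu) / s * (alpha - (y <= mu)))
}

y <- seq(-5, 5, length.out = 501)
plot(y, dalaplaceManual(y, alpha = 0.5), type = "l",
     xlab = "y", ylab = "Density", main = "Asymmetric Laplace distribution")
lines(y, dalaplaceManual(y, alpha = 0.25), lty = 2)
lines(y, dalaplaceManual(y, alpha = 0.75), lty = 3)
legend("topright", legend = c("alpha = 0.5", "alpha = 0.25", "alpha = 0.75"), lty = 1:3)
```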

Similarly to the \(\mathcal{GN}\) distribution, the parameter \(\alpha\) can be estimated during the maximisation of the likelihood, although it makes more sense to set it to a specific value in order to obtain the desired quantile of the distribution.

The variance of the random variable following Asymmetric Laplace distribution is equal to: \[\begin{equation} \sigma^2 = s^2\frac{(1-\alpha)^2+\alpha^2}{\alpha^2(1-\alpha)^2}. \tag{2.18} \end{equation}\]

Functions dalaplace, qalaplace, palaplace and ralaplace from the greybox package implement the Asymmetric Laplace distribution in R.

2.3.6 Log Normal, Log Laplace, Log S and Log GN distributions

In addition, it is possible to derive the log-versions of the Normal, Laplace, \(\mathcal{S}\) and \(\mathcal{GN}\) distributions. The main difference between the original and the log-version of the density function for each of these distributions can be summarised as follows: \[\begin{equation} f_{\log}(y_t) = \frac{1}{y_t} f(\log y_t). \tag{2.19} \end{equation}\]

They are defined for positive values only and have a different right tail, depending on the location, scale and shape parameters. \(\exp(\mu_t)\) in this case represents the geometric mean (and the median) of the distribution rather than the arithmetic one. The conditional expectation in these distributions is typically higher than \(\exp(\mu_t)\) and depends on the value of the scale parameter. It is known for log\(\mathcal{N}\), but does not have a simple form for the other distributions.
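For the Log-Normal distribution, for example, the expectation is \(\exp(\mu_t + \sigma^2/2)\), which can be checked with a small simulation (the parameter values are arbitrary):

```r
# Geometric mean (median) vs expectation of the Log-Normal distribution
set.seed(41)
mu <- 1; sigma <- 0.5
y <- rlnorm(1e6, meanlog = mu, sdlog = sigma)

c(exp(mu), median(y))              # geometric mean / median
c(exp(mu + sigma^2 / 2), mean(y))  # expectation
```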

2.3.7 Inverse Gaussian distribution

An exotic distribution that will be useful for what comes in this textbook is the Inverse Gaussian (\(\mathcal{IG}\)), which is parameterised using mean value \(\mu_t\) and either the dispersion parameter \(s\) or the scale \(\lambda\) and is defined for positive values only. This distribution is useful because it is scalable and has some similarities with the Normal one. In our case, the important property is the following: \[\begin{equation} \text{if } (1+\epsilon_t) \sim \mathcal{IG}(1, s) \text{, then } y_t = \mu_t \times (1+\epsilon_t) \sim \mathcal{IG}\left(\mu_t, \frac{s}{\mu_t} \right), \tag{2.20} \end{equation}\]

implying that the dispersion of the model changes together with the expectation. The PDF of the distribution of \(1+\epsilon_t\) is:

\[\begin{equation} f(\epsilon_t) = \frac{1}{\sqrt{2 \pi s (1+\epsilon_t)^3}} \exp \left( -\frac{\epsilon_t^2}{2 s (1+\epsilon_t)} \right) , \tag{2.21} \end{equation}\] where the dispersion parameter can be estimated via maximising the likelihood and is calculated using: \[\begin{equation} \hat{s} = \frac{1}{T} \sum_{t=1}^T \frac{\left(\epsilon_t \right)^2}{1+\epsilon_t} . \tag{2.22} \end{equation}\]

This distribution becomes very useful for multiplicative models, where it is expected that the data can only be positive.

Here is how the PDF of \(\mathcal{IG}(1,s)\) looks for different values of the dispersion \(s\):
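A sketch of how such a figure could be produced using the dinvgauss() function from the statmod package (the argument names mean and dispersion are assumed here and may differ between package versions):

```r
library(statmod)

# Density of IG(1, s) for several values of the dispersion parameter
y <- seq(0.01, 3, length.out = 501)
plot(y, dinvgauss(y, mean = 1, dispersion = 0.1), type = "l",
     xlab = "y", ylab = "Density", main = "Inverse Gaussian distribution")
lines(y, dinvgauss(y, mean = 1, dispersion = 0.5), lty = 2)
lines(y, dinvgauss(y, mean = 1, dispersion = 1), lty = 3)
legend("topright", legend = c("s = 0.1", "s = 0.5", "s = 1"), lty = 1:3)
```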

The statmod package implements the density, quantile, cumulative distribution and random number generation functions for the \(\mathcal{IG}\) in R.

References

Geraci, Marco, and Matteo Bottai. 2007. “Quantile regression for longitudinal data using the asymmetric Laplace distribution.” Biostatistics 8 (1): 140–54. doi:10.1093/biostatistics/kxj039.

Nadarajah, Saralees. 2005. “A generalized normal distribution.” Journal of Applied Statistics 32 (7): 685–94. doi:10.1080/02664760500079464.

Yu, Keming, and Jin Zhang. 2005. “A three-parameter asymmetric Laplace distribution and its extension.” Communications in Statistics - Theory and Methods 34 (9-10): 1867–79. doi:10.1080/03610920500199018.