<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Archives adam() - Open Forecasting</title>
	<atom:link href="https://openforecast.org/category/r-en/smooth/adam/feed/" rel="self" type="application/rss+xml" />
	<link>https://openforecast.org/category/r-en/smooth/adam/</link>
	<description>How to look into the future</description>
	<lastBuildDate>Wed, 30 Oct 2024 10:45:59 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2015/08/cropped-usd-05-32x32.png&amp;nocache=1</url>
	<title>Archives adam() - Open Forecasting</title>
	<link>https://openforecast.org/category/r-en/smooth/adam/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Methods for the smooth functions in R</title>
		<link>https://openforecast.org/2024/10/10/methods-for-the-smooth-functions-in-r/</link>
					<comments>https://openforecast.org/2024/10/10/methods-for-the-smooth-functions-in-r/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Thu, 10 Oct 2024 13:46:22 +0000</pubDate>
				<category><![CDATA[adam()]]></category>
		<category><![CDATA[Applied forecasting]]></category>
		<category><![CDATA[Package smooth for R]]></category>
		<category><![CDATA[R]]></category>
		<category><![CDATA[ADAM]]></category>
		<category><![CDATA[smooth]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=3685</guid>

					<description><![CDATA[<p>I was recently asked by a colleague how to extract the variance from a model estimated using the adam() function from the smooth package in R. The problem was that they started reading the source code of forecast.adam() and got lost between the lines (this happens to me as well sometimes). [&#8230;]</p>
<p>Message <a href="https://openforecast.org/2024/10/10/methods-for-the-smooth-functions-in-r/">Methods for the smooth functions in R</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>I was recently asked by a colleague how to extract the variance from a model estimated using the <code>adam()</code> function from the <code>smooth</code> package in R. The problem was that they started reading the source code of <code>forecast.adam()</code> and got lost between the lines (this happens to me as well sometimes). Well, there is an easier solution, and in this post I want to summarise several methods that I have implemented in the <code>smooth</code> package for its forecasting functions. I will focus on the <code>adam()</code> function, although all of them work for <code>es()</code> and <code>msarima()</code> as well, and some work for other functions too (at least for now, as of smooth v4.1.0). Some of them are also mentioned in the <a href="https://openforecast.org/adam/cheatSheet.html">Cheat sheet for adam() function</a> of my monograph (available <a href="https://openforecast.org/adam/">online</a>).</p>
<h3>The main methods</h3>
<p>The <code>adam</code> class supports several methods that are also used in other packages in R (for example, for the <code>lm</code> class). Here they are:</p>
<ul>
<li><code>forecast()</code> and <code>predict()</code> &#8211; produce forecasts from the model. The former is preferred; the latter has somewhat limited functionality. See the documentation for the types of forecasts that can be generated. This is also discussed in <a href="https://openforecast.org/adam/ADAMForecasting.html">Chapter 18</a> of my monograph.</li>
<li><code>fitted()</code> &#8211; extracts the fitted values from the estimated object;</li>
<li><code>residuals()</code> &#8211; extracts the residuals of the model. These are values of \(e_t\), which differ depending on the error type of the model (see <a href="https://openforecast.org/adam/non-mle-based-loss-functions.html">discussion here</a>);</li>
<li><code>rstandard()</code> &#8211; returns standardised residuals, i.e. residuals divided by their standard deviation;</li>
<li><code>rstudent()</code> &#8211; studentised residuals, i.e. residuals divided by their standard deviation, where the impact of each specific observation on that standard deviation is dropped. This helps in the case of influential outliers.</li>
</ul>
<p>An additional method was introduced in the <code>greybox</code> package, called <code>actuals()</code>, which allows extracting the actual values of the response variable. Another useful method is <code>accuracy()</code>, which returns a set of error measures using the <code>measures()</code> function of the <code>greybox</code> package for the provided model and the holdout values.</p>
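<p>As a quick sketch of how these methods fit together (using the <code>BJsales</code> data and my own variable name):</p>
<pre class="decode"># Estimate an ADAM, keeping the last 10 observations as a holdout
ourModel <- adam(BJsales, h=10, holdout=TRUE)
forecast(ourModel, h=10)    # produce forecasts for the holdout
head(fitted(ourModel))      # fitted values
head(residuals(ourModel))   # residuals e_t
head(rstandard(ourModel))   # standardised residuals
accuracy(ourModel)          # error measures based on the holdout</pre>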
<p>All the methods above can be used for model diagnostics and for forecasting (the main purpose of the package). Furthermore, the <code>adam</code> class supports several functions for working with the coefficients of models, similar to how it is done in the case of <code>lm</code>:</p>
<ul>
<li><code>coef()</code> or <code>coefficients()</code> &#8211; extracts all the estimated coefficients of the model;</li>
<li><code>vcov()</code> &#8211; extracts the covariance matrix of parameters. This can be done either using Fisher Information or via a bootstrap (<code>bootstrap=TRUE</code>).  In the latter case, the <code>coefbootstrap()</code> method is used to create bootstrapped time series, reapply the model and extract estimates of parameters;</li>
<li><code>confint()</code> &#8211; returns the confidence intervals for the estimated parameters. Relies on <code>vcov()</code> and the assumption of normality (<a href="https://openforecast.org/adam/ADAMUncertaintyConfidenceInterval.html">CLT</a>);</li>
<li><code>summary()</code> &#8211; returns the output of the model, containing the table with estimated parameters, their standard errors and confidence intervals.</li>
</ul>
<p>Here is an example of an output from an ADAM ETS estimated using <code>adam()</code>:</p>
<pre class="decode">adamETSBJ <- adam(BJsales, h=10, holdout=TRUE)
summary(adamETSBJ, level=0.99)</pre>
<p>The first line above estimates and selects the most appropriate ETS for the data, while the second one will create a summary with 99% confidence intervals, which should look like this:</p>
<pre>Model estimated using adam() function: ETS(AAdN)
Response variable: BJsales
Distribution used in the estimation: Normal
Loss function type: likelihood; Loss function value: 241.1634
Coefficients:
      Estimate Std. Error Lower 0.5% Upper 99.5%  
alpha   0.8251     0.1975     0.3089      1.0000 *
beta    0.4780     0.3979     0.0000      0.8251  
phi     0.7823     0.2388     0.1584      1.0000 *
level 199.9314     3.6753   190.3279    209.5236 *
trend   0.2178     2.8416    -7.2073      7.6340  

Error standard deviation: 1.3848
Sample size: 140
Number of estimated parameters: 6
Number of degrees of freedom: 134
Information criteria:
     AIC     AICc      BIC     BICc 
494.3268 494.9584 511.9767 513.5372</pre>
<p>How to read this output is discussed in <a href="https://openforecast.org/adam/ADAMUncertaintyConfidenceInterval.html">Section 16.3</a>.</p>
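<p>As a brief sketch, the parameter-related methods listed above can be applied to the same <code>adamETSBJ</code> model like this:</p>
<pre class="decode">coef(adamETSBJ)                 # all estimated parameters
vcov(adamETSBJ)                 # covariance matrix of the parameters
confint(adamETSBJ, level=0.99)  # 99% confidence intervals</pre>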
<h3>Multistep forecast errors</h3>
<p>There are two methods that can be used as additional analytical tools for the estimated model. Their generics are implemented in the <code>smooth</code> package itself:</p>
<ol>
<li><code>rmultistep()</code> - extracts the multiple-steps-ahead in-sample forecast errors for the specified horizon. This means that the model produces a forecast of length <code>h</code> for every observation, from the very first one to the last, and then the forecast errors are calculated based on it. These errors are used in the case of semiparametric and nonparametric prediction intervals, but can also be used for diagnostics (see, for example, <a href="https://openforecast.org/adam/diagnosticsResidualsIIDExpectation.html#diagnosticsResidualsIIDExpectationMultiple">Subsection 14.7.3</a>);</li>
<li><code>multicov()</code> - returns the covariance matrix of the h-steps-ahead forecast errors. The diagonal of this matrix corresponds to the h-steps-ahead variances conditional on the in-sample information.</li>
</ol>
<p>For the same model that we used in the previous section, we can extract and plot the multistep errors:</p>
<pre class="decode">rmultistep(adamETSBJ, h=10) |> boxplot()
abline(h=0, col="red2", lwd=2)</pre>
<p>which will result in:<br />
<div id="attachment_3689" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamBJETSMulti.png&amp;nocache=1"><img fetchpriority="high" decoding="async" aria-describedby="caption-attachment-3689" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamBJETSMulti-300x210.png&amp;nocache=1" alt="Distributions of the multistep forecast errors" width="300" height="210" class="size-medium wp-image-3689" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamBJETSMulti-300x210.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamBJETSMulti-768x538.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamBJETSMulti.png&amp;nocache=1 1000w" sizes="(max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-3689" class="wp-caption-text">Distributions of the multistep forecast errors</p></div>
<p>The image above shows that the model tends to undershoot the actual values in-sample (the boxplots tend to lie slightly above the zero line). This might cause a bias in the final forecasts.</p>
<p>The covariance matrix of the multistep forecast error looks like this in our case:</p>
<pre class="decode">multicov(adamETSBJ, h=10) |> round(3)</pre>
<pre>       h1    h2     h3     h4     h5     h6     h7     h8     h9    h10
h1  1.918 2.299  2.860  3.299  3.643  3.911  4.121  4.286  4.414  4.515
h2  2.299 4.675  5.729  6.817  7.667  8.333  8.853  9.260  9.579  9.828
h3  2.860 5.729  8.942 10.651 12.250 13.501 14.480 15.246 15.845 16.314
h4  3.299 6.817 10.651 14.618 16.918 18.979 20.592 21.854 22.841 23.613
h5  3.643 7.667 12.250 16.918 21.538 24.348 26.808 28.733 30.239 31.417
h6  3.911 8.333 13.501 18.979 24.348 29.515 32.753 35.549 37.737 39.448
h7  4.121 8.853 14.480 20.592 26.808 32.753 38.372 41.964 45.036 47.440
h8  4.286 9.260 15.246 21.854 28.733 35.549 41.964 47.950 51.830 55.127
h9  4.414 9.579 15.845 22.841 30.239 37.737 45.036 51.830 58.112 62.223
h10 4.515 9.828 16.314 23.613 31.417 39.448 47.440 55.127 62.223 68.742</pre>
<p>This is not useful on its own, but can be used for some further derivations.</p>
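<p>As a small illustration of such a derivation, taking the square roots of the diagonal elements gives the standard deviations for each forecast horizon, conditional on the in-sample information:</p>
<pre class="decode"># Conditional standard deviations per forecast horizon
multicov(adamETSBJ, h=10) |> diag() |> sqrt()</pre>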
<p>Note that the returned values by both <code>rmultistep()</code> and <code>multicov()</code> depend on the model's error type (see <a href="https://openforecast.org/adam/non-mle-based-loss-functions.html">Section 11.2</a> for clarification).</p>
<h3>Model diagnostics</h3>
<p>The conventional <code>plot()</code> method applied to a model estimated using <code>adam()</code> can produce a variety of images for visual model diagnostics. This is controlled by the <code>which</code> parameter (16 options overall). The documentation of <code>plot.smooth()</code> contains the exhaustive list of options, and Chapter 14 of the monograph shows how they can be used for model diagnostics. Here I list only the main ones:</p>
<ul>
<li><code>plot(ourModel, which=1)</code> - actuals vs fitted values. Can be used for general diagnostics of the model. Ideally, all points should lie around the diagonal line;</li>
<li><code>plot(ourModel, which=2)</code> - standardised residuals vs fitted values. Useful for detecting potential outliers. Also accepts the <code>level</code> parameter, which regulates the width of the confidence bounds.</li>
<li><code>plot(ourModel, which=4)</code> - absolute residuals vs fitted, which can be used for detecting heteroscedasticity of the residuals;</li>
<li><code>plot(ourModel, which=6)</code> - QQ plot for the analysis of the distribution of the residuals. The specific figure changes depending on the distribution assumed in the model (see <a href="https://openforecast.org/adam/ADAMETSEstimationLikelihood.html">Section 11.1</a> for the supported ones);</li>
<li><code>plot(ourModel, which=7)</code> - actuals, fitted values and point forecasts over time. Useful for understanding how the model fits the data and what point forecast it produces;</li>
<li><code>plot(ourModel, which=c(10,11))</code> - ACF and PACF of the residuals of the model to detect potentially missing AR/MA elements;</li>
<li><code>plot(ourModel, which=12)</code> - plot of the components of the model. In the case of ETS, it will show the time series decomposition based on the model.</li>
</ul>
<p>And here are four default plots for the model that we estimated earlier:</p>
<pre class="decode">par(mfcol=c(2,2))
plot(adamETSBJ)</pre>
<div id="attachment_3695" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJPlots.png&amp;nocache=1"><img decoding="async" aria-describedby="caption-attachment-3695" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJPlots-300x210.png&amp;nocache=1" alt="Diagnostic plots for the estimated model" width="300" height="210" class="size-medium wp-image-3695" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJPlots-300x210.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJPlots-768x538.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJPlots.png&amp;nocache=1 1000w" sizes="(max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-3695" class="wp-caption-text">Diagnostic plots for the estimated model</p></div>
<p>Based on the plot above, we can conclude that the model fits the data fine and does not exhibit apparent heteroscedasticity, but there are several potential outliers, which could be explored to improve the model. Outlier detection is done via the <code>outlierdummy()</code> method, the generic of which is implemented in the <code>greybox</code> package.</p>
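<p>As a sketch (the <code>level</code> value here is just an illustration), potential outliers for our model can be extracted like this:</p>
<pre class="decode"># Flag observations whose residuals lie outside the 99.9% bounds
adamETSBJOutliers <- outlierdummy(adamETSBJ, level=0.999)
adamETSBJOutliers$id  # ids of the flagged observations</pre>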
<h3>Other useful methods</h3>
<p>There are many other methods for extracting information from the model. I sometimes use them to simplify my coding routine. Here they are:</p>
<ul>
<li><code>lags()</code> - returns lags of the model. Especially useful if you fit a multiple seasonal model;</li>
<li><code>orders()</code> - the vector of orders of the model. Mainly useful in case of ARIMA, which can have multiple seasonalities and p,d,q,P,D,Q orders;</li>
<li><code>modelType()</code> - the type of the model. For the model fitted above, it will return "AAdN". Can be useful for refitting a similar model on new data;</li>
<li><code>modelName()</code> - the name of the model. For the model fitted above, it will return "ETS(AAdN)";</li>
<li><code>nobs()</code>, <code>nparam()</code>, <code>nvariate()</code> - the number of in-sample observations, the number of all estimated parameters and the number of time series used in the model, respectively. The latter is developed mainly for multivariate models, such as VAR and VETS (e.g. the <code>legion</code> package in R);</li>
<li><code>logLik()</code> - extracts the log-likelihood of the model;</li>
<li><code>AIC()</code>, <code>AICc()</code>, <code>BIC()</code>, <code>BICc()</code> - extract respective information criteria;</li>
<li><code>sigma()</code> - returns the standard error of the residuals.</li>
</ul>
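<p>As a quick sketch, here is how these extractors look for the <code>adamETSBJ</code> model estimated earlier:</p>
<pre class="decode">lags(adamETSBJ)       # lags of the model
orders(adamETSBJ)     # orders of the model (non-trivial for ARIMA)
modelType(adamETSBJ)  # e.g. "AAdN"
modelName(adamETSBJ)  # e.g. "ETS(AAdN)"
nobs(adamETSBJ)       # in-sample number of observations
nparam(adamETSBJ)     # number of estimated parameters
logLik(adamETSBJ)     # log-likelihood
AICc(adamETSBJ)       # corrected Akaike Information Criterion
sigma(adamETSBJ)      # standard error of the residuals</pre>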
<h3>More specialised methods</h3>
<p>One of the methods that can be useful for scenario analysis and artificial data generation is <code>simulate()</code>. It takes the structure and parameters of the estimated model and uses them to generate a time series similar to the original one. This is discussed in <a href="https://openforecast.org/adam/ADAMUncertaintySimulation.html">Section 16.1</a> of the ADAM monograph.</p>
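<p>A minimal sketch of how this can look (the variable name is mine):</p>
<pre class="decode"># Generate a new series with the structure and parameters
# of the estimated model
adamETSBJNew <- simulate(adamETSBJ)
plot(adamETSBJNew)</pre>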
<p>Furthermore, <code>smooth</code> implements the scale model, discussed in <a href="https://openforecast.org/adam/ADAMscaleModel.html">Chapter 17</a>, which allows modelling the time-varying scale of the distribution. This is done via the <code>sm()</code> method (the generic is introduced in the <code>greybox</code> package), the output of which can then be merged with the original model via the <code>implant()</code> method.</p>
<p>For the same model that we used earlier, the scale model can be estimated this way:</p>
<pre class="decode">adamETSBJSM <- sm(adamETSBJ)</pre>
<p>This is how it looks:</p>
<pre class="decode">plot(adamETSBJSM, 7)</pre>
<div id="attachment_3707" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJSM.png&amp;nocache=1"><img decoding="async" aria-describedby="caption-attachment-3707" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJSM-300x210.png&amp;nocache=1" alt="Scale model for the ADAM ETS" width="300" height="210" class="size-medium wp-image-3707" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJSM-300x210.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJSM-768x538.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJSM.png&amp;nocache=1 1000w" sizes="(max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-3707" class="wp-caption-text">Scale model for the ADAM ETS</p></div>
<p>In the plot above, the y-axis shows the squared residuals. The large increase in the error in the holdout sample is expected, because that part corresponds to forecast errors rather than residuals; it is added to the plot for completeness.</p>
<p>To use the scale model in forecasting, we should implant it into the location model, which can be done using the following command:</p>
<pre class="decode">adamETSBJFull <- implant(location=adamETSBJ, scale=adamETSBJSM)</pre>
<p>The resulting model will have fewer degrees of freedom (because the scale model estimated two parameters), but its prediction interval will now take the scale model into account and will differ from the original one. The variance is now time varying, based on the more recent information, instead of being averaged across the whole series. In our case, the forecast variance is lower than the one we would obtain from the adamETSBJ model, which leads to a narrower prediction interval (you can produce intervals for both models and compare):</p>
<pre class="decode">forecast(adamETSBJFull, h=10, interval="prediction") |> plot()</pre>
<div id="attachment_3708" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJFull.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3708" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJFull-300x210.png&amp;nocache=1" alt="Forecast from the full ADAM, containing both location and scale parts" width="300" height="210" class="size-medium wp-image-3708" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJFull-300x210.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJFull-768x538.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/10/adamETSBJFull.png&amp;nocache=1 1000w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-3708" class="wp-caption-text">Forecast from the full ADAM, containing both location and scale parts</p></div>
<h3>Conclusions</h3>
<p>The methods discussed above give some flexibility in how to model things and which tools to use. I hope this makes your life easier and that you won&#8217;t need to spend time reading the source code, but can instead focus on <a href="https://openforecast.org/adam/">forecasting and analytics with ADAM</a>.</p>
<p>Message <a href="https://openforecast.org/2024/10/10/methods-for-the-smooth-functions-in-r/">Methods for the smooth functions in R</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2024/10/10/methods-for-the-smooth-functions-in-r/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>iETS: State space model for intermittent demand forecasting</title>
		<link>https://openforecast.org/2023/09/08/iets-state-space-model-for-intermittent-demand-forecasting/</link>
					<comments>https://openforecast.org/2023/09/08/iets-state-space-model-for-intermittent-demand-forecasting/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Fri, 08 Sep 2023 09:30:40 +0000</pubDate>
				<category><![CDATA[adam()]]></category>
		<category><![CDATA[ETS]]></category>
		<category><![CDATA[Package smooth for R]]></category>
		<category><![CDATA[Papers]]></category>
		<category><![CDATA[R]]></category>
		<category><![CDATA[ADAM]]></category>
		<category><![CDATA[intermittent demand]]></category>
		<category><![CDATA[papers]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=3200</guid>

					<description><![CDATA[<p>Authors: Ivan Svetunkov, John E. Boylan Journal: International Journal of Production Economics Abstract: Inventory decisions relating to items that are demanded intermittently are particularly challenging. Decisions relating to termination of sales of product often rely on point estimates of the mean demand, whereas replenishment decisions depend on quantiles from interval estimates. It is in this [&#8230;]</p>
<p>Message <a href="https://openforecast.org/2023/09/08/iets-state-space-model-for-intermittent-demand-forecasting/">iETS: State space model for intermittent demand forecasting</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><strong>Authors</strong>: Ivan Svetunkov, <a href="/en/2023/07/21/john-e-boylan/">John E. Boylan</a></p>
<p><strong>Journal</strong>: <a href="https://www.sciencedirect.com/journal/international-journal-of-production-economics">International Journal of Production Economics</a></p>
<p><strong>Abstract</strong>: Inventory decisions relating to items that are demanded intermittently are particularly challenging. Decisions relating to termination of sales of product often rely on point estimates of the mean demand, whereas replenishment decisions depend on quantiles from interval estimates. It is in this context that modelling intermittent demand becomes an important task. In previous research, this has been addressed by generalised linear models or integer-valued ARMA models, while the development of models in state space framework has had mixed success. In this paper, we propose a general state space model that takes intermittence of data into account, extending the taxonomy of single source of error state space models. We show that this model has a connection with conventional non-intermittent state space models used in inventory planning. Certain forms of it may be estimated by Croston’s and Teunter-Syntetos-Babai (TSB) forecasting methods. We discuss properties of the proposed models and show how a selection can be made between them in the proposed framework. We then conduct a simulation experiment, empirically evaluating the inventory implications.</p>
<p><strong>DOI</strong>: <a href="https://doi.org/10.1016/j.ijpe.2023.109013">10.1016/j.ijpe.2023.109013</a>.</p>
<p><a href="http://dx.doi.org/10.13140/RG.2.2.35897.06242">Working paper</a>.</p>
<h1>About the paper</h1>
<p><strong>DISCLAIMER</strong>: The models in this paper are also discussed in detail in the <a href="https://openforecast.org/adam/">ADAM monograph</a> (<a href="https://openforecast.org/adam/ADAMIntermittent.html">Chapter 13</a>) with some examples going beyond what is discussed in the paper (e.g. models with trends).</p>
<p>What is &#8220;intermittent demand&#8221;? It is demand that occurs at an irregular frequency (i.e. at random). Note that according to this definition, intermittent demand does not need to be count data &#8211; it is a wider term than that. For example, electricity demand can be intermittent, but it is definitely not count. The definition above means that we do not necessarily know when specifically we will sell our product. From the modelling point of view, this means that we need to take into account two elements of uncertainty instead of just one:</p>
<ol>
<li>How much people will buy;</li>
<li>When they will buy.</li>
</ol>
<p>(1) is familiar to many demand planners and data scientists: we do not know specifically how much our customers will buy in the future, but we can get an estimate of the expected demand (the mean value via a point forecast) and an idea of the uncertainty around it (e.g. produce prediction intervals or estimate the demand distribution). (2) is less obvious: there may be some periods when nobody buys our product, then periods when we sell some, followed by no sales again. In that case we can encode the &#8220;dry&#8221; periods with zeroes and the periods with demand as ones, and end up with a time series like this (this idea was briefly discussed in <a href="/en/2020/01/13/what-about-all-those-zeroes-measuring-performance-of-models-on-intermittent-demand/">this</a> and <a href="/en/2018/09/18/smooth-package-for-r-intermittent-state-space-model-part-i-introducing-the-model/">this</a> post):</p>
<div id="attachment_3230" style="width: 610px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/IntermittentDemandOccurrence.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3230" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/IntermittentDemandOccurrence.png&amp;nocache=1" alt="An example of the occurrence part of an intermittent demand" width="600" height="350" class="size-full wp-image-3230" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/IntermittentDemandOccurrence.png&amp;nocache=1 1200w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/IntermittentDemandOccurrence-300x175.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/IntermittentDemandOccurrence-1024x597.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/IntermittentDemandOccurrence-768x448.png&amp;nocache=1 768w" sizes="auto, (max-width: 600px) 100vw, 600px" /></a><p id="caption-attachment-3230" class="wp-caption-text">An example of the occurrence part of an intermittent demand</p></div>
<p>The plot above visualises the demand occurrence, with zeroes corresponding to &#8220;no demand&#8221; and ones corresponding to some demand. In general, it is challenging to predict when specifically the &#8220;ones&#8221; will happen, but in the case above it seems that the frequency of demand increases over time, implying that it may be becoming regular. In mathematical terms, we could say that the probability of occurrence increases over time: at the end of the series we won&#8217;t necessarily sell the product, but the chance of selling it is much higher than at the beginning. The original time series looks like this:</p>
<div id="attachment_3231" style="width: 610px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/IntermittentDemandOverall.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3231" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/IntermittentDemandOverall.png&amp;nocache=1" alt="An example of an intermittent demand" width="600" height="350" class="size-full wp-image-3231" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/IntermittentDemandOverall.png&amp;nocache=1 1200w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/IntermittentDemandOverall-300x175.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/IntermittentDemandOverall-1024x597.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/IntermittentDemandOverall-768x448.png&amp;nocache=1 768w" sizes="auto, (max-width: 600px) 100vw, 600px" /></a><p id="caption-attachment-3231" class="wp-caption-text">An example of an intermittent demand</p></div>
<p>It shows that there is indeed an increase in the frequency of sales together with the amount sold, and that the product seems to be becoming more popular, moving from the intermittent to the regular demand domain.</p>
<p>In general, forecasting intermittent demand is a challenging task, but there are many existing approaches that can be used in this case. However, they are all detached from the conventional ones that are used for regular demand (such as ETS or ARIMA). What people usually do in practice is first categorise the data into regular and intermittent and then apply specific approaches to each category (e.g. ETS/ARIMA for the regular demand, and <a href="https://doi.org/10.2307/3007885">Croston</a>&#8217;s method or <a href="https://doi.org/10.1016/j.ejor.2011.05.018">TSB</a> for the intermittent one).</p>
<p>John Boylan and I developed a statistical model that unites the two worlds &#8211; you no longer need to decide whether the data is intermittent or not, you can just use one model in an automated fashion &#8211; it will take care of intermittence (if there is one). It relies fundamentally on the classical Croston&#8217;s equation:<br />
\begin{equation} \label{eq:general}<br />
	y_t = o_t z_t ,<br />
\end{equation}<br />
where \(y_t\) is the observed value at time \(t\), \(o_t\) is the binary occurrence variable and \(z_t\) is the demand sizes variable. Trying to derive the statistical model underlying Croston&#8217;s method, <a href="https://doi.org/10.1016/S0377-2217(01)00231-4">Snyder (2002)</a> and <a href="https://doi.org/10.1002/for.963">Shenstone &#038; Hyndman (2005)</a> used models based on \eqref{eq:general}, but instead of plugging a multiplicative ETS into \(z_t\), they got stuck with the idea of a logarithmic transformation of demand sizes and/or using count distributions for them. John and I looked into this equation again and decided that we could model both demand sizes and demand occurrence using a pair of <a href="https://openforecast.org/adam/ADAMETSPureMultiplicativeChapter.html">pure multiplicative ETS models</a>. In this post, I will focus on ETS(M,N,N) as the simplest model, but more complicated ones (with trend and/or explanatory variables) can be used as well without any change in the logic. So, for the demand sizes we will have:<br />
\begin{equation}<br />
	\begin{aligned}<br />
		&#038; z_t = l_{t-1} (1 + \epsilon_t) \\<br />
		&#038; l_t = l_{t-1} (1 + \alpha \epsilon_t)<br />
	\end{aligned}<br />
 \label{eq:demandSizes}<br />
\end{equation}<br />
where \(l_t\) is the level of the series, \(\alpha\) is the smoothing parameter and \(1 + \epsilon_t \) is the error term that follows some positive distribution (the options we considered in the paper are the Log-Normal, Gamma and Inverse Gaussian). The demand sizes part is relatively straightforward: you just apply the conventional pure multiplicative ETS model with a positive distribution (which makes \(z_t\) always positive) and that&#8217;s it. However, the occurrence part is more complicated.</p>
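<p>To make this recursion more tangible, here is a small R sketch that simulates demand sizes from the ETS(M,N,N) model above with a Gamma-distributed \(1 + \epsilon_t\). The values of the smoothing parameter, the initial level and the shape are arbitrary, chosen purely for illustration:</p>
<pre class="decode">set.seed(41)
obs <- 100
alpha <- 0.1
level <- 5
shape <- 20
z <- vector("numeric", obs)
for(t in 1:obs){
    # 1 + epsilon_t follows Gamma distribution with expectation of one
    errorTerm <- rgamma(1, shape=shape, scale=1/shape)
    # z_t = l_{t-1} (1 + epsilon_t)
    z[t] <- level * errorTerm
    # l_t = l_{t-1} (1 + alpha epsilon_t)
    level <- level * (1 + alpha * (errorTerm - 1))
}
plot(z, type="l")</pre>
<p>By construction, \(z_t\) stays positive, which is exactly why a pure multiplicative ETS with a positive distribution is suitable for the demand sizes.</p>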
<p>Given that the occurrence variable is random, we should model the probability of occurrence. We proposed to assume that \(o_t \sim \mathrm{Bernoulli}(p_t) \) (a logical assumption, made in many other papers), meaning that the probability of occurrence changes over time. In turn, the changing probability can be modelled using one of several approaches that we proposed. For example, it can be modelled via the so-called &#8220;inverse odds ratio&#8221; model with ETS(M,N,N), formulated as:<br />
\begin{equation}<br />
	\begin{aligned}<br />
		&#038; p_t = \frac{1}{1 + \mu_{b,t}} \\<br />
		&#038; \mu_{b,t} = l_{b,t-1} \\<br />
		&#038; l_{b,t} = l_{b,t-1} (1 + \alpha_b \epsilon_{b,t})<br />
	\end{aligned}<br />
 \label{eq:demandOccurrenceOdds}<br />
\end{equation}<br />
where \(\mu_{b,t}\) is the one step ahead expectation of the underlying model, \(l_{b,t}\) is the latent level, \(\alpha_b\) is the smoothing parameter of the model, and \(1+\epsilon_{b,t}\) is the positively distributed error term (with expectation equal to one and an unknown distribution, which we actually do not care about). The main feature of the inverse odds ratio occurrence model is that it should be effective in cases when demand is building up (moving from the intermittent to the regular pattern, without zeroes). In our paper we show how such a model can be estimated and also show that Croston&#8217;s method can be used for the estimation of this model when the demand occurrence does not change (substantially) between the non-zero demands. So, this model can be considered the model underlying Croston&#8217;s method.</p>
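<p>In the <code>smooth</code> package, this occurrence model is available via the <code>oes()</code> function. A minimal sketch (assuming that <code>y</code> is an intermittent demand time series object):</p>
<pre class="decode">library(smooth)
# Inverse odds ratio occurrence model with ETS(M,N,N) for the probability
oesModelIOR <- oes(y, model="MNN", occurrence="inverse-odds-ratio", h=5)
plot(oesModelIOR)</pre>
<p>The plot shows the fitted probability of occurrence over time together with its forecast.</p>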
<p>Uniting the equations \eqref{eq:general}, \eqref{eq:demandSizes} and \eqref{eq:demandOccurrenceOdds}, we get the iETS(M,N,N)\(_\mathrm{I}\)(M,N,N) model, where the letters in the first brackets correspond to the demand sizes part, the subscript &#8220;I&#8221; tells us that we have the &#8220;inverse odds ratio&#8221; model for the occurrence, and the second brackets show what ETS model was used in the demand occurrence model. The paper explains in detail how this model can be built and estimated.</p>
<p>In the very same paper we discuss other potential models for demand occurrence (more suitable for demand obsolescence or a fixed probability of occurrence) and, in fact, in my opinion this part is the main contribution of the paper &#8211; we have looked into something no one had done before: how to model demand occurrence using ETS. Having so many options, we might need to decide which to use in an automated fashion. Luckily, given that these models are formulated in one and the same framework, we can use information criteria to select the most suitable one for the data. Furthermore, when all probabilities of occurrence are equal to one, the model \eqref{eq:general} together with \eqref{eq:demandSizes} transforms into the conventional ETS(M,N,N) model. This also means that the regular ETS model can be compared with the iETS directly using information criteria to decide whether the occurrence part is needed or not. So, we end up with a relatively simple framework that can be used for any type of demand without the need for categorisation.</p>
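<p>In R, this comparison boils down to fitting the models with and without the occurrence part and checking their information criteria. A small sketch (assuming that <code>y</code> is the series of interest):</p>
<pre class="decode">library(smooth)
# Conventional ETS(M,N,N) vs iETS with automatically selected occurrence model
etsModel <- adam(y, "MNN")
iETSModel <- adam(y, "MNN", occurrence="auto")
# The model with the lower AICc is preferred
c(AICc(etsModel), AICc(iETSModel))</pre>
<p>If the iETS model has the lower AICc, the occurrence part is worth keeping; otherwise the conventional ETS suffices.</p>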
<p>As a small side note, we also showed in the paper that the estimates of smoothing parameters for the demand sizes in iETS will always be positively biased (being higher than needed). In fact, this bias appears in any intermittent demand model that assumes that the potential demand sizes change between the non-zero observations (reasonable assumption for any modelling approach). In a way, this finding also applies to both Croston&#8217;s and TSB methods and agrees with similar finding by <a href="https://doi.org/10.1016/j.ijpe.2014.06.007">Kourentzes (2014)</a>.</p>
<h2>Example in R</h2>
<p>All the models from the paper are implemented in the <code>adam()</code> function from the <code>smooth</code> package in R (with the <code>oes()</code> function taking care of the occurrence, see details <a href="https://openforecast.org/adam/ADAMIntermittent.html">here</a> and <a href="https://cran.r-project.org/web/packages/smooth/vignettes/oes.html">here</a>). For demonstration purposes (and for fun), we will consider an artificial example of demand obsolescence, modelled via the &#8220;Direct probability&#8221; iETS model (it underlies the TSB method):</p>
<pre class="decode">set.seed(7)
c(rpois(10,3),rpois(10,2),rpois(10,1),rpois(10,0.5),rpois(10,0.1)) |>
    ts(frequency=12) -> y</pre>
<p>My randomly generated time series looks like this:</p>
<div id="attachment_3247" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsolescenceExample.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3247" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsolescenceExample-300x175.png&amp;nocache=1" alt="Demand becoming obsolete" width="300" height="175" class="size-medium wp-image-3247" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsolescenceExample-300x175.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsolescenceExample-1024x597.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsolescenceExample-768x448.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsolescenceExample.png&amp;nocache=1 1200w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-3247" class="wp-caption-text">Demand becoming obsolete</p></div>
<p>In practice, in the example above, we can be interested in deciding, whether to discontinue the product (to save money on stocking it) or not. To model and forecast the demand above, we can use the following code in R:</p>
<pre class="decode">library(smooth)
iETSModel <- adam(y, "YYN", occurrence="direct", h=5, holdout=TRUE)</pre>
<p>The "YYN" above tells the function to select the best pure multiplicative ETS model based on the information criterion (AICc by default, see discussion in <a href="https://openforecast.org/adam/ETSSelection.html">Section 15.1</a> of the ADAM monograph), while the "occurrence" argument specifies which of the demand occurrence models to build. By default, the function will use the same model for the occurrence probability as the one selected for the demand sizes. So, for example, if we end up with ETS(M,M,N) for the demand sizes, the function will use ETS(M,M,N) for the probability of occurrence. If you want to change this, you would need to use the <code>oes()</code> function and specify the model there (see examples in <a href="https://openforecast.org/adam/IntermittentExample.html">Section 13.4</a> of the ADAM monograph). Finally, I've asked the function to produce forecasts for 5 steps ahead and to keep the last 5 observations in the holdout sample. I ended up with the following model:</p>
<pre class="decode">summary(iETSModel)</pre>
<pre>Model estimated using adam() function: iETS(MMN)
Response variable: y
Occurrence model type: Direct
Distribution used in the estimation: 
Mixture of Bernoulli and Gamma
Loss function type: likelihood; Loss function value: 71.0549
Coefficients:
      Estimate Std. Error Lower 2.5% Upper 97.5%  
alpha   0.1049     0.0925     0.0000      0.2903  
beta    0.1049     0.0139     0.0767      0.1049 *
level   4.3722     1.1801     1.9789      6.7381 *
trend   0.9517     0.0582     0.8336      1.0685 *

Error standard deviation: 1.0548
Sample size: 45
Number of estimated parameters: 9
Number of degrees of freedom: 36
Information criteria:
     AIC     AICc      BIC     BICc 
202.6527 204.1911 218.9126 206.6142 </pre>
<p>As we see from the output above, the function has selected the iETS(M,M,N) model for the data. The line "Mixture of Bernoulli and Gamma" tells us that the Bernoulli distribution was used for the demand occurrence (this is the only option), while the Gamma distribution was used for the demand sizes (this is the default option, but you can change this via the <code>distribution</code> parameter). We can then produce forecasts from this model:</p>
<pre class="decode">forecast(iETSModel, h=5, interval="prediction", side="upper") |>
    plot()</pre>
<p>In the code above, I have asked the function to generate prediction intervals (by default, for the pure multiplicative models, the function <a href="https://openforecast.org/adam/ADAMForecastingPI.html#ADAMForecastingPISimulations">uses simulations</a>) and to produce only the upper bound of the interval. The latter is motivated by the idea that in the case of intermittent demand, the lower bound is typically not useful for decision making: we know that the demand cannot be below zero, and our stocking decisions are typically made based on specific quantiles (e.g. for the 95% confidence level). Here is the plot that I get after running the code above:</p>
<div id="attachment_3250" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsoleteExampleForecast.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3250" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsoleteExampleForecast-300x175.png&amp;nocache=1" alt="Point and interval forecasts for the demand becoming obsolete" width="300" height="175" class="size-medium wp-image-3250" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsoleteExampleForecast-300x175.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsoleteExampleForecast-1024x597.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsoleteExampleForecast-768x448.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsoleteExampleForecast.png&amp;nocache=1 1200w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-3250" class="wp-caption-text">Point and interval forecasts for the demand becoming obsolete</p></div>
<p>While the last observation in the holdout was not included in the prediction interval, the dynamics captured by the model are correct. The question that we should ask ourselves in this example is: what decision can be made based on the model? If you want to decide whether to stock the product or not, you can look at the forecast of the probability of occurrence to see how it changes over time and decide whether to discontinue the product:</p>
<pre class="decode">forecast(iETSModel$occurrence, h=5) |> plot()</pre>
<div id="attachment_3254" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsoleteExampleOccurrence.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3254" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsoleteExampleOccurrence-300x175.png&amp;nocache=1" alt="Forecast of the probability of occurrence for the demand becoming obsolete" width="300" height="175" class="size-medium wp-image-3254" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsoleteExampleOccurrence-300x175.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsoleteExampleOccurrence-1024x597.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsoleteExampleOccurrence-768x448.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/09/ObsoleteExampleOccurrence.png&amp;nocache=1 1200w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-3254" class="wp-caption-text">Forecast of the probability of occurrence for the demand becoming obsolete</p></div>
<p>In our case, the probability reaches roughly 0.2 over the next 5 months (i.e. we might expect a sale once every 5 months). If we think that this is too low, then we should discontinue the product. Otherwise, if we decide to continue selling the product, then it makes more sense to generate the desired quantile of the cumulative demand over the lead time. In the case of the <code>adam()</code> function, this can be done by adding <code>cumulative=TRUE</code> to the <code>forecast()</code> call:</p>
<pre class="decode">forecast(iETSModel, h=5, interval="prediction", side="upper", cumulative=TRUE)</pre>
<p>after which we get:</p>
<pre>      Point forecast Upper bound (95%)
Oct 4      0.3055742          1.208207</pre>
<p>From the decision point of view, if we deal with count demand, the value 1.208207 complicates things. Luckily, as we showed in our paper, we can round the value up to get something meaningful, preserving the properties of the model. This means that, based on the estimated model, we need to have two items in stock to satisfy the demand over the next 5 months with a confidence level of 95%.</p>
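<p>The rounding itself is a one-liner in R, applied to the upper bound of the cumulative forecast produced above (object names follow the earlier code):</p>
<pre class="decode">iETSCumulative <- forecast(iETSModel, h=5, interval="prediction",
                           side="upper", cumulative=TRUE)
# Round the 95% quantile up to get the number of units to stock
ceiling(iETSCumulative$upper)</pre>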
<h2>Conclusions</h2>
<p>This is just a demonstration of what can be done with the proposed iETS model, but there are many more things one can do. For example, this approach allows capturing multiplicative seasonality in data that has zeroes (as long as seasonal indices can be estimated somehow). John and I started thinking in this direction, and we even did some work together with <a href="https://www.inesctec.pt/en/people/patricia-ramos">Patricia Ramos</a> (our colleague from INESC TEC), but given the hard time that our paper was given by the reviewers in the IJF, we had to postpone this research. I also used the ideas explained in this post in the <a href="/en/2023/05/09/probabilistic-forecasting-of-hourly-emergency-department-arrivals/">paper on ED forecasting</a> (written together with Bahman and Jethro). In that paper, I used a seasonal model with the "direct" occurrence part, which took care of zeroes (without bothering with modelling them properly) and allowed me to apply a multiple seasonal multiplicative ETS model with explanatory variables. Anyway, the proposed approach is flexible enough to be used in a variety of contexts, and I think it will have many applications in real life.</p>
<h2>P.S.: Story of the paper</h2>
<p>I've written a separate long post, explaining the revision process of the paper and how it got to the acceptance stage at the IJPE, but then I realised that it is too long and boring. Besides, John would not have approved of the post and would say that I am sharing unnecessary details, creating potential exasperation for fellow forecasters who reviewed the paper. So, I have decided not to publish that post, and instead just add a short subsection. Here it is.</p>
<p>We started working on the paper in March 2016 and submitted it to the International Journal of Forecasting (IJF) in January 2017. It went through <strong>four</strong> rounds of revision, with the second reviewer being very critical and unsupportive throughout, driving the paper in the wrong direction and burying it in discussions of petty statistical details. We rewrote the paper several times and I rewrote the R code of the function a few times. In the end, the Associate Editor (AE) of the IJF (who completely forgot about our paper for several months) decided not to send the paper to the reviewers again, completely ignored our responses to the reviewers, did not provide any major feedback and wrote an insulting response that ended with the phrase "I could go on, but I’m out of patience with the authors and their paper". The paper was rejected from the IJF in 2019, which set me back in my academic career. This, together with the constant rejections of my <a href="/en/2022/08/02/the-long-and-winding-road-the-story-of-complex-exponential-smoothing/">Complex Exponential Smoothing</a> paper and the actions of a colleague of mine who decided to cut all ties with me in Summer 2019, hit my self-esteem and caused serious damage to my professional life. I thought of quitting academia and either starting to work in business or doing something different with my life, not related to forecasting at all. I stayed mainly because of all the support that John Boylan, Robert Fildes, Nikos Kourentzes and my wife Anna Sroginis provided me. I recovered from that hit only in 2022, when my <a href="https://openforecast.org/en/2022/08/02/complex-exponential-smoothing/">Complex Exponential Smoothing</a> paper got accepted and things finally started going well.
After that, John and I rewrote the paper again, split it into two &#8211; "iETS" and "Multiplicative ETS" (under revision in the IMA Journal of Management Mathematics) &#8211; and submitted the former to the International Journal of Production Economics, where, after one round of revision, it got accepted. Unfortunately, we never got to celebrate the success with John because <a href="/en/2023/07/21/john-e-boylan/">he passed away</a>.</p>
<p>The moral of this story is that publishing in academia can be very tough and unfair. Sometimes, you get very negative feedback from the people you least expect it from. People that you respect and think very highly of might not understand what you are proposing and be very unsupportive. We actually knew who the reviewers and the AE of our IJF paper were - they are esteemed academics in the field of forecasting. And while I still think highly of their research and contributions to the field, the way the second reviewer and the AE handled the review has damaged my personal respect for them - I never expected them to be so narrow-minded...</p>
<p>Message <a href="https://openforecast.org/2023/09/08/iets-state-space-model-for-intermittent-demand-forecasting/">iETS: State space model for intermittent demand forecasting</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2023/09/08/iets-state-space-model-for-intermittent-demand-forecasting/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Story of &#8220;Probabilistic forecasting of hourly emergency department arrivals&#8221;</title>
		<link>https://openforecast.org/2023/05/10/story-of-probabilistic-forecasting-of-hourly-emergency-department-arrivals/</link>
					<comments>https://openforecast.org/2023/05/10/story-of-probabilistic-forecasting-of-hourly-emergency-department-arrivals/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Wed, 10 May 2023 20:47:27 +0000</pubDate>
				<category><![CDATA[adam()]]></category>
		<category><![CDATA[Applied forecasting]]></category>
		<category><![CDATA[ETS]]></category>
		<category><![CDATA[R]]></category>
		<category><![CDATA[Regression]]></category>
		<category><![CDATA[Stories]]></category>
		<category><![CDATA[Univariate models]]></category>
		<category><![CDATA[ADAM]]></category>
		<category><![CDATA[papers]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=3092</guid>

					<description><![CDATA[<p>The paper Back in 2020, when we were all siting in the COVID lockdown, I had a call with Bahman Rostami-Tabar to discuss one of our projects. He told me that he had an hourly data of an Emergency Department from a hospital in Wales, and suggested writing a paper for a healthcare audience to [&#8230;]</p>
<p>Message <a href="https://openforecast.org/2023/05/10/story-of-probabilistic-forecasting-of-hourly-emergency-department-arrivals/">Story of &#8220;Probabilistic forecasting of hourly emergency department arrivals&#8221;</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><a href="/en/2023/05/09/probabilistic-forecasting-of-hourly-emergency-department-arrivals/">The paper</a></p>
<p>Back in 2020, when we were all sitting in the COVID lockdown, I had a call with <a href="https://www.bahmanrt.com/">Bahman Rostami-Tabar</a> to discuss one of our projects. He told me that he had hourly data from the Emergency Department of a hospital in Wales, and suggested writing a paper for a healthcare audience to show them how forecasting can be done properly in this setting. I noted that we did not have experience working with high-frequency data, and that it would be good to have someone with relevant expertise. I knew a guy who worked in energy forecasting, <a href="http://www.jethrobrowell.com/">Jethro Browell</a> (we are mates in the <a href="https://forecasters.org/programs/communities/united-kingdom-chapter/">IIF UK Chapter</a>), so we had a chat between the three of us and formed a team to figure out better ways of forecasting ED arrivals.</p>
<p>We agreed that each one of us would try their own models. Bahman wanted to try TBATS, Prophet and models from the <a href="https://github.com/tidyverts/fasster">fasster</a> package in R (spoiler: the latter ones produced very poor forecasts on our data, so we removed them from the paper). Jethro had a pool of <a href="https://www.gamlss.com/" rel="noopener" target="_blank">GAMLSS</a> models with different distributions, including Poisson and truncated Normal. He also tried a Gradient Boosting Machine (GBM). I decided to test ETS, Poisson Regression and <a href="https://openforecast.org/adam/" rel="noopener" target="_blank">ADAM</a>. We agreed that we would measure the performance of the models not only in terms of point forecasts (using RMSE), but also in terms of quantiles (pinball and quantile bias) and computational time. It took us a year to do all the experiments and another one to find a journal that would not desk-reject our paper because the editor thought that it was not relevant (even though they have published similar papers in the past). It was rejected from the Annals of Emergency Medicine, Emergency Medicine Journal, American Journal of Emergency Medicine and Journal of Medical Systems. In the end, we submitted to Health Systems, and after a short revision the paper got accepted. So, there is a happy ending to this story.</p>
<p>In the paper itself, we found that overall, in terms of quantile bias (calibration of models), GAMLSS with the truncated Normal distribution and ADAM performed better than the other approaches, with the former also doing well in terms of the pinball loss and the latter doing well in terms of point forecasts (RMSE). Note that the count data models did worse than the continuous ones, although one would expect the Poisson distribution to be appropriate for ED arrivals.</p>
<p>I don&#8217;t want to explain the paper and its findings in detail in this post, but given my relation to ADAM, I have decided to briefly explain what I included in the model and how it was used. After all, this is the first paper that uses almost all the main features of ADAM and shows how powerful it can be if used correctly.</p>
<h3>Using ADAM in Emergency Department arrivals forecasting</h3>
<p><strong>Disclaimer</strong>: The explanation provided here relies on the content of my monograph &#8220;<a href="https://openforecast.org/adam/">Forecasting and Analytics with ADAM</a>&#8220;. In the paper, I ended up creating a quite complicated model that allowed capturing complex demand dynamics. In order to fully understand what I am discussing in this post, you might need to refer to the monograph.</p>
<div id="attachment_3117" style="width: 1210px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/05/EDArrivals-data.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3117" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/05/EDArrivals-data.png&amp;nocache=1" alt="Emergency Department Arrivals" width="1200" height="800" class="size-full wp-image-3117" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/05/EDArrivals-data.png&amp;nocache=1 1200w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/05/EDArrivals-data-300x200.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/05/EDArrivals-data-1024x683.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/05/EDArrivals-data-768x512.png&amp;nocache=1 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></a><p id="caption-attachment-3117" class="wp-caption-text">Emergency Department Arrivals. The plots were generated using <code>seasplot()</code> function from the <code>tsutils</code> package.</p></div>
<p>The figure above shows the data that we were dealing with together with several seasonal plots (generated using the <code>seasplot()</code> function from the <code>tsutils</code> package). As we see, the data exhibits hour of day, day of week and week of year seasonalities, although some of them are not very well pronounced. The data does not seem to have a strong trend, although there is a slow increase in the level. Based on this, I decided to use ETS(M,N,M) as the basis for modelling. However, if we want to capture all three seasonal patterns, then we need to fit a triple seasonal model, which requires too much computational time because of the estimation of all the seasonal indices. So, I decided to use a <a href="https://openforecast.org/adam/ADAMMultipleFrequencies.html">double-seasonal ETS(M,N,M)</a> instead, with hour of day and hour of week seasonalities, and to include <a href="https://openforecast.org/adam/ETSXMultipleSeasonality.html">dummy variables for the week of year seasonality</a>. The alternative to the week of year dummies would be an hour of year seasonal component, which would then require estimating 8760 seasonal indices, potentially overfitting the data. I argue that the week of year dummies provide sufficient flexibility, and there is no need to capture the detailed intra-yearly profile at a more granular level.</p>
<p>To make things more exciting, given that we deal with hourly data of a UK hospital, we had to deal with the issues of <a href="https://openforecast.org/adam/MultipleFrequenciesDSTandLeap.html">daylight saving and the leap year</a>. I know that many of us hate the idea of daylight saving, because we have to change our lifestyles twice a year just because of an old 18th century tradition. But in addition to being <a href="https://publichealth.jhu.edu/2023/7-things-to-know-about-daylight-saving-time#:~:text=Making%20the%20shift%20can%20increase,a%20professor%20in%20Mental%20Health.">bad for your health</a>, this nasty thing messes things up for my models, because once a year we get a 23-hour day and once a 25-hour one. Luckily, this is taken care of by <code>adam()</code>, which shifts the seasonal indices when the time change happens. All you need to do for this mechanism to work is provide an object with timestamps to the function (for example, a zoo object). As for the leap year, it becomes less important when we model the week of year seasonality instead of the day of year or hour of year one.</p>
<div id="attachment_3123" style="width: 1210px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/05/EDArrivals-data-daily.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3123" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/05/EDArrivals-data-daily.png&amp;nocache=1" alt="Emergency Department Daily Arrivals" width="1200" height="700" class="size-full wp-image-3123" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/05/EDArrivals-data-daily.png&amp;nocache=1 1200w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/05/EDArrivals-data-daily-300x175.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/05/EDArrivals-data-daily-1024x597.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/05/EDArrivals-data-daily-768x448.png&amp;nocache=1 768w" sizes="auto, (max-width: 1200px) 100vw, 1200px" /></a><p id="caption-attachment-3123" class="wp-caption-text">Emergency Department Daily Arrivals</p></div>
<p>Furthermore, as can be seen from the figure above, <a href="https://openforecast.org/adam/ADAMX.html">calendar events</a> play a crucial role in ED arrivals. For example, the Emergency Department demand over Christmas is typically lower than average (the drops in the figure above), but right after Christmas it tends to go up (with all the people who injured themselves during the festivities showing up in the hospital). So, these events needed to be taken into account by the model in the form of additional dummy variables together with their lags (the 24-hour lags of the original variables).</p>
<p>But that&#8217;s not all. If we want to fit a multiplicative seasonal model (which makes more sense than the additive one because the seasonal amplitude changes over the year), we need to do something with zeroes, which occur naturally in ED arrivals overnight (see the first figure in this post with seasonal plots). They do not necessarily happen at the same time of day, but the probability of having no demand tends to increase at night. This meant that I needed to introduce the <a href="https://openforecast.org/adam/ADAMIntermittent.html">occurrence part of the model</a> to take care of the zeroes. I used a very basic occurrence model called &#8220;<a href="https://openforecast.org/adam/ADAMOccurrence.html#oETSD">direct probability</a>&#8220;, because it is more sensitive to changes in demand occurrence, making the model more responsive. I did not use a seasonal demand occurrence model (and I don&#8217;t remember why), which is one of the limitations of ADAM used in this study.</p>
<p>Finally, given that we are dealing with low-volume data, a positive distribution needed to be used instead of the Normal one. I used the <a href="https://openforecast.org/adam/ADAMETSMultiplicativeDistributions.html">Gamma distribution</a> because it is better behaved than the Log Normal or the Inverse Gaussian, which tend to have much heavier tails. When exploring the data, I found that Gamma does better than the other two, probably because the ED arrivals have relatively slim tails.</p>
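<p>The claim about the tails can be illustrated with a small base-R comparison, matching the first two moments of the two distributions (the numbers here are arbitrary, not taken from the paper):</p>

```r
# Match mean = 10 and variance = 25 for Gamma and Log Normal
meanY <- 10; varY <- 25
shape <- meanY^2 / varY                  # Gamma shape = 4
scale <- varY / meanY                    # Gamma scale = 2.5
sdlog <- sqrt(log(1 + varY / meanY^2))   # Log Normal parameters
meanlog <- log(meanY) - sdlog^2 / 2

# The extreme upper quantile of the Log Normal is noticeably larger,
# i.e. its tail is heavier than the Gamma one
qGamma <- qgamma(0.999, shape = shape, scale = scale)
qLogN <- qlnorm(0.999, meanlog = meanlog, sdlog = sdlog)
```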
<p>So, the final ADAM included the following features:</p>
<ul>
<li>ETS(M,N,M) as the basis;</li>
<li>Double seasonality;</li>
<li>Week of year dummy variables;</li>
<li>Dummy variables for calendar events with their lags;</li>
<li>&#8220;Direct probability&#8221; occurrence model;</li>
<li>Gamma distribution for the residuals of the model.</li>
</ul>
<p>This model is summarised in equation (3) of <a href="/en/2023/05/09/probabilistic-forecasting-of-hourly-emergency-department-arrivals/">the paper</a>.</p>
<p>The model was <a href="https://openforecast.org/adam/ADAMInitialisation.html">initialised using backcasting</a>, because otherwise we would need to estimate too many initial values for the state vector. The estimation itself was done using <a href="https://openforecast.org/adam/ADAMETSEstimationLikelihood.html">likelihood</a>. In R, this corresponded to roughly the following lines of code:</p>
<pre class="decode">library(smooth)
oesModel <- oes(y, "MNN", occurrence="direct", h=48)
adamModelFirst <- adam(ourData, "MNM", lags=c(24,24*7), formula=y~x+xLag24+weekOfYear,
                       h=48, initial="backcasting",
                       occurrence=oesModel, distribution="dgamma")</pre>
<p>Here <code>x</code> was the categorical variable (factor in R) with all the main calendar events. However, even with backcasting, the estimation of such a big model took an hour and 25 minutes. Given that Bahman, Jethro and I had agreed to do a rolling origin evaluation, I decided to help the function with the estimation inside the loop by providing <a href="https://openforecast.org/adam/ADAMInitialisation.html#starting-optimisation-of-parameters">the initials to the optimiser</a> based on the very first estimated model. As a result, each estimation of ADAM in the rolling origin took 1.5 minutes. The code in the loop was modified to:</p>
<pre class="decode">adamParameters <- coef(adamModelFirst)
oesModel <- oes(y, "MNN", occurrence="direct", h=48)
adamModel <- adam(ourData, "MNM", lags=c(24,24*7), formula=y~x+xLag24+weekOfYear,
                  h=48, initial="backcasting",
                  occurrence=oesModel, distribution="dgamma",
                  B=adamParameters)</pre>
<p>Finally, we generated mean and quantile forecasts for 48 hours ahead. I used <a href="https://openforecast.org/adam/ADAMForecastingPI.html#semiparametric-intervals">semiparametric quantiles</a>, because I expected violations of some of the assumptions of the model (e.g. autocorrelated residuals). The respective R code is:</p>
<pre class="decode">testForecast <- forecast(adamModel, newdata=newdata, h=48,
                         interval="semiparametric", level=c(1:19/20), side="upper")</pre>
<p>Furthermore, given that the data is integer-valued (how many people visit the hospital each hour) and ADAM produces fractional quantiles (because of the Gamma distribution), I decided to see how it would perform if the quantiles were rounded up. This strategy is simple and might be sensible when a continuous model is used for forecasting count data (see the discussion in the paper). However, after running the experiment, the ADAM with rounded-up quantiles performed very similarly to the conventional one, so we decided not to include it in the paper.</p>
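<p>The rounding strategy itself is trivial; a hypothetical sketch (for integer-valued data, rounding a quantile up cannot reduce its coverage, because it can only shift the bound upwards):</p>

```r
# Hypothetical fractional quantile forecasts from a continuous model
quantileForecasts <- c(0.3, 1.7, 4.2, 9.0)

# Round up to the nearest integer to respect the count nature of the data
roundedQuantiles <- ceiling(quantileForecasts)
```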
<p>In the end, as stated earlier in this post, we concluded that in our experiment there were two well-performing approaches: GAMLSS with the Truncated Normal distribution (called "NOtr-2" in the paper) and ADAM in the form explained above. The popular TBATS, Prophet and Gradient Boosting Machine performed poorly compared to these two approaches. For the first two, this is because of the lack of explanatory variables and inappropriate distributional assumptions (normality). As for the GBM, this is probably due to the lack of a dynamic element in it (e.g. a changing level and seasonal components).</p>
<p>Concluding this post, as you can see, I managed to fit a decent model based on ADAM, which captured the main characteristics of the data. However, it took a bit of time to understand what features should be included, together with some experiments on the data. This case study shows that if you want to get a better model for your problem, you might need to dive into the problem and spend some time analysing what you have on hand, experimenting with different parameters of a model. ADAM provides the flexibility necessary for such experiments.</p>
<p>Message <a href="https://openforecast.org/2023/05/10/story-of-probabilistic-forecasting-of-hourly-emergency-department-arrivals/">Story of &#8220;Probabilistic forecasting of hourly emergency department arrivals&#8221;</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2023/05/10/story-of-probabilistic-forecasting-of-hourly-emergency-department-arrivals/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>smooth v3.2.0: what&#8217;s new?</title>
		<link>https://openforecast.org/2023/01/30/smooth-v3-2-0-what-s-new/</link>
					<comments>https://openforecast.org/2023/01/30/smooth-v3-2-0-what-s-new/#comments</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Mon, 30 Jan 2023 13:06:47 +0000</pubDate>
				<category><![CDATA[About es() function]]></category>
		<category><![CDATA[adam()]]></category>
		<category><![CDATA[ARIMA]]></category>
		<category><![CDATA[ETS]]></category>
		<category><![CDATA[Package smooth for R]]></category>
		<category><![CDATA[R]]></category>
		<category><![CDATA[Regression]]></category>
		<category><![CDATA[Univariate models]]></category>
		<category><![CDATA[ADAM]]></category>
		<category><![CDATA[smooth]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=3063</guid>

					<description><![CDATA[<p>smooth package has reached version 3.2.0 and is now on CRAN. While the version change from 3.1.7 to 3.2.0 looks small, this has introduced several substantial changes and represents a first step in moving to the new C++ code in the core of the functions. In this short post, I will outline the main new [&#8230;]</p>
<p>Message <a href="https://openforecast.org/2023/01/30/smooth-v3-2-0-what-s-new/">smooth v3.2.0: what&#8217;s new?</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
<content:encoded><![CDATA[<p>The smooth package has reached version 3.2.0 and is now <a href="https://cran.r-project.org/package=smooth">on CRAN</a>. While the version change from 3.1.7 to 3.2.0 looks small, this release introduces several substantial changes and represents a first step in moving to the new C++ code in the core of the functions. In this short post, I will outline the main new features of smooth 3.2.0.</p>
<p><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/smooth2.png&amp;nocache=1"><img loading="lazy" decoding="async" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/smooth2-300x218.png&amp;nocache=1" alt="" width="300" height="218" class="aligncenter size-medium wp-image-3065" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/smooth2-300x218.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/smooth2-1024x745.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/smooth2-768x559.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/smooth2-1536x1117.png&amp;nocache=1 1536w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/smooth2.png&amp;nocache=1 1650w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a></p>
<h3>New engines for ETS, MSARIMA and SMA</h3>
<p>The first and one of the most important changes is the new engine for ETS (Error-Trend-Seasonal exponential smoothing model), MSARIMA (Multiple Seasonal ARIMA) and SMA (Simple Moving Average), implemented respectively in the <code>es()</code>, <code>msarima()</code> and <code>sma()</code> functions. The new engine was developed for <code>adam()</code>, and the three models above can be considered special cases of it. You can read more about ETS in the ADAM monograph, starting from <a href="https://openforecast.org/adam/ETSConventional.html">Chapter 4</a>; MSARIMA is discussed in <a href="https://openforecast.org/adam/ADAMARIMA.html">Chapter 9</a>, while SMA is briefly discussed in <a href="https://openforecast.org/adam/simpleForecastingMethods.html#SMA">Subsection 3.3.3</a>.</p>
<p>The <code>es()</code> function now implements ETS close to the conventional form, assuming that the error term follows the normal distribution. It still supports explanatory variables (discussed in <a href="https://openforecast.org/adam/ADAMX.html">Chapter 10 of the ADAM monograph</a>) and advanced estimators (<a href="https://openforecast.org/adam/ADAMETSEstimation.html">Chapter 11</a>), and it has the same syntax as the previous version of the function, but now acts as a wrapper for <code>adam()</code>. This means that it is now faster, more accurate and requires less memory than it used to. <code>msarima()</code>, being a wrapper for <code>adam()</code> as well, is now also faster and more accurate than it used to be. In addition, both functions now support the methods that were developed for <code>adam()</code>, including <code>vcov()</code>, <code>confint()</code>, <code>summary()</code>, <code>rmultistep()</code>, <code>reapply()</code>, <code>plot()</code> and others. So, now you can do a more thorough analysis and improve the models using all these advanced instruments (see, for example, <a href="https://openforecast.org/adam/diagnostics.html">Chapter 14 of ADAM</a>).</p>
<p>The main reason why I moved the functions to the new engine was to clean up the code and remove the old chunks that were developed when I was only starting to learn C++. A side effect, as you see, is that the functions have now been improved in a variety of ways.</p>
<p>And to be on the safe side, the old versions of the functions are still available in <code>smooth</code> under the names <code>es_old()</code>, <code>msarima_old()</code> and <code>sma_old()</code>. They will be removed from the package if it ever reaches v4.0.0.</p>
<h3>New methods for ADAM</h3>
<p>There are two new methods for <code>adam()</code> that can be used in a variety of cases. The first one is <code>simulate()</code>, which will generate data based on the estimated ADAM, whatever the original model is (e.g. mixture of ETS, ARIMA and regression on the data with multiple frequencies). Here is how it can be used:</p>
<pre class="decode">adam(BJsales, "AAdN") |>
     simulate() |>
     plot()</pre>
<p>which will produce a plot similar to the following:</p>
<div id="attachment_3077" style="width: 650px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/adamSimulate.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3077" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/adamSimulate-1024x597.png&amp;nocache=1" alt="Simulated data based on adam() applied to Box-Jenkins sales data" width="640" height="373" class="size-large wp-image-3077" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/adamSimulate-1024x597.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/adamSimulate-300x175.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/adamSimulate-768x448.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/adamSimulate.png&amp;nocache=1 1200w" sizes="auto, (max-width: 640px) 100vw, 640px" /></a><p id="caption-attachment-3077" class="wp-caption-text">Simulated data based on adam() applied to Box-Jenkins sales data</p></div>
<p>This can be used for research, when a more controlled environment is needed. If you want to fine-tune the parameters of ADAM before simulating the data, you can save the output in an object and amend its parameters. For example:</p>
<pre class="decode">testModel <- adam(BJsales, "AAdN")
testModel$persistence <- c(0.5, 0.2)
simulate(testModel)</pre>
<p>The second new method is <code>xtable()</code> from the package of the same name. It produces a LaTeX version of the table from the summary of ADAM. Here is an example of a summary from ADAM ETS:</p>
<pre class="decode">adam(BJsales, "AAdN") |>
     summary()</pre>
<pre>Model estimated using adam() function: ETS(AAdN)
Response variable: BJsales
Distribution used in the estimation: Normal
Loss function type: likelihood; Loss function value: 256.1516
Coefficients:
      Estimate Std. Error Lower 2.5% Upper 97.5%  
alpha   0.9514     0.1292     0.6960      1.0000 *
beta    0.3328     0.2040     0.0000      0.7358  
phi     0.8560     0.1671     0.5258      1.0000 *
level 203.2835     5.9968   191.4304    215.1289 *
trend  -2.6793     4.7705   -12.1084      6.7437  

Error standard deviation: 1.3623
Sample size: 150
Number of estimated parameters: 6
Number of degrees of freedom: 144
Information criteria:
     AIC     AICc      BIC     BICc 
524.3032 524.8907 542.3670 543.8387</pre>
<p>As you can see in the output above, the function generates the confidence intervals for the parameters of the model, including the smoothing parameters, the damping parameter and the initial states. This summary can then be used to generate the LaTeX code for the main part of the table:</p>
<pre class="decode">adam(BJsales, "AAdN") |>
     xtable()</pre>
<p>which will look something like this:</p>
<div id="attachment_3073" style="width: 650px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/adamXtable.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3073" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/adamXtable-1024x303.png&amp;nocache=1" alt="Summary of adam()" width="512" height="152" class="size-large wp-image-3073" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/adamXtable-1024x303.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/adamXtable-300x89.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/adamXtable-768x227.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2023/01/adamXtable.png&amp;nocache=1 1207w" sizes="auto, (max-width: 512px) 100vw, 512px" /></a><p id="caption-attachment-3073" class="wp-caption-text">Summary of adam()</p></div>
<h3>Other improvements</h3>
<p>First, one of the major changes in <code>smooth</code> functions is the new backcasting mechanism for <code>adam()</code>, <code>es()</code> and <code>msarima()</code> (this is discussed in <a href="https://openforecast.org/adam/ADAMInitialisation.html">Section 11.4 of ADAM monograph</a>). The main difference with the old one is that now it does not backcast the parameters for the explanatory variables and estimates them separately via optimisation. This feature appeared to be important for some of users who wanted to try MSARIMAX/ETSX (a model with explanatory variables) but wanted to use backcasting as the initialisation. These users then wanted to get a summary, analysing the uncertainty around the estimates of parameters for exogenous variables, but could not because the previous implementation would not estimate them explicitly. This is now available. Here is an example:</p>
<pre class="decode">cbind(BJsales, BJsales.lead) |>
    adam(model="AAdN", initial="backcasting") |>
    summary()</pre>
<pre>Model estimated using adam() function: ETSX(AAdN)
Response variable: BJsales
Distribution used in the estimation: Normal
Loss function type: likelihood; Loss function value: 255.1935
Coefficients:
             Estimate Std. Error Lower 2.5% Upper 97.5%  
alpha          0.9724     0.1108     0.7534      1.0000 *
beta           0.2904     0.1368     0.0199      0.5607 *
phi            0.8798     0.0925     0.6970      1.0000 *
BJsales.lead   0.1662     0.2336    -0.2955      0.6276  

Error standard deviation: 1.3489
Sample size: 150
Number of estimated parameters: 5
Number of degrees of freedom: 145
Information criteria:
     AIC     AICc      BIC     BICc 
520.3870 520.8037 535.4402 536.4841</pre>
<p>As you can see in the output above, the initial level and trend of the model are not reported, because they were estimated via backcasting. However, we get the value of the parameter <code>BJsales.lead</code> and the uncertainty around it. The old backcasting approach is now called "complete", implying that all values of the state vector are produced via backcasting.</p>
<p>Second, <code>forecast.adam()</code> now has a parameter <code>scenarios</code>, which, when TRUE, returns the simulated paths from the model. This only works when <code>interval="simulated"</code> and can be used for the analysis of possible forecast trajectories.</p>
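<p>To give an idea of how such scenarios can be used, here is a rough base-R sketch of turning a matrix of simulated paths into quantiles (the matrix below is generated artificially rather than taken from <code>forecast.adam()</code>):</p>

```r
set.seed(41)
# Artificial matrix of future paths: h steps ahead by nsim scenarios
h <- 48; nsim <- 1000
paths <- matrix(rgamma(h * nsim, shape = 5, scale = 2), nrow = h)

# Per-step quantiles across the scenarios
pathQuantiles <- t(apply(paths, 1, quantile, probs = 1:19/20))
```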
<p>Third, the <code>plot()</code> method can now also produce ACF/PACF of the squared residuals for all <code>smooth</code> functions. This becomes useful if you suspect that your data has ARCH elements and want to see if they need to be modelled separately. This can also be done using <code>adam()</code> and <code>sm()</code> and is discussed in <a href="https://openforecast.org/adam/ADAMscaleModel.html">Chapter 17 of the monograph</a>.</p>
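<p>As a toy illustration of why the ACF of squared residuals matters, the following simulates ARCH(1)-type noise with base R (it is not the output of a <code>smooth</code> function):</p>

```r
set.seed(42)
# ARCH(1)-type residuals: variance depends on the previous squared error
n <- 2000
e <- numeric(n)
e[1] <- rnorm(1)
for (t in 2:n) e[t] <- rnorm(1, sd = sqrt(0.2 + 0.5 * e[t - 1]^2))

# The residuals look uncorrelated, but their squares do not,
# which is exactly what the new ACF/PACF plots help to spot
acfRaw <- acf(e, plot = FALSE)$acf[2]
acfSquared <- acf(e^2, plot = FALSE)$acf[2]
```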
<p>Finally, the <code>sma()</code> function now has the <code>fast</code> parameter, which, when TRUE, uses a modified ternary search for the best order based on information criteria. It might not find the global minimum, but it works much faster than the exhaustive search.</p>
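<p>The idea behind such a search can be sketched as follows (a toy implementation for a unimodal criterion; <code>smooth</code>'s actual code differs):</p>

```r
# Ternary search for the minimum of a unimodal function ic() on integers lo..hi
ternarySearch <- function(ic, lo, hi) {
  while (hi - lo > 2) {
    m1 <- lo + (hi - lo) %/% 3
    m2 <- hi - (hi - lo) %/% 3
    # Discard the third of the range that cannot contain the minimum
    if (ic(m1) < ic(m2)) hi <- m2 else lo <- m1
  }
  candidates <- lo:hi
  candidates[which.min(sapply(candidates, ic))]
}

# Toy "information criterion" with a minimum at order 17
icToy <- function(k) (k - 17)^2 + 100
bestOrder <- ternarySearch(icToy, 1, 100)
```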
<h3>Conclusions</h3>
<p>These are the main new features in the package. I feel that the main job in <code>smooth</code> is already done, and all I can do now is just tune the functions and improve the existing code. I want to move all the functions to the new engine and ditch the old one, but this requires much more time than I have. So, I don't expect to finish this any time soon, but I hope I'll get there someday. On the other hand, I'm not sure that spending much time on developing an R package is a wise idea, given that nowadays people tend to use Python. I would develop a Python analogue of the <code>smooth</code> package, but currently I don't have the necessary expertise and time to do that. Besides, there already exist great libraries, such as <a href="https://github.com/Nixtla/nixtla/tree/main/tsforecast">tsforecast</a> from <a href="https://github.com/Nixtla/nixtla">nixtla</a> and <a href="https://www.sktime.org/">sktime</a>. I am not sure that yet another library implementing ETS and ARIMA is needed in Python. What do you think?</p>
<p>Message <a href="https://openforecast.org/2023/01/30/smooth-v3-2-0-what-s-new/">smooth v3.2.0: what&#8217;s new?</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2023/01/30/smooth-v3-2-0-what-s-new/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>ISF2022: How to make ETS work with ARIMA</title>
		<link>https://openforecast.org/2022/07/20/isf2022-how-to-make-ets-work-with-arima/</link>
					<comments>https://openforecast.org/2022/07/20/isf2022-how-to-make-ets-work-with-arima/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Wed, 20 Jul 2022 12:06:48 +0000</pubDate>
				<category><![CDATA[adam()]]></category>
		<category><![CDATA[ARIMA]]></category>
		<category><![CDATA[Conferences]]></category>
		<category><![CDATA[ETS]]></category>
		<category><![CDATA[ADAM]]></category>
		<category><![CDATA[conferences]]></category>
		<category><![CDATA[ISF]]></category>
		<category><![CDATA[presentations]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=2984</guid>

					<description><![CDATA[<p>This time ISF took place in Oxford. I acted as a programme chair of the event and was quite busy with schedule and some other minor organisational things, but I still found time to present something new. Specifically, I talked about one specific part of ADAM, the part implementing ETS+ARIMA. The idea is that the [&#8230;]</p>
<p>Message <a href="https://openforecast.org/2022/07/20/isf2022-how-to-make-ets-work-with-arima/">ISF2022: How to make ETS work with ARIMA</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
<content:encoded><![CDATA[<p>This time ISF took place in Oxford. I acted as a programme chair of the event and was quite busy with the schedule and some other minor organisational things, but I still found time to present something new. Specifically, I talked about one particular part of ADAM, the part implementing ETS+ARIMA. The idea is that the two models are typically considered competing approaches belonging to different families. But we have known how to unite them at least since 1985. So, it is about time to make this brave step and implement ETS with ARIMA elements.</p>
<div id="attachment_2987" style="width: 235px" class="wp-caption aligncenter"><a href="/wp-content/uploads/2022/07/7971a13b-ad97-4473-8a8f-4c88ad2d7145.jpeg"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-2987" src="/wp-content/uploads/2022/07/7971a13b-ad97-4473-8a8f-4c88ad2d7145-225x300.jpeg" alt="" width="225" height="300" class="size-medium wp-image-2987" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/07/7971a13b-ad97-4473-8a8f-4c88ad2d7145-225x300.jpeg&amp;nocache=1 225w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/07/7971a13b-ad97-4473-8a8f-4c88ad2d7145-768x1024.jpeg&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/07/7971a13b-ad97-4473-8a8f-4c88ad2d7145-1152x1536.jpeg&amp;nocache=1 1152w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/07/7971a13b-ad97-4473-8a8f-4c88ad2d7145.jpeg&amp;nocache=1 1536w" sizes="auto, (max-width: 225px) 100vw, 225px" /></a><p id="caption-attachment-2987" class="wp-caption-text">ETS+ARIMA love story with happy ending&#8230;</p></div>
<p>This talk was based on <a href="https://openforecast.org/adam/ADAMARIMA.html">Chapter 9</a> of <a href="https://openforecast.org/adam/">ADAM monograph</a>, and more specifically on <a href="https://openforecast.org/adam/ETSAndARIMA.html">Section 9.4</a>.</p>
<p>The slides of the presentation are available <a href="/wp-content/uploads/2022/07/2022-ISF2022-ADAM-ETSARIMA.pdf">here</a>.</p>
<p>Message <a href="https://openforecast.org/2022/07/20/isf2022-how-to-make-ets-work-with-arima/">ISF2022: How to make ETS work with ARIMA</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2022/07/20/isf2022-how-to-make-ets-work-with-arima/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The first draft of &#8220;Forecasting and Analytics with ADAM&#8221;</title>
		<link>https://openforecast.org/2022/04/11/the-first-draft-of-forecasting-and-analytics-with-adam/</link>
					<comments>https://openforecast.org/2022/04/11/the-first-draft-of-forecasting-and-analytics-with-adam/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Mon, 11 Apr 2022 15:30:26 +0000</pubDate>
				<category><![CDATA[adam()]]></category>
		<category><![CDATA[ARIMA]]></category>
		<category><![CDATA[ETS]]></category>
		<category><![CDATA[R]]></category>
		<category><![CDATA[Regression]]></category>
		<category><![CDATA[Theory of forecasting]]></category>
		<category><![CDATA[ADAM]]></category>
		<category><![CDATA[regression]]></category>
		<category><![CDATA[smooth]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=2817</guid>

					<description><![CDATA[<p>After working on this for more than a year, I have finally prepared the first draft of my online monograph &#8220;Forecasting and Analytics with ADAM&#8220;. This is a monograph on the model that unites ETS, ARIMA and regression and introduces advanced features in univariate modelling, including: ETS in a new State Space form; ARIMA in [&#8230;]</p>
<p>Message <a href="https://openforecast.org/2022/04/11/the-first-draft-of-forecasting-and-analytics-with-adam/">The first draft of &#8220;Forecasting and Analytics with ADAM&#8221;</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div id="attachment_2819" style="width: 222px" class="wp-caption aligncenter"><a href="/wp-content/uploads/2022/03/Adam-Title-web.jpg"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-2819" src="/wp-content/uploads/2022/03/Adam-Title-web-212x300.jpg" alt="Forecasting and Analytics with ADAM" width="212" height="300" class="size-medium wp-image-2819" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/03/Adam-Title-web-212x300.jpg&amp;nocache=1 212w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/03/Adam-Title-web-724x1024.jpg&amp;nocache=1 724w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/03/Adam-Title-web-768x1087.jpg&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/03/Adam-Title-web.jpg&amp;nocache=1 1000w" sizes="auto, (max-width: 212px) 100vw, 212px" /></a><p id="caption-attachment-2819" class="wp-caption-text">Forecasting and Analytics with ADAM</p></div>
<p>After working on this for <a href="/en/2021/01/13/the-creation-of-adam-next-step-in-statistical-forecasting/" rel="noopener">more than</a> <a href="/en/2021/02/28/after-the-creation-of-adam-smooth-v3-1-0/">a year</a>, I have finally prepared the first draft of my online monograph &#8220;<a href="https://openforecast.org/adam/" rel="noopener" target="_blank">Forecasting and Analytics with ADAM</a>&#8220;. This is a monograph on the model that unites ETS, ARIMA and regression and introduces advanced features in univariate modelling, including:</p>
<ol>
<li>ETS in a new State Space form;</li>
<li>ARIMA in a new State Space form;</li>
<li>Regression;</li>
<li>TVP regression;</li>
<li>Combinations of (1), (2) and either (3), or (4);</li>
<li>Automatic selection/combination for ETS;</li>
<li>Automatic orders selection for ARIMA;</li>
<li>Variables selection for regression part;</li>
<li>Normal and non-normal distributions;</li>
<li>Automatic selection of most suitable distribution;</li>
<li>Multiple seasonality;</li>
<li>Occurrence part of the model to handle zeroes in data (intermittent demand);</li>
<li>Modelling scale of distribution (GARCH and beyond);</li>
<li>Handling uncertainty of estimates of parameters.</li>
</ol>
<p>The model and all its features are already implemented in the <code>adam()</code> function from the <code>smooth</code> package for R (you need v3.1.6 from CRAN for all the features listed above). The function supports many options that allow one to experiment with univariate forecasting, building complex models that combine elements from the list above. The monograph explaining the models underlying ADAM and how to work with them is <a href="https://openforecast.org/adam/" rel="noopener" target="_blank">available online</a>, and I plan to produce several physical copies of it after refining the text. Furthermore, I have already asked two well-known academics to act as reviewers of the monograph to collect feedback and improve it, and if you want to act as a reviewer as well, please let me know.</p>
<h3>Examples in R</h3>
<p>Just to give you a flavour of ADAM, I decided to provide a couple of examples on the <code>AirPassengers</code> time series (included in the <code>datasets</code> package in R). The first one is the ADAM ETS.</p>
<p>Building and selecting the most appropriate ADAM ETS comes to running the following line of code:</p>
<pre class="decode">adamETSAir <- adam(AirPassengers, h=12, holdout=TRUE)</pre>
<p>In this case, ADAM will select the most appropriate ETS model for the data, creating a holdout of the last 12 observations. We can see the details of the model by printing the output:</p>
<pre class="decode">adamETSAir</pre>
<pre>Time elapsed: 0.75 seconds
Model estimated using adam() function: ETS(MAM)
Distribution assumed in the model: Gamma
Loss function type: likelihood; Loss function value: 467.2981
Persistence vector g:
 alpha   beta  gamma 
0.7691 0.0053 0.0000 

Sample size: 132
Number of estimated parameters: 17
Number of degrees of freedom: 115
Information criteria:
      AIC      AICc       BIC      BICc 
 968.5961  973.9646 1017.6038 1030.7102 

Forecast errors:
ME: 9.537; MAE: 20.784; RMSE: 26.106
sCE: 43.598%; Asymmetry: 64.8%; sMAE: 7.918%; sMSE: 0.989%
MASE: 0.863; RMSSE: 0.833; rMAE: 0.273; rRMSE: 0.254</pre>
<p>The output above provides plenty of detail on what was estimated and how. Some of these elements have been discussed in <a href="/en/2016/11/02/smooth-package-for-r-es-function-part-ii-pure-additive-models/">one of my previous posts</a> on the <code>es()</code> function. The new thing is the information about the assumed distribution for the response variable. By default, ADAM works with the Gamma distribution in case of a multiplicative error model. This is done to make the model more robust for low-volume data, where the Normal distribution might produce negative numbers (see <a href="/en/2021/06/30/isf2021-how-to-make-multiplicative-ets-work-for-you/">my presentation</a> on this issue). For high-volume data, the Gamma distribution will perform similarly to the Normal one. The pure multiplicative ADAM ETS is discussed in <a href="https://openforecast.org/adam/ADAMETSPureMultiplicativeChapter.html" rel="noopener" target="_blank">Chapter 6 of the ADAM monograph</a>. If Gamma is not suitable, then another distribution can be selected via the <code>distribution</code> parameter. There is also an automated distribution selection approach in the function <code>auto.adam()</code>:</p>
<pre class="decode">adamETSAutoAir <- auto.adam(AirPassengers, h=12, holdout=TRUE)
adamETSAutoAir</pre>
<pre>Time elapsed: 3.86 seconds
Model estimated using auto.adam() function: ETS(MAM)
Distribution assumed in the model: Normal
Loss function type: likelihood; Loss function value: 466.0744
Persistence vector g:
 alpha   beta  gamma 
0.8054 0.0000 0.0000 

Sample size: 132
Number of estimated parameters: 17
Number of degrees of freedom: 115
Information criteria:
      AIC      AICc       BIC      BICc 
 966.1487  971.5172 1015.1564 1028.2628 

Forecast errors:
ME: 9.922; MAE: 21.128; RMSE: 26.246
sCE: 45.36%; Asymmetry: 65.4%; sMAE: 8.049%; sMSE: 1%
MASE: 0.877; RMSSE: 0.838; rMAE: 0.278; rRMSE: 0.255</pre>
<p>As we see from the output above, the Normal distribution is more appropriate for the data in terms of AICc than the other ones tried out by the function (by default the list includes the Normal, Laplace, S, Generalised Normal, Gamma, Inverse Gaussian and Log-Normal distributions, but this can be amended by providing a vector of names via the <code>distribution</code> parameter). The selection of ADAM ETS and distributions is discussed in <a href="https://openforecast.org/adam/ADAMSelection.html" rel="noopener" target="_blank">Chapter 15 of the monograph</a>.</p>
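<p>If, for example, we only want to compare a few candidate distributions, we can pass their names as a vector. Here is a minimal sketch of this, assuming the density-style names ("dnorm", "dgamma", "dlnorm") used by the package; the exact set of supported names is listed in the <code>auto.adam()</code> documentation:</p>

```r
# Restrict auto.adam() to a shortlist of distributions
library(smooth)

adamETSAirShortlist <- auto.adam(AirPassengers, h = 12, holdout = TRUE,
                                 distribution = c("dnorm", "dgamma", "dlnorm"))
adamETSAirShortlist
```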
<p>Having obtained the model, we can diagnose it using <code>plot.adam()</code> function:</p>
<pre class="decode">par(mfcol=c(3,3))
plot(adamETSAutoAir,which=c(1,4,2,6,7,8,10,11,13))</pre>
<p>The <code>which</code> parameter specifies which plots to produce; the full list is given in the documentation for <code>plot.adam()</code>. The code above results in:<br />
<div id="attachment_2824" style="width: 310px" class="wp-caption aligncenter"><a href="/wp-content/uploads/2022/03/adamETSAirDiagnostics.png"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-2824" src="/wp-content/uploads/2022/03/adamETSAirDiagnostics-300x175.png" alt="Diagnostics plots for ADAM ETS on AirPassengers data" width="300" height="175" class="size-medium wp-image-2824" /></a><p id="caption-attachment-2824" class="wp-caption-text">Diagnostics plots for ADAM ETS on AirPassengers data</p></div>
The diagnostic plots are discussed in <a href="https://openforecast.org/adam/diagnostics.html" rel="noopener" target="_blank">Chapter 14 of the ADAM monograph</a>. The plot above does not reveal any serious issues with the model.</p>
<p>Just for comparison, we could also fit the most appropriate ADAM ARIMA to the data (this model is discussed in <a href="https://openforecast.org/adam/ADAMARIMA.html" rel="noopener" target="_blank">Chapter 9</a>). The code in this case is slightly more complicated, because we need to switch off the ETS part of the model and define the maximum ARIMA orders to try:</p>
<pre class="decode">adamARIMAAir <- adam(AirPassengers, model="NNN", h=12, holdout=TRUE,
                     orders=list(ar=c(3,2),i=c(2,1),ma=c(3,2),select=TRUE))</pre>
<p>This results in the following <a href="https://openforecast.org/adam/ARIMASelection.html" rel="noopener" target="_blank">automatically selected</a> ARIMA model:</p>
<pre>Time elapsed: 3.54 seconds
Model estimated using auto.adam() function: SARIMA(0,1,1)[1](0,1,1)[12]
Distribution assumed in the model: Normal
Loss function type: likelihood; Loss function value: 491.7117
ARMA parameters of the model:
MA:
 theta1[1] theta1[12] 
   -0.1952    -0.0720 

Sample size: 132
Number of estimated parameters: 16
Number of degrees of freedom: 116
Information criteria:
     AIC     AICc      BIC     BICc 
1015.423 1020.154 1061.548 1073.097 

Forecast errors:
ME: -13.795; MAE: 16.65; RMSE: 21.644
sCE: -63.064%; Asymmetry: -79.4%; sMAE: 6.343%; sMSE: 0.68%
MASE: 0.691; RMSSE: 0.691; rMAE: 0.219; rRMSE: 0.21</pre>
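<p>As a side note, the AICc values reported in these outputs follow the standard small-sample correction of the AIC, which can be verified by hand from the numbers above (AIC, the number of estimated parameters and the sample size):</p>

```r
# AICc from AIC via the standard small-sample correction:
# AICc = AIC + 2 * k * (k + 1) / (n - k - 1)
aic <- 1015.423  # AIC reported in the ARIMA output above
k   <- 16        # number of estimated parameters
n   <- 132       # sample size
aicc <- aic + 2 * k * (k + 1) / (n - k - 1)
aicc  # agrees with the reported 1020.154 up to rounding
```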
<p>Given that ADAM ETS and ADAM ARIMA are formulated in the same framework, they are directly comparable via information criteria. Comparing the AICc of the models <code>adamETSAutoAir</code> and <code>adamARIMAAir</code>, we can conclude that the former is more appropriate for the data than the latter. However, the default ARIMA uses the Normal distribution, which might not suit the data, so we can turn to <code>auto.adam()</code> to select a better one:</p>
<pre class="decode">adamAutoARIMAAir <- auto.adam(AirPassengers, model="NNN", h=12, holdout=TRUE,
                              orders=list(ar=c(3,2),i=c(2,1),ma=c(3,2),select=TRUE))</pre>
<p>This takes more computational time, but results in a different model with a lower AICc (which is still higher than that of ADAM ETS):</p>
<pre>Time elapsed: 25.46 seconds
Model estimated using auto.adam() function: SARIMA(0,1,1)[1](0,1,1)[12]
Distribution assumed in the model: Log-Normal
Loss function type: likelihood; Loss function value: 472.923
ARMA parameters of the model:
MA:
 theta1[1] theta1[12] 
   -0.2785    -0.5530 

Sample size: 132
Number of estimated parameters: 16
Number of degrees of freedom: 116
Information criteria:
      AIC      AICc       BIC      BICc 
 977.8460  982.5764 1023.9708 1035.5197 

Forecast errors:
ME: -12.968; MAE: 13.971; RMSE: 19.143
sCE: -59.285%; Asymmetry: -91.7%; sMAE: 5.322%; sMSE: 0.532%
MASE: 0.58; RMSSE: 0.611; rMAE: 0.184; rRMSE: 0.186</pre>
<p>Note that although the AICc of ARIMA is higher than that of ETS, ARIMA has lower error measures on the holdout. So, a higher AICc does not necessarily mean that a model is poor. But if we rely on the information criteria, then we should stick with ADAM ETS, and we can then produce the forecasts for the next 12 observations (see <a href="https://openforecast.org/adam/ADAMForecasting.html" rel="noopener" target="_blank">Chapter 18</a>):</p>
<pre class="decode">adamETSAutoAirForecast <- forecast(adamETSAutoAir, h=12, interval="prediction",
                                   level=c(0.9,0.95,0.99))
par(mfcol=c(1,1))
plot(adamETSAutoAirForecast)</pre>
<div id="attachment_2839" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/03/adamETSAirForecast.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-2839" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/03/adamETSAirForecast-300x175.png&amp;nocache=1" alt="Forecast from ADAM ETS" width="300" height="175" class="size-medium wp-image-2839" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/03/adamETSAirForecast-300x175.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/03/adamETSAirForecast-1024x597.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/03/adamETSAirForecast-768x448.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2022/03/adamETSAirForecast.png&amp;nocache=1 1200w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-2839" class="wp-caption-text">Forecast from ADAM ETS</p></div>
Finally, if we want a more in-depth analysis of the parameters of ADAM, we can also produce the summary, which constructs confidence intervals for the parameters of the model:</p>
<pre class="decode">summary(adamETSAutoAir)</pre>
<pre>Model estimated using auto.adam() function: ETS(MAM)
Response variable: data
Distribution used in the estimation: Normal
Loss function type: likelihood; Loss function value: 466.0744
Coefficients:
            Estimate Std. Error Lower 2.5% Upper 97.5%  
alpha         0.8054     0.0864     0.6343      0.9761 *
beta          0.0000     0.0203     0.0000      0.0401  
gamma         0.0000     0.0382     0.0000      0.0755  
level        96.2372     6.8596    82.6496    109.7919 *
trend         2.0901     0.3955     1.3068      2.8716 *
seasonal_1    0.9145     0.0077     0.9003      0.9372 *
seasonal_2    0.8999     0.0081     0.8857      0.9227 *
seasonal_3    1.0308     0.0094     1.0165      1.0535 *
seasonal_4    0.9885     0.0077     0.9743      1.0112 *
seasonal_5    0.9856     0.0072     0.9713      1.0083 *
seasonal_6    1.1165     0.0093     1.1023      1.1392 *
seasonal_7    1.2340     0.0115     1.2198      1.2568 *
seasonal_8    1.2254     0.0105     1.2112      1.2481 *
seasonal_9    1.0668     0.0094     1.0526      1.0896 *
seasonal_10   0.9256     0.0087     0.9113      0.9483 *
seasonal_11   0.8040     0.0075     0.7898      0.8268 *

Error standard deviation: 0.0367
Sample size: 132
Number of estimated parameters: 17
Number of degrees of freedom: 115
Information criteria:
      AIC      AICc       BIC      BICc 
 966.1487  971.5172 1015.1564 1028.2628 </pre>
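<p>As a rough sketch, the reported bounds can be approximately reproduced by hand from the estimate and its standard error using a Student's t quantile with the reported degrees of freedom (the function itself may construct the bounds in a slightly different way):</p>

```r
# Reproducing the 95% bounds for alpha from the summary above
estimate <- 0.8054   # point estimate of alpha
se       <- 0.0864   # its standard error
df       <- 115      # degrees of freedom reported above
margin   <- qt(0.975, df) * se
c(lower = estimate - margin, upper = estimate + margin)
# close to the reported 0.6343 and 0.9761
```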
<p>Note that the <code>summary()</code> function might complain about the Observed Fisher Information. This happens because the covariance matrix of parameters is calculated numerically, and sometimes the likelihood is not maximised properly. I have not been able to fully resolve this issue yet, but hope to at some point. The summary above shows, for example, that the smoothing parameters \(\beta\) and \(\gamma\) are not significantly different from zero (at the 5% level), while \(\alpha\) is expected to lie between 0.6343 and 0.9761 in 95% of cases. You can read more about the uncertainty of parameters in ADAM in <a href="https://openforecast.org/adam/ADAMUncertainty.html" rel="noopener" target="_blank">Chapter 16</a> of the monograph.</p>
<p>As for the other features of ADAM, here is a brief guide:</p>
<ul>
<li>If you work with multiple seasonal data, then you might need to specify the seasonality via the <code>lags</code> parameter, for example as <code>lags=c(24,7*24)</code> in case of hourly data. This is discussed in <a href="https://openforecast.org/adam/multiple-frequencies-in-adam.html" rel="noopener" target="_blank">Chapter 12</a>;</li>
<li>If you have intermittent data, then you should read <a href="https://openforecast.org/adam/ADAMIntermittent.html" rel="noopener" target="_blank">Chapter 13</a>, which explains how to work with the <code>occurrence</code> parameter of the function;</li>
<li>Explanatory variables are discussed in <a href="https://openforecast.org/adam/ADAMX.html" rel="noopener" target="_blank">Chapter 10</a> and are handled in the <code>adam()</code> function via the <code>formula</code> parameter;</li>
<li>In the case of heteroscedasticity (time-varying or induced by some explanatory variables), there is a scale model, which is discussed in <a href="https://openforecast.org/adam/ADAMscaleModel.html" rel="noopener" target="_blank">Chapter 17</a> and implemented as the <code>sm()</code> method for the <code>adam</code> class.</li>
</ul>
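<p>The first bullet above can be sketched on artificial data (the series <code>x</code> below is simulated purely for illustration; backcasting is used here as an assumption to speed up the initialisation of the many seasonal states):</p>

```r
# A sketch of double seasonality for "hourly" data:
# a daily cycle (24) and a weekly cycle (168 = 7*24)
library(smooth)

set.seed(41)
t <- 1:(168 * 4)
x <- 500 + 50 * sin(2 * pi * t / 24) + 30 * sin(2 * pi * t / 168) +
     rnorm(length(t), 0, 10)
# Double-seasonal additive ETS with backcasted initials
adamHourly <- adam(x, "AAA", lags = c(24, 168), h = 24, holdout = TRUE,
                   initial = "backcasting")
adamHourly
```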
<p>You can also experiment with advanced estimators (<a href="https://openforecast.org/adam/ADAMETSEstimation.html" rel="noopener" target="_blank">Chapter 11</a>, including custom loss functions) via the <code>loss</code> parameter and forecast combinations (<a href="https://openforecast.org/adam/ADAMCombinations.html" rel="noopener" target="_blank">Section 15.4</a>).</p>
<p>Long story short, if you are interested in univariate forecasting, then do give ADAM a try - it might have the flexibility you need for your experiments. If you are worried about its accuracy, check out <a href="/en/2021/02/28/after-the-creation-of-adam-smooth-v3-1-0/">this post</a>, where I compared ADAM with other models.</p>
<p>And, as a friend of mine says, "Happy forecasting!"</p>
<p>Message <a href="https://openforecast.org/2022/04/11/the-first-draft-of-forecasting-and-analytics-with-adam/">The first draft of &#8220;Forecasting and Analytics with ADAM&#8221;</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2022/04/11/the-first-draft-of-forecasting-and-analytics-with-adam/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>After the creation of ADAM: smooth v3.1.0</title>
		<link>https://openforecast.org/2021/02/28/after-the-creation-of-adam-smooth-v3-1-0/</link>
					<comments>https://openforecast.org/2021/02/28/after-the-creation-of-adam-smooth-v3-1-0/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Sun, 28 Feb 2021 18:20:08 +0000</pubDate>
				<category><![CDATA[adam()]]></category>
		<category><![CDATA[Package smooth for R]]></category>
		<category><![CDATA[R]]></category>
		<category><![CDATA[ADAM]]></category>
		<category><![CDATA[ARIMA]]></category>
		<category><![CDATA[ETS]]></category>
		<category><![CDATA[smooth]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=2615</guid>

					<description><![CDATA[<p>Since the previous post on &#8220;The Creation of ADAM&#8220;, I had difficulties finding time to code anything, but I still managed to fix some bugs, implement a couple of features and make changes, important enough to call the next version of package smooth &#8220;3.1.0&#8221;. Here is what&#8217;s new: A new algorithm for ARIMA order selection [&#8230;]</p>
<p>Message <a href="https://openforecast.org/2021/02/28/after-the-creation-of-adam-smooth-v3-1-0/">After the creation of ADAM: smooth v3.1.0</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Since the previous post on &#8220;<a href="/en/2021/01/13/the-creation-of-adam-next-step-in-statistical-forecasting/">The Creation of ADAM</a>&#8220;, I had difficulties finding time to code anything, but I still managed to fix some bugs, implement a couple of features and make changes, important enough to call the next version of package smooth &#8220;3.1.0&#8221;. Here is what&#8217;s new:</p>
<ol>
<li>A new algorithm for ARIMA order selection in ADAM via <code>auto.adam()</code> function. The algorithm is explained in the draft of <a href="https://openforecast.org/adam/ARIMASelection.html">my online textbook</a>. This is a more efficient algorithm than the previous one (which was originally implemented for <code>auto.ssarima()</code>) in terms of computational time and forecast accuracy. We will see how it performs on a dataset in the R experiment below.</li>
<li>We no longer depend on the forecast package; we now use the <code>forecast()</code> method from the <code>greybox</code> package. Hopefully, this will not lead to conflicts between the packages, but if it does, please let me know and I will fix them. The main motivation for this move was the number of packages <code>forecast</code> relies on, making R download half of CRAN whenever you install <code>forecast</code> for the first time. This is a bit irritating and complicates the testing process. Furthermore, Rob Hyndman&#8217;s band of programmers now focuses on the tidyverts packages, and <code>forecast</code> is not included in that infrastructure, so it will inevitably become obsolete, and we would all need to move away from it anyway.</li>
<li>The <code>ves()</code> and <code>viss()</code> functions have been moved from the <code>smooth</code> package to a new one called <a href="https://github.com/config-i1/legion">legion</a>. The package is in the development stage and will be released closer to the end of March 2021. It will focus on multivariate models for time series analysis and forecasting, such as Vector Exponential Smoothing, Vector ARIMA, etc.</li>
<li>ADAM now also supports Gamma distribution via the respective parameter in <code>adam()</code> function. There is <a href="https://openforecast.org/adam/ADAMETSMultiplicativeDistributions.html">some information</a> about that in the draft of the online textbook on ADAM.</li>
<li>Finally, the <code>msdecompose()</code> function (for multiple seasonal decomposition) now has a built-in mechanism for missing data interpolation based on polynomials and Fourier series. The same mechanism is now used in <code>adam()</code>.</li>
</ol>
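<p>Point (4) in the list above can be sketched as follows, assuming the density-style name <code>"dgamma"</code> for the <code>distribution</code> parameter (check the <code>adam()</code> documentation for the exact supported names):</p>

```r
# ETS(M,A,M) estimated with the Gamma distribution
library(smooth)

adamGamma <- adam(AirPassengers, "MAM", h = 12, holdout = TRUE,
                  distribution = "dgamma")
adamGamma
```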
<h3>A small competition in R</h3>
<p>I will not repeat the code from <a href="/en/2021/01/13/the-creation-of-adam-next-step-in-statistical-forecasting/">the previous post</a>, you can copy and paste it in your R script and run to replicate the experiment described there. But I will provide the results of the experiment applied with functions from smooth v3.1.0. Here are the two final tables with <a href="/en/2019/08/25/are-you-sure-youre-precise-measuring-accuracy-of-point-forecasts/">error measures</a>: </p>
<pre><strong>Means</strong>:
               MASE RMSSE Coverage Range  sMIS  Time
ADAM-ETS(ZZZ)  2.415 2.098    0.888 1.398 2.437 0.654
ADAM-ETS(ZXZ)  <strong>2.250 1.961    0.895</strong> 1.225 <strong>2.092</strong> 0.497
<span style="color:#700;">ADAM-ARIMA     2.326 2.007    0.841 0.848 3.101 3.029</span>
<span style="color:grey;">ADAM-ARIMA-old 2.551 2.203    0.862 0.968 3.098 5.990</span>
ETS(ZXZ)       2.279 1.977    0.862 1.372 2.490 1.128
ETSHyndman     2.263 1.970    0.882 1.200 2.258 <strong>0.404</strong>
AutoSSARIMA    2.482 2.134    0.801 <strong>0.780</strong> 3.335 1.700
AutoARIMA      2.303 1.989    0.834 0.805 3.013 1.385

<strong>Medians</strong>:
               MASE RMSSE Range  sMIS  Time
ADAM-ETS(ZZZ)  1.362 1.215 0.671 0.917 0.396
ADAM-ETS(ZXZ)  1.327 1.184 0.675 0.909 0.310
<span style="color:#700;">ADAM-ARIMA     1.324 1.187 0.630 0.917 2.818</span>
<span style="color:grey;">ADAM-ARIMA-old 1.476 1.300 0.769 1.006 3.525</span>
ETS(ZXZ)       1.335 1.198 0.616 0.931 0.551
ETSHyndman     1.323 <strong>1.181</strong> 0.653 0.925 <strong>0.164</strong>
AutoSSARIMA    1.419 1.271 <strong>0.577</strong> 0.988 0.909
AutoARIMA      <strong>1.310</strong> 1.182 0.609 <strong>0.881</strong> 0.322</pre>
<p>In the tables above, the models that perform best in terms of the selected error measures are marked in <strong>boldface</strong>, the new ADAM ARIMA is in <span style="color:#700;">dark red</span>, and the old one is in <span style="color:grey;">grey</span>. We can see that, while the new ADAM ARIMA does not beat benchmark models such as ETS(Z,X,Z) or even <code>auto.arima()</code>, it does much better than the previous version of ADAM ARIMA in terms of accuracy and computational time, and does not fail as badly. The new algorithm is definitely better than the one implemented in the <code>auto.ssarima()</code> function from smooth. When it comes to prediction intervals, ADAM ARIMA outperforms both <code>auto.arima()</code> from the forecast package and <code>auto.ssarima()</code> from smooth in terms of coverage, getting closer to the nominal 95% level.</p>
<p>I am still not totally satisfied with ADAM ARIMA in the case of optimised initials (it seems to work well with <code>initial="backcasting"</code>) and will continue working in this direction, but at least now it works better than in smooth v3.0.1. I also plan to improve the computational speed of ADAM with factor variables, to develop classes for predicted explanatory variables, and then to make all <code>smooth</code> functions agnostic of the classes of data. If you have any suggestions, please file <a href="https://github.com/config-i1/smooth/issues">an issue on github</a>.</p>
<p>Till next time!</p>
<p>Message <a href="https://openforecast.org/2021/02/28/after-the-creation-of-adam-smooth-v3-1-0/">After the creation of ADAM: smooth v3.1.0</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2021/02/28/after-the-creation-of-adam-smooth-v3-1-0/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The creation of ADAM &#8211; next step in statistical forecasting</title>
		<link>https://openforecast.org/2021/01/13/the-creation-of-adam-next-step-in-statistical-forecasting/</link>
					<comments>https://openforecast.org/2021/01/13/the-creation-of-adam-next-step-in-statistical-forecasting/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Wed, 13 Jan 2021 11:24:18 +0000</pubDate>
				<category><![CDATA[adam()]]></category>
		<category><![CDATA[ARIMA]]></category>
		<category><![CDATA[ETS]]></category>
		<category><![CDATA[Package smooth for R]]></category>
		<category><![CDATA[R]]></category>
		<category><![CDATA[Regression]]></category>
		<category><![CDATA[regression]]></category>
		<category><![CDATA[smooth]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=2552</guid>

					<description><![CDATA[<p>Good news everyone! The future of statistical forecasting is finally here :). Have you ever struggled with ETS and needed explanatory variables? Have you ever needed to unite ARIMA and ETS? Have you ever needed to deal with all those zeroes in the data? What about the data with multiple seasonalities? All of this and [&#8230;]</p>
<p>Message <a href="https://openforecast.org/2021/01/13/the-creation-of-adam-next-step-in-statistical-forecasting/">The creation of ADAM &#8211; next step in statistical forecasting</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Good news everyone! The future of statistical forecasting is finally here :). Have you ever struggled with ETS and needed explanatory variables? Have you ever needed to unite ARIMA and ETS? Have you ever needed to deal with all those zeroes in the data? What about data with multiple seasonalities? All of this and more can now be handled by the <code>adam()</code> function from the smooth v3.0.1 package for R (<a href="https://cran.r-project.org/package=smooth">on its way to CRAN now</a>). ADAM stands for &#8220;Augmented Dynamic Adaptive Model&#8221; (I will talk about it in the next <a href="https://cmaf-fft.lp151.com/" rel="noopener" target="_blank">CMAF Friday Forecasting Talk</a> on 15th January). Now, what is ADAM? Well, something like this:</p>
<div id="attachment_2557" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2020/12/Touched_by_His_Noodly_Appendage_HD-smooth.jpg&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-2557" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2020/12/Touched_by_His_Noodly_Appendage_HD-smooth-300x142.jpg&amp;nocache=1" alt="ADAM, smooth and His Noodly Appendage Flying Spaghetti Monster" width="300" height="142" class="size-medium wp-image-2557" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2020/12/Touched_by_His_Noodly_Appendage_HD-smooth-300x142.jpg&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2020/12/Touched_by_His_Noodly_Appendage_HD-smooth-768x364.jpg&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2020/12/Touched_by_His_Noodly_Appendage_HD-smooth.jpg&amp;nocache=1 1000w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-2557" class="wp-caption-text">The Creation of ADAM by <a href="http://www.androidarts.com/">Arne Niklas Jansson</a> with my adaptation</p></div>
<p>ADAM is the next step in time series analysis and forecasting. Remember <a href="/en/2016/10/14/smooth-package-for-r-es-i/">exponential smoothing</a> and functions like <code>es()</code> and <code>ets()</code>? Remember ARIMA and functions like <code>arima()</code>, <code>ssarima()</code>, <code>msarima()</code> etc? Remember your favourite <a href="/en/2019/01/07/marketing-analytics-with-greybox/">linear regression function</a>, e.g. <code>lm()</code>, <code>glm()</code> or <code>alm()</code>? Well, now these three models are implemented in a unified framework. You can have exponential smoothing with ARIMA elements and explanatory variables in one box: <code>adam()</code>. You can select ETS components, ARIMA orders and explanatory variables in one go. You can estimate ETS / ARIMA / regression using either the likelihood of a selected distribution, conventional losses like MSE, or even your own custom loss. You can tune the parameters of the optimiser and experiment with the initialisation and estimation of the model. The function can handle multiple seasonalities and intermittent data in one place. In fact, there are so many features that it is easier to list the major ones:</p>
<ol>
<li>ETS;</li>
<li>ARIMA;</li>
<li>Regression;</li>
<li>TVP regression;</li>
<li>Combination of (1), (2) and either (3), or (4);</li>
<li>Automatic selection / combination of states for ETS;</li>
<li>Automatic orders selection for ARIMA;</li>
<li>Variables selection for regression part;</li>
<li>Normal and non-normal distributions;</li>
<li>Automatic selection of most suitable distributions;</li>
<li>Advanced and custom loss functions;</li>
<li>Multiple seasonality;</li>
<li>Occurrence part of the model to handle zeroes in data (intermittent demand);</li>
<li>Model diagnostics using plot() and other methods;</li>
<li>Confidence intervals for parameters of models;</li>
<li>Automatic outliers detection;</li>
<li>Handling missing data;</li>
<li>Fine tuning of persistence vector (smoothing parameters);</li>
<li>Fine tuning of initial values of the state vector (e.g. level / trend / seasonality / ARIMA components / regression parameters);</li>
<li>Two initialisation options (optimal / backcasting);</li>
<li>Provided ARMA parameters;</li>
<li>Fine tuning of optimiser (select algorithm and convergence criteria);</li>
<li>&#8230;</li>
</ol>
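<p>For instance, the combination of ETS and ARIMA elements (point 5 in the list above) can be sketched as follows; the specification below is a hypothetical illustration rather than a recommended model for this series:</p>

```r
# ETS(A,A,N) with an additional MA(1) element, i.e. ETS + ARIMA in one model
library(smooth)

adamMixed <- adam(AirPassengers, "AAN", orders = c(0, 0, 1),
                  h = 12, holdout = TRUE)
adamMixed
```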
<p>All of this is based on the Single Source of Error state space model, which makes ETS, ARIMA and regression directly comparable via information criteria and opens a variety of modelling and forecasting possibilities. In addition, the code is much more efficient than that of the already existing smooth functions, so hopefully this will be a convenient function to use. I do not promise that everything will work 100% efficiently from scratch, because this is a new function, which means that inevitably there are bugs and there is room for improvement. But I intend to continue working on it, improving it further based on the provided feedback (you can submit <a href="https://github.com/config-i1/smooth/issues">an issue on github</a> if you have ideas).</p>
<p>Keep in mind that starting from smooth v3.0.0 I will not be introducing new features in <code>es()</code>, <code>ssarima()</code> and the other conventional univariate functions in <code>smooth</code> &#8211; I will only fix bugs in them and possibly optimise some parts of the code, but there will be no innovations in them, given that the main focus from now on will be on <code>adam()</code>. To that extent, I have removed some experimental and not fully developed parameters from those functions (e.g. occurrence, oesmodel, updateX, persistenceX and transitionX).</p>
<p>Now, I realise that ADAM is something completely new and contains too much information to cover in one post. As a result, I have started work on an <a href="https://openforecast.org/adam/" rel="noopener" target="_blank">online textbook</a>. This is work in progress, missing some chapters, but it already covers many important elements of ADAM. If you find any mistakes in the text or formulae, please use the &#8220;Open Review&#8221; functionality in the textbook to give me feedback or send me a message. This will be highly appreciated because, working on this alone, I am sure I have made plenty of mistakes and typos.</p>
<h3>Example in R</h3>
<p>Finally, it would be boring just to announce things and leave it at that. So, I&#8217;ve decided to run an R experiment on the M1, M3 and tourism competitions data, similar to how I&#8217;ve <a href="/en/2018/01/01/smooth-functions-in-2017/" rel="noopener" target="_blank">done it in 2017</a>, just to show how the function compares with the other conventional ones, measuring their accuracy and computational time:</p>
<div class="su-spoiler su-spoiler-style-fancy su-spoiler-icon-plus su-spoiler-closed" data-scroll-offset="0" data-anchor-in-url="no"><div class="su-spoiler-title" tabindex="0" role="button"><span class="su-spoiler-icon"></span>Huge chunk of code in R</div><div class="su-spoiler-content su-u-clearfix su-u-trim">
<pre class="decode"># Load the packages. If the packages are not available, install them from CRAN
library(Mcomp)
library(Tcomp)
library(smooth)
library(forecast)

# Load the packages for parallel calculation
# This package is available for Linux and MacOS only
# Comment out this line if you work on Windows
library(doMC)

# Set up the cluster on all cores / threads.
## Note that the code that follows might take around 500Mb per thread,
## so the issue is not in the number of threads, but rather in the RAM availability
## If you do not have enough RAM,
## you might need to reduce the number of threads manually.
## But this should not be greater than the number of threads your processor can do.
registerDoMC(detectCores())

##### Alternatively, if you work on Windows (why?), uncomment and run the following lines
# library(doParallel)
# cl <- detectCores()
# registerDoParallel(cl)
#####

# Create a small but neat function that will return a vector of error measures
errorMeasuresFunction <- function(object, holdout, insample){
    return(c(measures(holdout, object$mean, insample),
             mean(holdout < object$upper & holdout > object$lower),
             mean(object$upper-object$lower)/mean(insample),
             pinball(holdout, object$upper, 0.975)/mean(insample),
             pinball(holdout, object$lower, 0.025)/mean(insample),
             sMIS(holdout, object$lower, object$upper, mean(insample),0.95),
             object$timeElapsed))
}

# Create the list of datasets
datasets <- c(M1,M3,tourism)
datasetLength <- length(datasets)
# Give names to competing forecasting methods
methodsNames <- c("ADAM-ETS(ZZZ)","ADAM-ETS(ZXZ)","ADAM-ARIMA",
                  "ETS(ZXZ)","ETSHyndman","AutoSSARIMA","AutoARIMA");
methodsNumber <- length(methodsNames);
# Run adam on one of time series from the competitions to get names of error measures
test <- adam(datasets[[125]]);
# The array with error measures for each method on each series.
## Here we calculate a lot of error measures, but we will use only few of them
testResults <- array(NA,c(methodsNumber,datasetLength,length(test$accuracy)+6),
                             dimnames=list(methodsNames, NULL,
                                           c(names(test$accuracy),
                                             "Coverage","Range",
                                             "pinballUpper","pinballLower","sMIS",
                                             "Time")));

#### ADAM(ZZZ) ####
j <- 1;
result <- foreach(i=1:datasetLength, .combine="cbind", .packages="smooth") %dopar% {
    startTime <- Sys.time()
    test <- adam(datasets[[i]],"ZZZ");
    testForecast <- forecast(test, h=datasets[[i]]$h, interval="pred");
    testForecast$timeElapsed <- Sys.time() - startTime;
    return(errorMeasuresFunction(testForecast, datasets[[i]]$xx, datasets[[i]]$x));
}
testResults[j,,] <- t(result);

#### ADAM(ZXZ) ####
j <- 2;
result <- foreach(i=1:datasetLength, .combine="cbind", .packages="smooth") %dopar% {
    startTime <- Sys.time()
    test <- adam(datasets[[i]],"ZXZ");
    testForecast <- forecast(test, h=datasets[[i]]$h, interval="pred");
    testForecast$timeElapsed <- Sys.time() - startTime;
    return(errorMeasuresFunction(testForecast, datasets[[i]]$xx, datasets[[i]]$x));
}
testResults[j,,] <- t(result);

#### ADAMARIMA ####
j <- 3;
result <- foreach(i=1:datasetLength, .combine="cbind", .packages="smooth") %dopar% {
    startTime <- Sys.time()
    test <- adam(datasets[[i]], "NNN",
                 order=list(ar=c(3,2),i=c(2,1),ma=c(3,2),select=TRUE));
    testForecast <- forecast(test, h=datasets[[i]]$h, interval="pred");
    testForecast$timeElapsed <- Sys.time() - startTime;
    return(errorMeasuresFunction(testForecast, datasets[[i]]$xx, datasets[[i]]$x));
}
testResults[j,,] <- t(result);

#### ES(ZXZ) ####
j <- 4;
result <- foreach(i=1:datasetLength, .combine="cbind", .packages="smooth") %dopar% {
    startTime <- Sys.time()
    test <- es(datasets[[i]],"ZXZ");
    testForecast <- forecast(test, h=datasets[[i]]$h, interval="parametric");
    testForecast$timeElapsed <- Sys.time() - startTime;
    return(errorMeasuresFunction(testForecast, datasets[[i]]$xx, datasets[[i]]$x));
}
testResults[j,,] <- t(result);

#### ETS from forecast package ####
j <- 5;
result <- foreach(i=1:datasetLength, .combine="cbind", .packages="forecast") %dopar% {
    startTime <- Sys.time()
    test <- ets(datasets[[i]]$x);
    testForecast <- forecast(test, h=datasets[[i]]$h, level=95);
    testForecast$timeElapsed <- Sys.time() - startTime;
    return(errorMeasuresFunction(testForecast, datasets[[i]]$xx, datasets[[i]]$x));
}
testResults[j,,] <- t(result);

#### AUTO SSARIMA ####
j <- 6;
result <- foreach(i=1:datasetLength, .combine="cbind", .packages="smooth") %dopar% {
    startTime <- Sys.time()
    test <- auto.ssarima(datasets[[i]]);
    testForecast <- forecast(test, h=datasets[[i]]$h, interval=TRUE);
    testForecast$timeElapsed <- Sys.time() - startTime;
    return(errorMeasuresFunction(testForecast, datasets[[i]]$xx, datasets[[i]]$x));
}
testResults[j,,] <- t(result);

#### AUTOARIMA ####
j <- 7;
result <- foreach(i=1:datasetLength, .combine="cbind", .packages="forecast") %dopar% {
    startTime <- Sys.time()
    test <- auto.arima(datasets[[i]]$x);
    testForecast <- forecast(test, h=datasets[[i]]$h, level=95);
    testForecast$timeElapsed <- Sys.time() - startTime;
    return(errorMeasuresFunction(testForecast, datasets[[i]]$xx, datasets[[i]]$x));
}
testResults[j,,] <- t(result);

# If you work on Windows, don't forget to shut down the cluster via the following command:
# stopCluster(cl)</pre>
</div></div>
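<p>Note that the <code>%dopar%</code> loops above assume that a parallel backend has already been registered. The registration itself is not shown in the chunk, so here is a minimal sketch of one possible setup, assuming the doParallel package and a cluster object named <code>cl</code> (matching the <code>stopCluster(cl)</code> comment at the end of the chunk):</p>

```r
# A possible parallel backend setup for the %dopar% loops above.
# Assumption: doParallel is used; the original post may register
# the backend differently.
library(doParallel)
cl <- makeCluster(detectCores() - 1)
registerDoParallel(cl)
# ... run the foreach loops ...
# stopCluster(cl)
```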
<p>After running this code, we will get a big array (7x5315x21) containing many different error measures for <a href="/en/2019/08/25/are-you-sure-youre-precise-measuring-accuracy-of-point-forecasts/">point forecasts</a> and <a href="/en/2019/10/18/how-confident-are-you-assessing-the-uncertainty-in-forecasting/">prediction intervals</a>. We will not use all of them; instead, we will extract MASE and RMSSE for point forecasts and Coverage, Range and sMIS for prediction intervals, together with the computational time. Although it might be more informative to look at the distributions of these measures, we will calculate the overall mean and median values, just to get a feel for the performance:<br />
<div class="su-spoiler su-spoiler-style-fancy su-spoiler-icon-plus su-spoiler-closed" data-scroll-offset="0" data-anchor-in-url="no"><div class="su-spoiler-title" tabindex="0" role="button"><span class="su-spoiler-icon"></span>A much smaller chunk of code in R</div><div class="su-spoiler-content su-u-clearfix su-u-trim">
<pre class="decode">round(apply(testResults[,,c("MASE","RMSSE","Coverage","Range","sMIS","Time")],
            c(1,3),mean),3)
round(apply(testResults[,,c("MASE","RMSSE","Range","sMIS","Time")],
            c(1,3),median),3)</pre>
</div></div>
This will result in the following two tables (boldface shows the best performing functions):</p>
<pre><strong>Means</strong>:
               MASE RMSSE Coverage Range  sMIS  Time
ADAM-ETS(ZZZ) 2.415 2.098    0.888 1.398 2.437 0.654
ADAM-ETS(ZXZ) <strong>2.250 1.961    0.895</strong> 1.225 <strong>2.092</strong> 0.497
ADAM-ARIMA    2.551 2.203    0.862 0.968 3.098 5.990
ETS(ZXZ)      2.279 1.977    0.862 1.372 2.490 1.128
ETSHyndman    2.263 1.970    0.882 1.200 2.258 <strong>0.404</strong>
AutoSSARIMA   2.482 2.134    0.801 <strong>0.780</strong> 3.335 1.700
AutoARIMA     2.303 1.989    0.834 0.805 3.013 1.385

<strong>Medians</strong>:
               MASE RMSSE Range  sMIS  Time
ADAM-ETS(ZZZ) 1.362 1.215 0.671 0.917 0.396
ADAM-ETS(ZXZ) 1.327 1.184 0.675 0.909 0.310
ADAM-ARIMA    1.476 1.300 0.769 1.006 3.525
ETS(ZXZ)      1.335 1.198 0.616 0.931 0.551
ETSHyndman    1.323 <strong>1.181</strong> 0.653 0.925 <strong>0.164</strong>
AutoSSARIMA   1.419 1.271 <strong>0.577</strong> 0.988 0.909
AutoARIMA     <strong>1.310</strong> 1.182 0.609 <strong>0.881</strong> 0.322</pre>
<p>Some things to note from this:</p>
<ul>
<li>ADAM ETS(ZXZ) is the most accurate model in terms of mean MASE and RMSSE; it has the coverage closest to 95% (although none of the models achieved the nominal value because of the fundamental underestimation of uncertainty) and the lowest sMIS, implying that it did better than the other functions in terms of prediction intervals;</li>
<li>ETS(ZZZ) did worse than ETS(ZXZ) because the former considers the multiplicative trend, which sometimes becomes unstable and produces exploding trajectories;</li>
<li>ADAM ARIMA does not perform well yet because of the implemented order selection algorithm, and it was the slowest function of all. I plan to improve it in future releases;</li>
<li>While ADAM ETS(ZXZ) did not beat the ETS from the forecast package in terms of computational time, it was faster than the other functions;</li>
<li>When it comes to medians, <code>auto.arima()</code>, <code>ets()</code> and <code>auto.ssarima()</code> seem to do better than ADAM, but not by a large margin.</li>
</ul>
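<p>To make the difference between the two ETS pools concrete: "Z" selects a component among all available options, while "X" restricts the selection to additive ones, so ETS(ZXZ) never tries a multiplicative trend. A quick illustrative check, assuming the smooth package is installed (the selected models will depend on the data):</p>

```r
library(smooth)
# "ZZZ": error/trend/seasonality selected from all types,
#        including the multiplicative trend
fitFull <- adam(AirPassengers, "ZZZ")
# "ZXZ": the trend can only be "N" (none) or "A" (additive)
fitRestricted <- adam(AirPassengers, "ZXZ")
# Compare which models were selected
modelType(fitFull)
modelType(fitRestricted)
```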
<p>In order to see whether the performance of the functions differs statistically, we run <a href="/en/2020/08/17/accuracy-of-forecasting-methods-can-you-tell-the-difference/">the RMCB test</a> for MASE, RMSSE and sMIS. Note that RMCB compares the median performance of the functions. Here is the R code:<br />
<div class="su-spoiler su-spoiler-style-fancy su-spoiler-icon-plus su-spoiler-closed" data-scroll-offset="0" data-anchor-in-url="no"><div class="su-spoiler-title" tabindex="0" role="button"><span class="su-spoiler-icon"></span>A smaller chunk of code in R for the MCB test</div><div class="su-spoiler-content su-u-clearfix su-u-trim">
<pre class="decode"># Load the package with the function
library(greybox)
# Run it for each separate measure, automatically producing plots
rmcbResultMASE <- rmcb(t(testResults[,,"MASE"]))
rmcbResultRMSSE <- rmcb(t(testResults[,,"RMSSE"]))
rmcbResultsMIS <- rmcb(t(testResults[,,"sMIS"]))</pre>
</div></div>
<p>And here are the figures produced by running that code:</p>
<div id="attachment_2599" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBMASE.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-2599" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBMASE-300x175.png&amp;nocache=1" alt="RMCB test for MASE" width="300" height="175" class="size-medium wp-image-2599" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBMASE-300x175.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBMASE-1024x597.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBMASE-768x448.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBMASE.png&amp;nocache=1 1200w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-2599" class="wp-caption-text">RMCB test for MASE</p></div>
<div id="attachment_2598" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBRMSSE.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-2598" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBRMSSE-300x175.png&amp;nocache=1" alt="RMCB test for RMSSE" width="300" height="175" class="size-medium wp-image-2598" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBRMSSE-300x175.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBRMSSE-1024x597.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBRMSSE-768x448.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBRMSSE.png&amp;nocache=1 1200w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-2598" class="wp-caption-text">RMCB test for RMSSE</p></div>
<p>As we can see from the two figures above, ADAM-ETS(Z,X,Z) performs better than the other functions, although it is not statistically different from the ETS implemented in the <code>es()</code> and <code>ets()</code> functions. ADAM-ARIMA is the worst performing function for the moment, as we already noticed in the analysis above. The ranking is similar for both MASE and RMSSE.</p>
<p>And here is the sMIS plot:</p>
<div id="attachment_2597" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBsMIS.png&amp;nocache=1"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-2597" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBsMIS-300x175.png&amp;nocache=1" alt="RMCB test for sMIS" width="300" height="175" class="size-medium wp-image-2597" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBsMIS-300x175.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBsMIS-1024x597.png&amp;nocache=1 1024w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBsMIS-768x448.png&amp;nocache=1 768w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2021/01/adamTestsRMCBsMIS.png&amp;nocache=1 1200w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-2597" class="wp-caption-text">RMCB test for sMIS</p></div>
<p>When it comes to sMIS, the leader in terms of medians is <code>auto.arima()</code>, performing similarly to <code>ets()</code>, but this is mainly because the two produce narrower ranges, incidentally resulting in lower-than-needed coverage (as seen from the summary performance above). ADAM-ETS performs similarly to <code>ets()</code> and <code>es()</code> in this respect (the intervals of the three intersect).</p>
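<p>To see why narrower intervals can win on sMIS despite undercoverage, it helps to look at how the interval score trades width against misses. Below is an illustrative computation following the Gneiting &amp; Raftery (2007) interval score, with the mean score scaled by the in-sample mean; the exact implementation in the smooth/greybox packages may differ in details, and the numbers here are a toy example:</p>

```r
# Interval score: width plus a penalty of 2/alpha times the distance
# by which an observation falls outside the interval
intervalScore <- function(y, lower, upper, level=0.95){
    alpha <- 1 - level
    (upper - lower) +
        2/alpha * (lower - y) * (y < lower) +
        2/alpha * (y - upper) * (y > upper)
}
# Toy holdout of three observations; the third one misses the interval
y <- c(10, 12, 20)
l <- c(8, 9, 10)
u <- c(14, 15, 16)
MIS <- mean(intervalScore(y, l, u))
# Scale by the mean of a toy in-sample series to get an sMIS-style value
sMISvalue <- MIS / mean(c(9, 11, 13))
```

<p>The single missed observation contributes a penalty of 40 times the miss distance at the 95% level, which is why wide-but-covering and narrow-but-missing intervals can end up with similar scores.</p>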
<p>Obviously, we could provide a more detailed analysis of the performance of the functions on different types of data and see how they compare in each category, but the aim of this post is just to demonstrate how the new function works; I do not intend to investigate this in detail.</p>
<p>Finally, I will present ADAM with several case studies at the <a href="https://cmaf-fft.lp151.com/" rel="noopener" target="_blank">CMAF Friday Forecasting Talk</a> on 15th January. If you are interested in hearing more and have some questions, please <a href="https://www.meetup.com/cmaf-friday-forecasting-talks/" rel="noopener" target="_blank">register on MeetUp</a> or <a href="https://www.linkedin.com/events/cmaffft-toinfinityandbeyond-for6751883043834773504/" rel="noopener" target="_blank">via LinkedIn</a> and join us online.</p>
<p>Message <a href="https://openforecast.org/2021/01/13/the-creation-of-adam-next-step-in-statistical-forecasting/">The creation of ADAM &#8211; next step in statistical forecasting</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2021/01/13/the-creation-of-adam-next-step-in-statistical-forecasting/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
