<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Archives Python - Open Forecasting</title>
	<atom:link href="https://openforecast.org/tag/python/feed/" rel="self" type="application/rss+xml" />
	<link>https://openforecast.org/tag/python/</link>
	<description>How to look into the future</description>
	<lastBuildDate>Sun, 03 May 2026 10:18:09 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2015/08/cropped-usd-05-32x32.png&amp;nocache=1</url>
	<title>Archives Python - Open Forecasting</title>
	<link>https://openforecast.org/tag/python/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>smooth in python: ETS with explanatory variables</title>
		<link>https://openforecast.org/2026/05/05/smooth-in-python-ets-with-explanatory-variables/</link>
					<comments>https://openforecast.org/2026/05/05/smooth-in-python-ets-with-explanatory-variables/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Tue, 05 May 2026 08:03:37 +0000</pubDate>
				<category><![CDATA[ETS]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[smooth for Python]]></category>
		<category><![CDATA[Social media]]></category>
		<category><![CDATA[extrapolation methods]]></category>
		<category><![CDATA[smooth]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=4128</guid>

					<description><![CDATA[<p>We continue our series of posts on the functions from the smooth package for Python/R. Today we will see how to enhance your exponential smoothing with explanatory variables. What? Yes, you heard me! Let&#8217;s dive in! We all know that in real life sales don&#8217;t just evolve over time on their own. Any univariate model, [&#8230;]</p>
<p>The post <a href="https://openforecast.org/2026/05/05/smooth-in-python-ets-with-explanatory-variables/">smooth in python: ETS with explanatory variables</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>We continue our series of posts on the functions from the smooth package for Python/R. Today we will see how to enhance your exponential smoothing with explanatory variables. What? Yes, you heard me! Let&#8217;s dive in!</p>
<p>We all know that in real life sales don&#8217;t just evolve over time on their own. Any univariate model, such as ARIMA or ETS, is just a way to approximate a complex reality. In practice, there are many factors affecting the demand for your product. What would happen if the price of your product increases? What if you run a promotion (e.g. &#8220;Buy One, Get One Free&#8221;)? Your competitor&#8217;s strategy impacts the demand for your product as well&#8230; There are many different factors, and some of them can be quite useful in demand forecasting. But can we join the dynamic univariate models with regression?</p>
<p>Yes, we can! Although ETS is thought of as a pure univariate model, it is easy to extend it to include explanatory variables. There are several great papers showing how it works (e.g. <a href="https://doi.org/10.1016/j.ijpe.2015.09.011">Kourentzes &#038; Petropoulos, 2016</a>), and in fact the <code>es()</code> function from the smooth package for R was used as a benchmark in <a href="https://doi.org/10.1016/j.ijforecast.2021.11.013">the M5 competition</a>.</p>
<p>So, consider a situation where you have weekly sales of a product with some recorded promotions (encoded as dummy variables). We will use a time series from the fcompdata package for Python. The first image shows how the series looks; the vertical lines show when promotions happen. The series itself seems to be seasonal, roughly repeating peaks and troughs every 52 observations (every year). Also, we see that there are two types of promotions, and when they happen sales tend to increase. So, including them should improve the model fit, and if the company decides to run promotions again, the model will forecast demand better. I will start by fitting the ETS(M,N,M) to the data:</p>
<pre class="decode">from smooth import ES
from fcompdata import PromoData

y = PromoData.y

model = ES(model="MNM", lags=52, holdout=True, h=13)
model.fit(y)
model.predict(h=13)
model.plot(7)</pre>
<p><strong>NOTE</strong>: PromoData has a specific structure with several attributes. PromoData.x contains the in-sample data, while PromoData.xx has the holdout &#8211; this is consistent with the Mcomp package for R. The new features in Python are:</p>
<ul>
<li>PromoData.y &#8211; concatenated training and test sets,</li>
<li>PromoData.xregx &#8211; matrix of explanatory variables for the training set,</li>
<li>PromoData.xregxx &#8211; matrix of explanatory variables for the test set,</li>
<li>PromoData.xreg &#8211; the full (concatenated) matrix of explanatory variables.</li>
</ul>
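<p>How these attributes relate can be sketched with a toy stand-in (a hypothetical mock, not the real fcompdata object): the concatenated fields are simply the training and test parts stacked together.</p>

```python
# Hypothetical stand-in mirroring the PromoData layout described above
# (not the real fcompdata object, just its documented attribute structure)
class MockPromoData:
    x = [100, 120, 115]           # in-sample sales
    xx = [130, 125]               # holdout sales
    y = x + xx                    # concatenated training and test sets
    xregx = [[0], [1], [0]]       # promo dummies for the training set
    xregxx = [[1], [0]]           # promo dummies for the test set
    xreg = xregx + xregxx         # full (concatenated) dummy matrix

# The concatenated attributes line up with their parts
print(len(MockPromoData.y), len(MockPromoData.xreg))  # 5 5
```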
<p>The following image shows the model fit and the point forecasts from the ETS(M,N,M):</p>
<div id="attachment_4132" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2026/05/2026-04-17-smooth-posts-03-ETSX-02.png&amp;nocache=1"><img fetchpriority="high" decoding="async" aria-describedby="caption-attachment-4132" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2026/05/2026-04-17-smooth-posts-03-ETSX-02-300x214.png&amp;nocache=1" alt="ETS(M,N,M) fit and forecast for the promotional data example" width="300" height="214" class="size-medium wp-image-4132" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2026/05/2026-04-17-smooth-posts-03-ETSX-02-300x214.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2026/05/2026-04-17-smooth-posts-03-ETSX-02.png&amp;nocache=1 700w" sizes="(max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-4132" class="wp-caption-text">ETS(M,N,M) fit and forecast for the promotional data example</p></div>
<p>As expected, because the model does not take promotions into account, it fits the data as well as it can and produces forecasts that are oblivious to the potential external effects on sales. We can improve it by including the promotional dummies:</p>
<pre class="decode"># Explanatory variables: full matrix for fitting, test part for forecasting
X_train = PromoData.xreg
X_test = PromoData.xregxx

model = ES(model="MNM", lags=52, holdout=True, h=13)
model.fit(y, X_train)
model.predict(h=13, X=X_test)
model.plot(7)</pre>
<div id="attachment_4131" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2026/05/2026-04-17-smooth-posts-03-ETSX-03.png&amp;nocache=1"><img decoding="async" aria-describedby="caption-attachment-4131" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2026/05/2026-04-17-smooth-posts-03-ETSX-03-300x214.png&amp;nocache=1" alt="ETS(M,N,M) with explanatory variables" width="300" height="214" class="size-medium wp-image-4131" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2026/05/2026-04-17-smooth-posts-03-ETSX-03-300x214.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2026/05/2026-04-17-smooth-posts-03-ETSX-03.png&amp;nocache=1 700w" sizes="(max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-4131" class="wp-caption-text">ETS(M,N,M) with explanatory variables</p></div>
<p>The image above shows the fit and the point forecasts from the ETSX(M,N,M) model that now takes the promotions into account. This is quite an improvement in comparison with the previous one. Furthermore, if we can control when to have promotions and what types of promotions to run, we can change the values in the <code>X_test</code> matrix and see what demand to expect in that situation. So, this gives an analyst a tool for a more advanced sensitivity analysis.</p>
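<p>That sensitivity analysis boils down to editing the future dummy matrix before predicting. A minimal sketch with hypothetical values (only the list layout is assumed here; the model call is the one used above):</p>

```python
import copy

# Hypothetical 13-week horizon with two promotion types as dummy columns
X_test = [[0, 0] for _ in range(13)]

# Scenario: run promotion type 1 in weeks 5-7 of the forecast horizon
X_scenario = copy.deepcopy(X_test)
for week in (4, 5, 6):              # zero-based indices for weeks 5-7
    X_scenario[week][0] = 1

# Passing X_scenario instead of X_test to model.predict(h=13, X=...)
# would then show the demand expected under that promotion plan
print(sum(row[0] for row in X_scenario))  # 3 promotion weeks switched on
```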
<p>Read more about the ETSX <a href="https://openforecast.org/adam/ADAMX.html">here</a>.<br />
Install smooth: <code>pip install smooth</code><br />
<a href="https://github.com/config-i1/smooth/wiki/Explanatory-Variables">ETSX wiki on github</a>.</p>
<p>The post <a href="https://openforecast.org/2026/05/05/smooth-in-python-ets-with-explanatory-variables/">smooth in python: ETS with explanatory variables</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2026/05/05/smooth-in-python-ets-with-explanatory-variables/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>smooth in python: ETS forecast combination</title>
		<link>https://openforecast.org/2026/04/27/smooth-in-python-ets-forecast-combination/</link>
					<comments>https://openforecast.org/2026/04/27/smooth-in-python-ets-forecast-combination/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Mon, 27 Apr 2026 08:01:30 +0000</pubDate>
				<category><![CDATA[ETS]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[smooth for Python]]></category>
		<category><![CDATA[Univariate models]]></category>
		<category><![CDATA[ADAM]]></category>
		<category><![CDATA[extrapolation methods]]></category>
		<category><![CDATA[smooth]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=4121</guid>

					<description><![CDATA[<p>Last time we saw how to do automated model selection using the ES function from the smooth package. Now I want to show how to produce combined forecasts from ETS. Why bother? There is a vast body of literature on forecast combinations (read this great review). The main idea is that you should not put [&#8230;]</p>
<p>The post <a href="https://openforecast.org/2026/04/27/smooth-in-python-ets-forecast-combination/">smooth in python: ETS forecast combination</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Last time we saw how to do automated model selection using the ES function from the smooth package. Now I want to show how to produce combined forecasts from ETS.</p>
<p>Why bother?</p>
<p>There is a vast body of literature on forecast combinations (read <a href="https://doi.org/10.1016/j.ijforecast.2022.11.005">this great review</a>). The main idea is that you should not put all your eggs in one basket — the safer strategy is to combine forecasts from different models instead of selecting just one. Yes, it is more computationally expensive, but the trade-off is higher accuracy on average.</p>
<p>For ETS, a great solution was proposed by <a href="https://doi.org/10.1016/j.ijforecast.2010.04.006">Stephan Kolassa in his 2011 paper</a>: extract AIC values, calculate AIC weights (giving the highest weight to the best-performing model and lower ones to the rest), then combine the forecasts. The resulting forecasts tend to be more robust, because in practice it might be hard to tell the difference between, for example, ETS(M,A,M) and ETS(M,Md,M). So why choose one when you can have all? I implemented this mechanism in the smooth package for R years ago, and now it is also available in Python.</p>
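<p>The weighting step itself is simple enough to sketch in a few lines (made-up AIC values for illustration; this is the standard Akaike-weights formula, not the smooth source code):</p>

```python
import math

def aic_weights(aics):
    """Akaike weights: rescale each AIC by the minimum, then
    w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j)."""
    best = min(aics)
    unnorm = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Hypothetical AIC values for three candidate ETS models
weights = aic_weights([1457.7, 1459.2, 1463.0])
# The best model (lowest AIC) gets the largest weight; the combined
# forecast is then sum_i weights[i] * forecast_i over the candidates
```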
<p>Here is how it works on an example using an M3 time series. I picked this specific one because it is seasonal, but the trend is not very well pronounced. The series is shown in the first image.</p>
<pre class="decode">from smooth import ES
from fcompdata import M3

series = M3[1687]
y = series.y
freq = series.period

# Fit ETS models, combine forecasts
model = ES(model="CXC", lags=freq, h=18, holdout=True)
model.fit(y)
model.predict(h=18)</pre>
<p>The code above tells ES to fit all ETS models with either no trend or an additive one (the &#8220;X&#8221; in the middle), calculate AIC weights, produce forecasts from each of them, and then combine them. The resulting point forecast is the weighted combination of the individual forecasts. If a prediction interval is required, the specific quantiles are combined directly (see the paper by <a href="https://doi.org/10.1287/mnsc.1120.1667">Lichtendahl et al., 2013</a>). This is inevitably slower than the default model selection mechanism, but it is a safer approach. The point forecast and the prediction interval (grey lines) are shown in the attached image.</p>
<p>Note that the user can regulate the pool of combined models via the &#8220;model&#8221; parameter of the function. <a href="https://github.com/config-i1/smooth/wiki/ADAM#ets-models">This wiki explains all the accepted options</a>.</p>
<p>So why not go ahead and try it yourself, and see how it works for your data?</p>
<p>🔗 Install smooth: pip install smooth<br />
📖 More on <a href="https://openforecast.org/adam/ADAMCombinations.html">forecasts combination in ADAM</a>.</p>
<p>The post <a href="https://openforecast.org/2026/04/27/smooth-in-python-ets-forecast-combination/">smooth in python: ETS forecast combination</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2026/04/27/smooth-in-python-ets-forecast-combination/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>smooth in python: ETS with model selection</title>
		<link>https://openforecast.org/2026/04/22/smooth-in-python-ets-with-model-selection/</link>
					<comments>https://openforecast.org/2026/04/22/smooth-in-python-ets-with-model-selection/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Wed, 22 Apr 2026 00:06:43 +0000</pubDate>
				<category><![CDATA[ETS]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[smooth for Python]]></category>
		<category><![CDATA[Social media]]></category>
		<category><![CDATA[ADAM]]></category>
		<category><![CDATA[extrapolation methods]]></category>
		<category><![CDATA[smooth]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=4111</guid>

					<description><![CDATA[<p>As some of you have heard, the smooth package is now on PyPI. So, I&#8217;ve decided to write a series of posts showcasing how some of its functions work. We start with the basics, ETS. ETS stands for the &#8220;Error-Trend-Seasonal&#8221; model or ExponenTial Smoothing. It is a statistical model that relies on time series decomposition [&#8230;]</p>
<p>The post <a href="https://openforecast.org/2026/04/22/smooth-in-python-ets-with-model-selection/">smooth in python: ETS with model selection</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>As some of you have heard, the smooth package is now on PyPI. So, I&#8217;ve decided to write a series of posts showcasing how some of its functions work. We start with the basics, ETS.</p>
<p>ETS stands for the &#8220;Error-Trend-Seasonal&#8221; model or ExponenTial Smoothing. It is a statistical model that relies on time series decomposition and updates the unobserved states (level/trend/seasonal) based on the mistakes it makes. In a way, you can call it an adaptive model that changes its forecast based on the most recent available information. It is relatively simple to explain and work with, and it has performed well in a variety of competitions (M3, M4, M5, for example).</p>
<p>The smooth package implements an advanced form of ETS in the ADAM class and a more basic one in the ES class. In fact, ES is just a wrapper around ADAM: the conventional model with some additional tuning. Both support all 30 ETS models, have automated model selection and forecast combination, and allow producing point forecasts and a variety of prediction interval types. So, if you want a straightforward, robust implementation of ETS, give ES a try.</p>
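<p>The &#8220;30 ETS models&#8221; figure comes from combining two error types, five trend types, and three seasonal types. A quick enumeration of the taxonomy (notation only, not the smooth API):</p>

```python
from itertools import product

errors = ["A", "M"]                   # additive / multiplicative error
trends = ["N", "A", "Ad", "M", "Md"]  # none, additive, damped additive,
                                      # multiplicative, damped multiplicative
seasons = ["N", "A", "M"]             # none, additive, multiplicative seasonality

models = ["".join(p) for p in product(errors, trends, seasons)]
print(len(models))  # 30 model forms, e.g. 'ANN', 'MAM', 'MAdM', ...
```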
<p>Here&#8217;s how to use it in Python:</p>
<pre class="decode">from smooth import ES
from fcompdata import M3

# Pick a series from the M3 competition for demonstration
series = M3[2568]
y = series.x
freq = series.period

# Fit ES with the automatic model selection
model = ES(lags=freq, h=18, holdout=True)
model.fit(y)
print(model)</pre>
<p>Running this produces output similar to this:</p>
<pre>Time elapsed: 0.4 seconds
Model estimated using ES() function: ETS(MAM)
With backcasting initialisation
Distribution assumed in the model: Normal
Loss function type: likelihood; Loss function value: 724.8524
Persistence vector g:
 alpha   beta  gamma
0.0065 0.0000 0.0000
Sample size: 98
Number of estimated parameters: 4
Number of degrees of freedom: 94
Information criteria:
      AIC      AICc       BIC      BICc
1457.7047 1458.1348 1468.0446 1469.0306

Forecast errors:
ME: -580.9985; MAE: 604.0204; RMSE: 710.5457
sCE: -149.9347%; Asymmetry: -2.5%; sMAE: 8.6598%; sMSE: 1.0378%
MASE: 0.2653; RMSSE: 0.2452; rMAE: 0.2555; rRMSE: 0.2163</pre>
<p>A few things worth noting from the output:</p>
<ul>
<li>ES automatically selected ETS(MAM) &#8211; a model with multiplicative error, additive trend, and multiplicative seasonality &#8211; as the best fit based on the AICc value;</li>
<li>It used backcasting for the model initialisation (the default), which speeds up the process and requires fewer parameters to estimate;</li>
<li>It kept the last 18 observations for the holdout, automatically produced forecasts for it, and calculated several forecast errors. This is handy if you want to directly compare different smooth models on a time series.</li>
</ul>
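<p>The reported AICc can be cross-checked by hand: with k estimated parameters and sample size n, the standard small-sample correction is AICc = AIC + 2k(k+1)/(n-k-1). Plugging in the printed values (assuming this standard formula is what the package uses):</p>

```python
# AIC, number of parameters, and sample size taken from the output above
aic, k, n = 1457.7047, 4, 98
aicc = aic + 2 * k * (k + 1) / (n - k - 1)
print(round(aicc, 4))  # 1458.1348, matching the reported AICc
```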
<p>But why are we here? We want to forecast! So, here it is:</p>
<pre class="decode">
model.predict(h=18, interval="prediction")
model.plot(7)
</pre>
<p>This should produce an image similar to the one attached to the post. As simple as that.</p>
<p>Now it&#8217;s your turn! :)</p>
<p>🔗 Install smooth: pip install smooth<br />
📖 <a href="https://github.com/config-i1/smooth/wiki">smooth wiki</a></p>
<p>The post <a href="https://openforecast.org/2026/04/22/smooth-in-python-ets-with-model-selection/">smooth in python: ETS with model selection</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2026/04/22/smooth-in-python-ets-with-model-selection/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Forecasting Competitions Datasets in Python</title>
		<link>https://openforecast.org/2026/01/26/forecasting-competitions-datasets-in-python/</link>
					<comments>https://openforecast.org/2026/01/26/forecasting-competitions-datasets-in-python/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Mon, 26 Jan 2026 09:29:25 +0000</pubDate>
				<category><![CDATA[Python]]></category>
		<category><![CDATA[Social media]]></category>
		<category><![CDATA[Competitions]]></category>
		<category><![CDATA[time series]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=3955</guid>

					<description><![CDATA[<p>Here is one small, unexpected piece of news: I now have my first package on PyPI! It’s called fcompdata, and let me tell you a little bit about it. When I test my functions in R, I usually use the M1, M3, and tourism competition datasets because they are diverse enough, containing seasonal, non-seasonal, trended, [&#8230;]</p>
<p>The post <a href="https://openforecast.org/2026/01/26/forecasting-competitions-datasets-in-python/">Forecasting Competitions Datasets in Python</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Here is one small, unexpected piece of news: I now have my first package on PyPI! It’s called <a href="https://pypi.org/project/fcompdata/">fcompdata</a>, and let me tell you a little bit about it.</p>
<p>When I test my functions in R, I usually use the M1, M3, and tourism competition datasets because they are diverse enough, containing seasonal, non-seasonal, trended, and non-trended time series of different frequencies (yearly, quarterly, monthly). The total number of these series is 5,315, which is large enough but not too heavy for my PC. So, when I run something on those datasets, it becomes like a stress test for the forecasting approach, and I can see where it fails and how it can be improved. I consider this type of test a toy experiment — something to do before applying anything to real-world data.</p>
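<p>For the record, the 5,315 figure is just the three datasets added up (the individual counts below are the published sizes of the M1, M3, and tourism competitions):</p>

```python
# Published dataset sizes: M1 (1001 series), M3 (3003), tourism (1311)
counts = {"M1": 1001, "M3": 3003, "Tourism": 1311}
print(sum(counts.values()))  # 5315
```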
<p>In R, there are the Mcomp and Tcomp packages that contain these datasets, and I like how they are organised. You can do something like this:</p>
<pre class="decode">series <- Mcomp::M3[[2568]]
ourModel <- adam(series$x)
ourForecast <- forecast(ourModel, h=series$h)
ourError <- series$xx - ourForecast$mean</pre>
<p>Each series from the dataset contains all the necessary attributes to run the experiment without trouble. This is easy and straightforward. Plus, I don’t need to download or organise any data — I just use the installed package.</p>
<p>When I started vibe coding in Python, I realised that I missed this functionality. So, with the help of Claude AI, I created a Python script to download the data from the Monash repository and organise it the way I liked. But then I realised two things, which motivated me to package it:</p>
<ol>
<li>I needed to drag this script with me to every project I worked on. It would be much easier to just run "pip install fcompdata" and forget about everything else.</li>
<li>Some series in the Monash repository differ from those in the R package.</li>
</ol>
<p>Wait, what?! Really?</p>
<p>Yes. The difference is tiny &#8212; it&#8217;s a matter of rounding. For example, series N350 from the M1 competition data (T169 from the quarterly data subset) is stored with three decimal places in the R package and only two if downloaded from the Monash repository (Zenodo website).</p>
<p>Who cares?! It's just one digit difference, right?</p>
<p>Well, if you want to reproduce results across different languages, this tiny difference might become your nightmare. So, I care (and probably nobody else in the world), and I decided to create a proper Python package. You can now do this in Python and relax:</p>
<pre class="decode"># Install once from the terminal: pip install fcompdata

from fcompdata import M1, M3, Tourism
series = M3[2568]</pre>
<p>The "series" object is now an instance of the MCompSeries class that has the same attributes as in R: series.x, series.h, series.xx, etc.</p>
<p>As simple as that!</p>
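<p>The typical evaluation loop with those attributes can be sketched as follows (a hypothetical stand-in series and a naive forecast instead of a real model):</p>

```python
# Hypothetical stand-in with the MCompSeries-style attributes
class FakeSeries:
    x = [10.0, 12.0, 11.0, 13.0]   # in-sample data
    xx = [14.0, 15.0]              # holdout data
    h = 2                          # forecast horizon

# A naive (last-value) forecast stands in for a real model
forecast = [FakeSeries.x[-1]] * FakeSeries.h
errors = [actual - fc for actual, fc in zip(FakeSeries.xx, forecast)]
print(errors)  # [1.0, 2.0]
```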
<p>One more thing: I’ve added support for the M4 competition data, which — when imported — will be downloaded and formatted properly. The dataset is large (100k time series), and I personally don’t like it. I even wrote <a href="https://openforecast.org/2020/03/01/m-competitions-from-m4-to-m5-reservations-and-expectations/">a post about it back in 2020</a>. But if I want the package to be useful to a wider audience, I shouldn’t impose my personal preferences — you should decide for yourselves whether to use it or not.</p>
<p>P.S. Submitting to PyPI gave me a good understanding of the submission process for Python and why it can be such a mess. My package was published just a few seconds after submission — nobody looked at it, nobody ran any tests. CRAN does a variety of checks to ensure you don’t submit garbage. PyPI doesn’t care. So, I’ve gained more respect for CRAN after submitting this package to PyPI.</p>
<p>The post <a href="https://openforecast.org/2026/01/26/forecasting-competitions-datasets-in-python/">Forecasting Competitions Datasets in Python</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2026/01/26/forecasting-competitions-datasets-in-python/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
