<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Archives time series - Open Forecasting</title>
	<atom:link href="https://openforecast.org/tag/time-series/feed/" rel="self" type="application/rss+xml" />
	<link>https://openforecast.org/tag/time-series/</link>
	<description>How to look into the future</description>
	<lastBuildDate>Mon, 26 Jan 2026 09:29:25 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2015/08/cropped-usd-05-32x32.png&amp;nocache=1</url>
	<title>Archives time series - Open Forecasting</title>
	<link>https://openforecast.org/tag/time-series/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Forecasting Competitions Datasets in Python</title>
		<link>https://openforecast.org/2026/01/26/forecasting-competitions-datasets-in-python/</link>
					<comments>https://openforecast.org/2026/01/26/forecasting-competitions-datasets-in-python/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Mon, 26 Jan 2026 09:29:25 +0000</pubDate>
				<category><![CDATA[Python]]></category>
		<category><![CDATA[Social media]]></category>
		<category><![CDATA[Competitions]]></category>
		<category><![CDATA[time series]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=3955</guid>

					<description><![CDATA[<p>Here is one small, unexpected piece of news: I now have my first package on PyPI! It’s called fcompdata, and let me tell you a little bit about it. When I test my functions in R, I usually use the M1, M3, and tourism competition datasets because they are diverse enough, containing seasonal, non-seasonal, trended, [&#8230;]</p>
<p>The post <a href="https://openforecast.org/2026/01/26/forecasting-competitions-datasets-in-python/">Forecasting Competitions Datasets in Python</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Here is one small, unexpected piece of news: I now have my first package on PyPI! It’s called <a href="https://pypi.org/project/fcompdata/">fcompdata</a>, and let me tell you a little bit about it.</p>
<p>When I test my functions in R, I usually use the M1, M3, and tourism competition datasets because they are diverse enough, containing seasonal, non-seasonal, trended, and non-trended time series of different frequencies (yearly, quarterly, monthly). The total number of these series is 5,315, which is large enough but not too heavy for my PC. So, when I run something on those datasets, it becomes like a stress test for the forecasting approach, and I can see where it fails and how it can be improved. I consider this type of test a toy experiment — something to do before applying anything to real-world data.</p>
<p>In R, there are the Mcomp and Tcomp packages that contain these datasets, and I like how they are organised. You can do something like this:</p>
<pre class="decode">series <- Mcomp::M3[[2568]]
ourModel <- adam(series$x)
ourForecast <- forecast(model, h=series$h)
ourError <- series$xx - ourForecast$mean</pre>
<p>Each series from the dataset contains all the necessary attributes to run the experiment without trouble. This is easy and straightforward. Plus, I don’t need to download or organise any data — I just use the installed package.</p>
<p>When I started vibe coding in Python, I realised that I missed this functionality. So, with the help of Claude AI, I created a Python script to download the data from the Monash repository and organise it the way I liked. But then I realised two things, which motivated me to package it:</p>
<ol>
<li>I needed to drag this script with me to every project I worked on. It would be much easier to just run "pip install fcompdata" and forget about everything else.</li>
<li>Some series in the Monash repository differ from those in the R package.</li>
</ol>
<p>Wait, what?! Really?</p>
<p>Yes. The difference is tiny: it is a matter of rounding. For example, series N350 from the M1 competition data (T169 from the quarterly data subset) is stored with three decimal places in the R package, but with only two when downloaded from the Monash repository (the Zenodo website).</p>
<p>Who cares?! It's just one digit difference, right?</p>
<p>Well, if you want to reproduce results across different languages, this tiny difference might become your nightmare. So, I care (and probably nobody else in the world), and I decided to create a proper Python package. You can now do this in Python and relax:</p>
<pre class="decode"># install from the terminal first: pip install fcompdata
from fcompdata import M1, M3, Tourism
series = M3[2568]</pre>
<p>The "series" object is now an instance of the MCompSeries class that has the same attributes as in R: series.x, series.h, series.xx, etc.</p>
<p>As simple as that!</p>
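<p>To mirror the R snippet above in Python, here is a minimal sketch of the same experiment. The flat forecast below is just a placeholder model (fcompdata only provides the data), and the attribute names are those described above:</p>

```python
import numpy as np

def holdout_errors(x, xx, h):
    """Naive flat forecast (mean of the last 12 in-sample points) and
    its holdout errors. The flat forecast is a placeholder for a real
    model, such as adam() in the R example."""
    forecast = np.full(h, np.mean(np.asarray(x, dtype=float)[-12:]))
    return np.asarray(xx, dtype=float) - forecast

# With the package installed, the workflow from the R snippet becomes:
#   from fcompdata import M3
#   series = M3[2568]
#   errors = holdout_errors(series.x, series.xx, series.h)
```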
<p>One more thing: I’ve added support for the M4 competition data, which — when imported — will be downloaded and formatted properly. The dataset is large (100k time series), and I personally don’t like it. I even wrote <a href="https://openforecast.org/2020/03/01/m-competitions-from-m4-to-m5-reservations-and-expectations/">a post about it back in 2020</a>. But if I want the package to be useful to a wider audience, I shouldn’t impose my personal preferences — you should decide for yourselves whether to use it or not.</p>
<p>P.S. Submitting to PyPI gave me a good understanding of the submission process for Python and why it can be such a mess. My package was published just a few seconds after submission — nobody looked at it, nobody ran any tests. CRAN does a variety of checks to ensure you don’t submit garbage. PyPI doesn’t care. So, I’ve gained more respect for CRAN after submitting this package to PyPI.</p>
<p>The post <a href="https://openforecast.org/2026/01/26/forecasting-competitions-datasets-in-python/">Forecasting Competitions Datasets in Python</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2026/01/26/forecasting-competitions-datasets-in-python/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Online Detection of Forecast Model Inadequacies Using Forecast Errors</title>
		<link>https://openforecast.org/2025/06/11/online-detection-of-forecast-model-inadequacies-using-forecast-errors/</link>
					<comments>https://openforecast.org/2025/06/11/online-detection-of-forecast-model-inadequacies-using-forecast-errors/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Wed, 11 Jun 2025 12:04:14 +0000</pubDate>
				<category><![CDATA[Papers]]></category>
		<category><![CDATA[changepoint]]></category>
		<category><![CDATA[papers]]></category>
		<category><![CDATA[time series]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=3863</guid>

					<description><![CDATA[<p>There&#8217;s a large and fascinating area in time series analysis called &#8220;changepoint detection&#8221;. I hadn&#8217;t worked in this area before, but thanks to Rebecca Killick and Thomas Grundy, I contributed to the paper &#8220;Online Detection of Forecast Model Inadequacies Using Forecast Errors&#8220;, which has just been published in the Journal of Time Series Analysis. DISCLAIMER: [&#8230;]</p>
<p>The post <a href="https://openforecast.org/2025/06/11/online-detection-of-forecast-model-inadequacies-using-forecast-errors/">Online Detection of Forecast Model Inadequacies Using Forecast Errors</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>There&#8217;s a large and fascinating area in time series analysis called &#8220;changepoint detection&#8221;. I hadn&#8217;t worked in this area before, but thanks to <a href="https://www.linkedin.com/in/rebecca-killick-0427b615a">Rebecca Killick</a> and <a href="https://www.linkedin.com/in/grundy95/">Thomas Grundy</a>, I contributed to the paper &#8220;<a href="https://doi.org/10.1111/jtsa.12843">Online Detection of Forecast Model Inadequacies Using Forecast Errors</a>&#8220;, which has just been published in the Journal of Time Series Analysis.</p>
<p><em>DISCLAIMER: the image in the post is taken from the paper, Figure 6, showing the proportion of GRP A&#038;E admissions, the forecast errors and two detectors.</em></p>
<p>Here&#8217;s a brief summary of what it&#8217;s about:</p>
<p>One of the common issues in forecasting is that there might be some serious changes in the data due to external factors (e.g. changes in consumer preferences). These changes are not always captured by the model, which can lead to reduced accuracy, increased variance, and ultimately to losses. The changepoint detection literature addresses this by trying to automatically identify such structural changes and alert analysts when intervention might be needed. This becomes especially useful when managing large numbers of time series, where visual inspection isn&#8217;t feasible.</p>
<p>However, most existing approaches either work directly on the raw data or rely on a specific model, which limits their usefulness.</p>
<p>Tom Grundy and Rebecca Killick came up with a better idea: analysing forecast errors instead. They kindly invited me to join as a co-author (since I know a thing or two about forecasting). The result is an online changepoint detection mechanism that is more universal and can be applied to classical statistical forecasting models and potentially to machine learning approaches.</p>
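<p>To give a flavour of the general idea (this is a textbook one-sided CUSUM applied to forecast errors, not the detector derived in the paper), an online monitoring scheme can be sketched as follows:</p>

```python
import numpy as np

def cusum_alarm(errors, burn_in=10, k=0.5, threshold=5.0):
    """Generic one-sided CUSUM on one-step-ahead forecast errors:
    standardise on a burn-in window, accumulate positive drift, and
    flag the first observation where it exceeds the threshold.
    Illustrative only; the paper derives its own detector."""
    e = np.asarray(errors, dtype=float)
    e = (e - e[:burn_in].mean()) / e[:burn_in].std()
    s = 0.0
    for t, et in enumerate(e):
        s = max(0.0, s + et - k)  # reset at zero, accumulate drift
        if s > threshold:
            return t              # alarm: the model looks inadequate
    return None                   # no changepoint detected
```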
<p>The paper is quite technical and includes theoretical derivations, showing that the proposed method substantially reduces detection delay compared to some conventional approaches. We also evaluated its performance with ARIMA and ETS models on simulated data and provided several examples with real time series, demonstrating how it works.</p>
<p>The final version of the paper <a href="https://doi.org/10.1111/jtsa.12843">is available here</a>, while the <a href="https://doi.org/10.48550/arXiv.2502.14173">pre-print is here</a>.</p>
<p>The post <a href="https://openforecast.org/2025/06/11/online-detection-of-forecast-model-inadequacies-using-forecast-errors/">Online Detection of Forecast Model Inadequacies Using Forecast Errors</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2025/06/11/online-detection-of-forecast-model-inadequacies-using-forecast-errors/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Is there such thing as &#8220;Time series forecasting&#8221;?</title>
		<link>https://openforecast.org/2024/10/15/is-there-such-thing-as-time-series-forecasting/</link>
					<comments>https://openforecast.org/2024/10/15/is-there-such-thing-as-time-series-forecasting/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Tue, 15 Oct 2024 17:27:07 +0000</pubDate>
				<category><![CDATA[Social media]]></category>
		<category><![CDATA[Theory of forecasting]]></category>
		<category><![CDATA[theory]]></category>
		<category><![CDATA[time series]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=3718</guid>

					<description><![CDATA[<p>Is there such thing as &#8220;Time series forecasting&#8221;? I personally don&#8217;t like this term and think that we should use a different one. Which one? Come with me in this post to find out. I understand why people use the term &#8220;Time series forecasting&#8221; &#8211; they want to show the type of data they work [&#8230;]</p>
<p>The post <a href="https://openforecast.org/2024/10/15/is-there-such-thing-as-time-series-forecasting/">Is there such thing as &#8220;Time series forecasting&#8221;?</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Is there such thing as &#8220;Time series forecasting&#8221;? I personally don&#8217;t like this term and think that we should use a different one. Which one? Come with me in this post to find out.</p>
<p>I understand why people use the term &#8220;Time series forecasting&#8221; &#8211; they want to show the type of data they work with and explain what they are doing. But there is no point in forecasting outside of time series. According to one of the definitions (which I previously mentioned <a href="/adam/forecastingPlanningAnalytics.html">here</a> and <a href="/2024/05/01/what-is-forecasting/">here</a>), a &#8220;<strong>forecast</strong> is a scientifically justified assertion about possible states of an object in the future&#8221;, while forecasting is just the process of producing forecasts. So, time is already embedded in the definition, and there is no need to add &#8220;time series&#8221; to it.</p>
<p>Furthermore, you cannot do &#8220;cross-sectional forecasting&#8221;. It wouldn&#8217;t be forecasting per se, but rather scenario generation. For example, based on cross-sectional data collected across several shops in a chain on a Monday, you can say that an increase in the price of a product should cause a decline in sales on average by some amount. You can even say what sales to expect if the price and other variables were set to specific values. But you cannot say what to expect next week based on this data, because cross-sectional data has no dynamic element. It is good for scenario planning, but useless for forecasting.</p>
<p>Moreover, when you say &#8220;time series forecasting&#8221;, you imply a wide area without any specificity. But there are many areas where you can do forecasting, and they differ substantially from one another. Are you doing &#8220;demand forecasting&#8221;, &#8220;revenue forecasting&#8221;, &#8220;price forecasting&#8221;, &#8220;volatility forecasting&#8221; or something else? While some approaches can be applied across many of these areas, each has its own specifics and its own set of instruments and rules. So, you should always keep the specific domain in mind, otherwise you might do something unreasonable. For example, GAMLSS works very well in energy demand forecasting, but it does not necessarily perform as well in supply chain forecasting, where you often have short histories and lots of zeroes.</p>
<p>But most importantly, forecasting should not be done for its own sake (see my <a href="/2020/03/23/forecasting-for-the-sake-of-forecasting/">old post from COVID times on this</a>). If you &#8220;forecast a time series&#8221;, then you are just doing an exercise without a specific aim. Forecasting should inform decisions. Yes, you can show how cool you are, but is there anything besides that?</p>
<p>So, when I see statements like &#8220;This and that guy will talk in our webinar about time series forecasting&#8221;, I feel that the person has a poor understanding of forecasting itself and does not know what to talk about, because there is no specificity in such talk. It&#8217;s like presenting on the topic of &#8220;frequentist statistics&#8221; &#8211; too broad and too general.</p>
<p>I feel that this post might provoke some debate, so feel free to express yourselves in the comments! :)</p>
<p>The post <a href="https://openforecast.org/2024/10/15/is-there-such-thing-as-time-series-forecasting/">Is there such thing as &#8220;Time series forecasting&#8221;?</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2024/10/15/is-there-such-thing-as-time-series-forecasting/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Seasonal or not?</title>
		<link>https://openforecast.org/2024/05/15/seasonal-or-not/</link>
					<comments>https://openforecast.org/2024/05/15/seasonal-or-not/#respond</comments>
		
		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Wed, 15 May 2024 13:00:38 +0000</pubDate>
				<category><![CDATA[Social media]]></category>
		<category><![CDATA[Theory of forecasting]]></category>
		<category><![CDATA[Seasonality]]></category>
		<category><![CDATA[time series]]></category>
		<guid isPermaLink="false">https://openforecast.org/?p=3575</guid>

					<description><![CDATA[<p>Not every pattern that appears seasonal is genuinely seasonal. This means you don&#8217;t always require a seasonal model when you see repetitive patterns with fixed periodicity. How come? First things first, in forecasting, the term &#8220;seasonality&#8221; refers to any natural pattern repeating with some periodicity. For example, if you work in a hospital with A&#038;E [&#8230;]</p>
<p>The post <a href="https://openforecast.org/2024/05/15/seasonal-or-not/">Seasonal or not?</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Not every pattern that appears seasonal is genuinely seasonal. This means you don&#8217;t always require a seasonal model when you see repetitive patterns with fixed periodicity. How come?</p>
<p>First things first, in forecasting, the term &#8220;seasonality&#8221; refers to any natural pattern repeating with some periodicity. For example, if you work in a hospital&#8217;s A&#038;E department, you know that every Monday has higher demand than other days, while weekends tend to have lower demand. This is a well-known phenomenon: <a href="https://digital.nhs.uk/data-and-information/publications/statistical/hospital-accident--emergency-activity/2019-20/time-of-day">people have fun over the weekend and then go to hospital first thing in the morning of the work week</a>, so if you don&#8217;t want to get stuck in the hospital, don&#8217;t injure yourselves over the weekend!</p>
<p>Anyway, back to the main topic of this post!</p>
<p>Consider the following example:</p>
<div id="attachment_3576" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/05/2024-05-14-Seasonality.png&amp;nocache=1"><img decoding="async" aria-describedby="caption-attachment-3576" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/05/2024-05-14-Seasonality-300x161.png&amp;nocache=1" alt="Seemingly seasonal time series" width="300" height="161" class="size-medium wp-image-3576" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/05/2024-05-14-Seasonality-300x161.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/05/2024-05-14-Seasonality.png&amp;nocache=1 640w" sizes="(max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-3576" class="wp-caption-text">Seemingly seasonal time series</p></div>
<p>This is daily data. Is it seasonal? Without context, you might assume so: there&#8217;s a midweek peak followed by a decline, repeating weekly, so it must be seasonal, right? But what if I told you that this data shows the daily LinkedIn impressions of my posts? The peaks coincide with post releases, and, although it is hard to tell from the image, I released the first three posts on Thursdays and then switched to Wednesdays, shifting the peaks one day forward. So, this is not seasonal data; instead, it is a classical life cycle (which can be described, for example, by the <a href="https://doi.org/10.1287/mnsc.15.5.215">Bass model</a>), repeating every week. It would be seasonal if the impressions happened naturally, without me releasing anything. For example, the number of visitors to my website has natural seasonality, because visits happen without my intervention:</p>
<div id="attachment_3577" style="width: 310px" class="wp-caption aligncenter"><a href="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/05/2024-05-14-Seasonality-website.png&amp;nocache=1"><img decoding="async" aria-describedby="caption-attachment-3577" src="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/05/2024-05-14-Seasonality-website-300x130.png&amp;nocache=1" alt="The series with natural seasonality" width="300" height="130" class="size-medium wp-image-3577" srcset="https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/05/2024-05-14-Seasonality-website-300x130.png&amp;nocache=1 300w, https://openforecast.org/wp-content/webpc-passthru.php?src=https://openforecast.org/wp-content/uploads/2024/05/2024-05-14-Seasonality-website.png&amp;nocache=1 734w" sizes="(max-width: 300px) 100vw, 300px" /></a><p id="caption-attachment-3577" class="wp-caption-text">The series with natural seasonality</p></div>
<p>Bringing this to a business context, I have encountered such &#8220;spurious seasonality&#8221; several times. For example, one company working with weekly data used a seasonal model with a periodicity of 4, because they had noticed that at the end of each month people get their salary and spend it, increasing the company&#8217;s sales. However, this is not true seasonality, but rather a calendar event that happens seemingly periodically, on a specific day of the month (not necessarily the same one).</p>
<p>Why is this important?</p>
<p>Relying on a seasonal model (like seasonal ETS or ARIMA) in such cases poses risks. If the periodicity shifts (e.g., salaries received on a different week), the model will produce misaligned forecasts (e.g., predicting an earlier peak). To address this, you should model such events with explanatory variables instead of seasonal indices. This way you will be able to control the timing of the event in your model and adjust it if needed.</p>
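<p>As a minimal sketch of that suggestion (the data and numbers here are made up for illustration), the salary-week event can enter a regression as a 0/1 dummy instead of a seasonal index, so its timing can simply be shifted whenever the event moves:</p>

```python
import numpy as np

def fit_event_dummy(sales, event_periods):
    """OLS of sales on an intercept and a 0/1 'payday' dummy.
    Returns [baseline, event uplift]. Unlike a fixed seasonal index,
    the dummy can be moved if the event shifts in time."""
    y = np.asarray(sales, dtype=float)
    dummy = np.zeros(len(y))
    dummy[list(event_periods)] = 1.0  # mark the weeks the event hits
    X = np.column_stack([np.ones(len(y)), dummy])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```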
<p>So, next time you see a pattern that looks seasonal, think whether it happens naturally, or whether you are dealing with the spurious seasonality. Remember, the context is always important!</p>
<p>The post <a href="https://openforecast.org/2024/05/15/seasonal-or-not/">Seasonal or not?</a> first appeared on <a href="https://openforecast.org">Open Forecasting</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://openforecast.org/2024/05/15/seasonal-or-not/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
