<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>
	Comments on: M-competitions, from M4 to M5: reservations and expectations	</title>
	<atom:link href="https://openforecast.org/2020/03/01/m-competitions-from-m4-to-m5-reservations-and-expectations/feed/" rel="self" type="application/rss+xml" />
	<link>https://openforecast.org/2020/03/01/m-competitions-from-m4-to-m5-reservations-and-expectations/</link>
	<description>How to look into the future</description>
	<lastBuildDate>Fri, 15 Mar 2024 12:43:24 +0000</lastBuildDate>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>
		By: Fotios Petropoulos		</title>
		<link>https://openforecast.org/2020/03/01/m-competitions-from-m4-to-m5-reservations-and-expectations/#comment-144</link>

		<dc:creator><![CDATA[Fotios Petropoulos]]></dc:creator>
		<pubDate>Thu, 25 Jun 2020 14:52:11 +0000</pubDate>
		<guid isPermaLink="false">https://openforecast.org/?p=2328#comment-144</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://openforecast.org/2020/03/01/m-competitions-from-m4-to-m5-reservations-and-expectations/#comment-143&quot;&gt;Ivan Svetunkov&lt;/a&gt;.

From the conclusions article in the M4 competition IJF special issue:
&quot;The forecasting spring began with the M4 Competition, where a complex hybrid approach combining statistical and ML elements came first, providing a 9.4% improvement in its sMAPE relative to that of the Comb benchmark.&quot;
Makridakis and Petropoulos (2020)]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://openforecast.org/2020/03/01/m-competitions-from-m4-to-m5-reservations-and-expectations/#comment-143">Ivan Svetunkov</a>.</p>
<p>From the conclusions article in the M4 competition IJF special issue:<br />
&#8220;The forecasting spring began with the M4 Competition, where a complex hybrid approach combining statistical and ML elements came first, providing a 9.4% improvement in its sMAPE relative to that of the Comb benchmark.&#8221;<br />
Makridakis and Petropoulos (2020)</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Ivan Svetunkov		</title>
		<link>https://openforecast.org/2020/03/01/m-competitions-from-m4-to-m5-reservations-and-expectations/#comment-143</link>

		<dc:creator><![CDATA[Ivan Svetunkov]]></dc:creator>
		<pubDate>Mon, 02 Mar 2020 10:22:56 +0000</pubDate>
		<guid isPermaLink="false">https://openforecast.org/?p=2328#comment-143</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://openforecast.org/2020/03/01/m-competitions-from-m4-to-m5-reservations-and-expectations/#comment-142&quot;&gt;Vangelis Spiliotis&lt;/a&gt;.

Hi Vangelis,

Thank you for your comment.

Indeed, we do agree to disagree. We don&#039;t need to have the same opinion on the topic, and I&#039;m just expressing mine based on the observations I made. I know that you have a reasonable view on the problem, although I&#039;m not sure that Spyros shares your opinion on the topic.

Anyway, good luck with M5!]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://openforecast.org/2020/03/01/m-competitions-from-m4-to-m5-reservations-and-expectations/#comment-142">Vangelis Spiliotis</a>.</p>
<p>Hi Vangelis,</p>
<p>Thank you for your comment.</p>
<p>Indeed, we do agree to disagree. We don&#8217;t need to have the same opinion on the topic, and I&#8217;m just expressing mine based on the observations I made. I know that you have a reasonable view on the problem, although I&#8217;m not sure that Spyros shares your opinion on the topic.</p>
<p>Anyway, good luck with M5!</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Vangelis Spiliotis		</title>
		<link>https://openforecast.org/2020/03/01/m-competitions-from-m4-to-m5-reservations-and-expectations/#comment-142</link>

		<dc:creator><![CDATA[Vangelis Spiliotis]]></dc:creator>
		<pubDate>Mon, 02 Mar 2020 09:34:28 +0000</pubDate>
		<guid isPermaLink="false">https://openforecast.org/?p=2328#comment-142</guid>

					<description><![CDATA[Hi Ivan,

Glad you like the set-up of M5. We also think it is reasonable and that it represents reality, at least as much as a competition allows while remaining relatively simple and comprehensible. I think M5 has a lot of potential and I am looking forward to reviewing its results.

Regarding M4, I think we&#039;ve already &quot;agreed we disagree&quot; in a previous post of yours about both the accuracy measures used in the competition and the way its results were interpreted. 

I won&#039;t comment on the M4 measures here as I&#039;ve already done that in your previous post. 
I would just like to make it clear that the finding of M4 was not that &quot;ML does not work&quot;. It was that pure ML methods, trained in a series-by-series fashion, are less accurate than traditional statistical approaches. We have, however, highlighted the importance of exploiting ML elements for cross-learning, i.e., learning from multiple series to predict the individual ones. In fact, this is what Slawek (1st place) and Pablo (2nd place) did, and this is exactly why they managed to win the competition. If it weren&#039;t for ML algorithms, applying cross-learning wouldn&#039;t be possible.

You claim that the setting of the competition itself was not suitable for ML in the first place because you cannot expect non-linearity in yearly, quarterly or monthly series, which are also relatively short for effective training. This is not true. ML methods do not necessarily have to be fitted to each series individually. Those who did that failed to get a high score, but the ones that trained their models across all 100,000 series of the competition got the highest scores. It is not about having long, non-linear data. It is about finding relationships between the series. Also, it is not about using ML methods. It is about the way you use them. Is this a super car or a super bike? I don&#039;t know, but it works nicely.

From my point of view, M4 promoted the utilization of ML in time series forecasting, showing how ML methods should be used to extract information from multiple series. Its large dataset allowed for such experimentation, and a lot of research has been done in this area, expanding from ML to deep learning (see the excellent work done by Boris and his colleagues here https://arxiv.org/abs/1905.10437). Personally, I love ML and I don&#039;t have anything against its use. I am also pretty sure that the winner of M5 will utilize an ML-based method.

Finally, I would like you to know that I appreciate that your current workload makes it difficult for you to participate. On the other hand, given that CMAF is the biggest forecasting center in the UK, it would be reasonable for some of its members to participate (maybe some PhD students under your supervision and that of other senior researchers?). M5 would be an excellent opportunity for forecasters to do some forecasting and experiment with the tools they&#039;ve been developing for such purposes.]]></description>
			<content:encoded><![CDATA[<p>Hi Ivan,</p>
<p>Glad you like the set-up of M5. We also think it is reasonable and that it represents reality, at least as much as a competition allows while remaining relatively simple and comprehensible. I think M5 has a lot of potential and I am looking forward to reviewing its results.</p>
<p>Regarding M4, I think we&#8217;ve already &#8220;agreed we disagree&#8221; in a previous post of yours about both the accuracy measures used in the competition and the way its results were interpreted. </p>
<p>I won&#8217;t comment on the M4 measures here as I&#8217;ve already done that in your previous post.<br />
I would just like to make it clear that the finding of M4 was not that &#8220;ML does not work&#8221;. It was that pure ML methods, trained in a series-by-series fashion, are less accurate than traditional statistical approaches. We have, however, highlighted the importance of exploiting ML elements for cross-learning, i.e., learning from multiple series to predict the individual ones. In fact, this is what Slawek (1st place) and Pablo (2nd place) did, and this is exactly why they managed to win the competition. If it weren&#8217;t for ML algorithms, applying cross-learning wouldn&#8217;t be possible.</p>
<p>You claim that the setting of the competition itself was not suitable for ML in the first place because you cannot expect non-linearity in yearly, quarterly or monthly series, which are also relatively short for effective training. This is not true. ML methods do not necessarily have to be fitted to each series individually. Those who did that failed to get a high score, but the ones that trained their models across all 100,000 series of the competition got the highest scores. It is not about having long, non-linear data. It is about finding relationships between the series. Also, it is not about using ML methods. It is about the way you use them. Is this a super car or a super bike? I don&#8217;t know, but it works nicely.</p>
<p>From my point of view, M4 promoted the utilization of ML in time series forecasting, showing how ML methods should be used to extract information from multiple series. Its large dataset allowed for such experimentation, and a lot of research has been done in this area, expanding from ML to deep learning (see the excellent work done by Boris and his colleagues here <a href="https://arxiv.org/abs/1905.10437" rel="nofollow ugc">https://arxiv.org/abs/1905.10437</a>). Personally, I love ML and I don&#8217;t have anything against its use. I am also pretty sure that the winner of M5 will utilize an ML-based method.</p>
<p>Finally, I would like you to know that I appreciate that your current workload makes it difficult for you to participate. On the other hand, given that CMAF is the biggest forecasting center in the UK, it would be reasonable for some of its members to participate (maybe some PhD students under your supervision and that of other senior researchers?). M5 would be an excellent opportunity for forecasters to do some forecasting and experiment with the tools they&#8217;ve been developing for such purposes.</p>
]]></content:encoded>
		
			</item>
	</channel>
</rss>
