## Disclaimer

*The idea of using complex variables in modelling and forecasting was originally proposed by my father, Sergey Svetunkov. Based on that, we developed several models, which were then used in some of our research. We worked together in this direction and published several articles in Russian. My father even published a monograph “Complex-Valued Modeling in Economics and Finance” based on that research.*

## Pre-PhD period

This story started in 2010, when I worked as an Associate Professor at the Higher School of Economics (HSE) in Saint Petersburg, Russia. By then, I had defended my candidate thesis (in Russia, this is considered equivalent to a PhD) on the topic of “Complex Variables Production Functions”, and I was teaching Microeconomics, Econometrics and Forecasting to undergraduate students. On my way to work (which would typically take an hour), I would read or write something. On one of those days, I came up with the basic formula for Complex Exponential Smoothing, assigning the error term to the imaginary part of the number and using Brown’s Simple Exponential Smoothing as a basis for the new forecasting method. For comparison, here is Simple Exponential Smoothing:

\begin{equation*}
\hat{y}_{t+1} = \alpha y_t + (1-\alpha) \hat{y}_{t} .
\end{equation*}

And here is what I came up with:

\begin{equation*}
\hat{y}_{t+1} + i \hat{\varsigma}_{t+1} = (\alpha_0 + i \alpha_1) (y_t + i \varsigma_t) + (1 - \alpha_0 + i - i \alpha_1) (\hat{y}_{t} + i \hat{\varsigma}_{t}) .
\end{equation*}

I’m not explaining this formula in this post (you can read about it here); it is shown just for demonstration. It was, and still is, a complicated forecasting method to understand, but the idea itself excited me. When I returned home, I continued the derivations and ran some basic experiments in Excel. I developed the method further in 2010 and presented it in April 2011 at a conference on Business Informatics in Kharkiv, Ukraine (one of the cities that the Russian army has been bombing in the war that Putin started with Ukraine on 24th February 2022). The idea was well received, and I got encouraging feedback. The first paper on CES was then published in Russian in the proceedings of the conference (it is available in Russian here and here, p.11 – back then, I used to call the method “Complex Exponentially Weighted Moving Average”, CEWMA).
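To make the two recursions above more tangible, here is a small numerical sketch in Python. It pairs each observation with its one-step forecast error in the imaginary part, as described above; the smoothing parameter values and the toy series are my own assumptions, chosen purely for illustration (they are not estimates from the CES paper).

```python
# Illustrative sketch of the two recursions above. Parameter values and the
# toy series are assumptions for demonstration, not values from the CES paper.

def ses(y, alpha=0.3):
    """Brown's Simple Exponential Smoothing: one-step-ahead forecasts."""
    level = y[0]  # initialise the level with the first observation
    forecasts = []
    for obs in y:
        forecasts.append(level)                        # forecast made before seeing obs
        level = alpha * obs + (1 - alpha) * level      # hat{y}_{t+1}
    return forecasts

def ces(y, alpha=1.2 + 1.0j):
    """Sketch of Complex Exponential Smoothing: the observation is paired with
    its one-step forecast error in the imaginary part, and the smoothing
    parameter is the complex number alpha_0 + i*alpha_1."""
    state = complex(y[0], 0.0)  # hat{y}_t + i*hat{varsigma}_t
    forecasts = []
    for obs in y:
        forecasts.append(state.real)       # the point forecast is the real part
        error = obs - state.real           # imaginary part of the observation
        state = alpha * complex(obs, error) + (1 - alpha + 1j) * state
    return forecasts

series = [100, 102, 101, 105, 107, 106, 110]
print(ses(series))
print(ces(series))
```

Note that the second coefficient in the CES update is exactly the `(1 - alpha_0 + i - i*alpha_1)` term from the formula, i.e. `1 - alpha + 1j` for the complex `alpha`.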

After that, I started thinking of preparing a paper in English and submitting it to an international peer-reviewed journal. HSE had an excellent service where people outside your department would read your paper and provide feedback. So, after preparing the first draft in English in 2012, I used that service and got a review with several comments. One of them was helpful: it said that my paper lacked proper motivation and that, in its current state, it could not be published in a peer-reviewed international journal. The other comment, however, was that my research area was uninteresting, that nobody in the academic world did anything like that, and that I should therefore find a different area of research.

I disagreed with the latter point and, after minor modifications, submitted the paper to the International Journal of Forecasting (IJF). As expected, Rob Hyndman (then the editor-in-chief of the journal) replied that the paper could not be published because it lacked motivation and because I had failed to show that the approach worked. At that time, I did not know how to motivate the paper or how to modify it to make it publishable, so that was a dead end for that version of the paper. But I did not want to give up, so in 2012 I applied for a PhD in Management Science at Lancaster University, writing a proposal about my model.

## PhD period

I was admitted as a PhD student in 2013 with a scholarship from the Lancaster University Management School, and I started my work under the supervision of Nikolaos Kourentzes and Robert Fildes on the topic “Complex Exponential Smoothing”. After preparing a proper experiment, I obtained good results and wrote the first version of the R function `ces()`. The results of this work were presented at my first International Symposium on Forecasting (ISF) in Rotterdam in 2014. Nobody noticed my presentation, and nobody seemed to care.

I then focused on rewriting the paper; Nikos helped me write up the motivation. After collecting feedback on the paper from our colleagues, we decided to submit it to a statistical journal. That was very arrogant of us: we did not understand how to write papers for such journals, and nobody in our group had ever published there. As a result, in 2015 we got a desk rejection from the Journal of the American Statistical Association, saying that they do not publish forecasting papers.

In parallel, I started working on an extension of CES for seasonal time series, which I then presented at ISF2015 in Riverside, US. There I managed to discuss my research with Keith Ord, who expressed interest in it and provided support and guidance for some parts of it. He even helped me with some derivations, which I included in the first paper.

To make things even more complicated, I continued work on my PhD and wrote a second paper, extending CES to seasonal time series. At the end of 2015, I resubmitted the first paper to the Operations Research journal, where it got desk-rejected, and then to EJOR (the European Journal of Operational Research). After a short discussion with Nikos, we decided to submit the second paper to IJF, hoping that the first would progress fast and that the two could be reviewed in parallel. That was a fatal mistake, which impacted my academic career and mental well-being for the next several years.

Unfortunately, the first paper was rejected from EJOR after the second round of revision, with the second reviewer saying that it could not be published because we did not use the Diebold-Mariano test (yes, that was the reason; note that we used the Nemenyi test instead).

As for the second paper, it got stuck in IJF. In the first round, the second reviewer said that the model had a fatal flaw and could not be used in practice (a conclusion he reached because he misunderstood how the model worked). In the second round, when we explained the model in more detail, the reviewer looked more carefully at CES and started criticising the first paper, which by then had been published as a working paper. We had placed ourselves in a challenging situation: we had to defend the first paper in the revision of the second one. This process led us to the third and then the fourth round without significant progress. Instead of discussing the seasonal extension of CES, we were discussing the meaning of complex variables in the model and whether its imaginary part made sense. It was apparent that the model worked (it performed better than ETS and ARIMA on the M competition data), but the reviewers had questions about the interpretation of the original model. In the fourth round, an Associate Editor of IJF wrote: “*I still maintain view and so does reviewer 2 that there is an interesting paper lurking under this paper but we are yet to see it and evaluate it on its own merits*”. It became clear that we were not moving forward and that the only way out of this dead end would be to merge the two papers and restart the submission process – by then, we were discussing a completely different paper than the one initially submitted to IJF. I was not ready for such a serious step, so I decided not to continue the revision process at IJF and put the paper on hold.
By then, my publishing experience had been very disappointing and demotivating, and I struggled to continue doing anything in that research direction. Whenever I opened the paper, it would spoil my mood for the rest of the day, as I would think that it was unpublishable and that nobody needed my work (as I had been told repeatedly by many different people since 2010).

Nonetheless, somewhere in the middle of the IJF revision, at the end of 2016, I had my viva and got my PhD in Management Science, defending the thesis “Complex Exponential Smoothing”.

## Post-PhD period

At the end of 2017, Fotios Petropoulos suggested that I participate in the M4 competition. His idea was to submit a combination of forecasts from several models: ETS, ARIMA, Theta and CES. After trying out several options, we used the median for the combination (I must confess that we weren’t the first to do that; it had been investigated, for example, by Jose & Winkler, 2008). This approach took 6th place in the competition. We were invited to submit a paper explaining our approach, which was then published in IJF (Petropoulos & Svetunkov, 2020). That was the first paper discussing CES to be published in a peer-reviewed journal.

In 2018, during the ISF in Boulder, Nikos and I invited Keith Ord to join our paper – he had supported me during my PhD and made a substantial contribution to the paper. We decided to clean the paper up, rewrite some parts, and submit it to a peer-reviewed journal as a paper by three co-authors. It took us some time to return to the original text, revive the R code and update the paper. In the middle of 2019, Nikos, Keith and I submitted the CES paper to the Journal of Time Series Analysis. We got a desk rejection with a comment that the Associate Editor “*…argues that your paper is a relatively straightforward extension of smoothing via a state space model*” and thus the paper “*is not appropriate for publication in this journal in terms of substantive content*”. We rewrote the motivation to align the paper with an OR-related journal and submitted it to Omega, only to get another desk rejection saying that the paper was too mathematical for them, that it “*is quite technical and would likely be best served by targeting a journal in the time series or forecasting field instead*”.

Finally, at the end of 2019, we submitted the paper to Naval Research Logistics (NRL). By then, I did not have any expectations for the paper and was sure that it would get either a desk rejection or a rejection from reviewers – I had seen this outcome so many times that it would have been naive to expect anything else. However, this time we got an Associate Editor who liked the idea and supported us from the first revision. In fact, they pointed out that CES had already been used in the M4 competition and had shown that it brought value. On 24th February 2021, we got our first round of revision, after which I decided to move some parts of paper 2 (seasonal CES) into the first one, merging the two. It made sense because the paper would now look complete. While one of the reviewers was sceptical about the paper, the Associate Editor provided colossal support and guided us on what to change so that the paper could be accepted in NRL. After two rounds and some additional rewrites, on 18th June 2022 the paper was accepted for publication in Naval Research Logistics, and it was published online on 2nd August 2022.

## Conclusions

Complex Exponential Smoothing is a complex idea, something that people are not used to. It stands out and does things differently, not the way researchers typically do them. This is what makes it interesting, and this is what made it extremely difficult to publish. Over the years, I questioned the correctness and usefulness of my idea many times. Some days I would be dancing around, singing “it works, it works” after a successful experiment; on others, I would throw it away, saying “never again” when the experiments failed. This is all part of academic life. However, the most challenging experience for me was the publication of the paper. Over the years, I have met a lot of resistance from the academic world.

I have not included here the comments from my former Higher School of Economics colleagues or from some journal reviewers. They were rarely pleasant or supportive. Some people did not understand the idea; others did not want to understand it. But there were always several people around me who helped and guided me. I would not have been able to publish the paper in the end had it not been for the support of Nikos Kourentzes, Keith Ord, Sergey Svetunkov (my father) and Anna Sroginis (my wife). They believed in the idea and supported me even when it looked like it wouldn’t work. I am immensely grateful for their support. It has been a long and winding road… and I’m glad that it’s finally over.

As for the **lessons to learn** from this, I have several for you:

- Do not try publishing dependent papers in parallel: if your second paper depends on the first one, do not submit it before the first one is at least accepted.
- If you want to publish in a journal in which your group does not typically publish, find a person who does and work with them. That became apparent to me when I worked on a different paper with a colleague from a statistics department. Statistical journals have a completely different style than OR ones, and we had no chance of publishing the CES paper there.
- **As a reviewer**, you might not understand the paper you are reviewing. This is okay. We cannot know and understand everything instantaneously. But that does not mean that the paper is not good. It only means that you need to invest more time in understanding the paper and then help improve it (yes, paper revision is a serious job, not a box-ticking process). I have received many comments of the style “I did not understand it, so reject”. This is not how revisions should be done.

Last but not least, be critical of your ideas, but if you believe in something, stick with it and be patient. It might take a lot of time for other people to start appreciating what you have been trying to show them.

Thank you for sharing your story, and I agree with two out of your three lessons.

I disagree with one point: if I, as a reviewer, do not understand a paper, it is usually not my responsibility to invest a lot of time. After all, I am supposedly an expert in the field. (If it turns out that I have been assigned a paper in which I am not an expert, then I should notify the editor and withdraw.) Thus, I am precisely the target audience of the papers I review. And therefore, the *author* needs to invest every reasonable effort to make their paper understandable – after all, if the reviewer, an expert, does not understand it, how will later readers understand it?

And yes, there is frequently a tension between “I do not understand X, but the paper is good, so please explain X better” and “I do not understand X, and the paper is generally weak, so I recommend rejection”. I have heard it said that the job of the reviewer is to weed out bad papers and make good papers better. One can err in either direction. But nobody is happy if I get a paper I believe is publishable-if-better-explained, we then go through three review rounds before the editor and I are convinced that the author is *not* capable of explaining their idea better, and the paper is rejected after those three rounds.

Thanks for your comment, Stephan! And I actually agree with you. Maybe my point wasn’t very clear (I feel a disturbance in the force :D). Let me clarify. I have faced many cases when a reviewer who was supposed to be an expert in the area (because they accepted to review the paper) did not know much about forecasting, did not even make an effort to understand the paper, and judged it hastily, providing comments like “you do not cite papers on forecasting with high frequency data, so I recommend rejection” (this is one of the comments I received for CES). This is just sloppy reviewing, and my point in the post is that reviewers need to understand that a revision is a serious job. Yes, a reviewer should look critically at the paper, and they should help make it more understandable if it is not well written. But a revision is a process done by two sides, not just one. So, if a paper gets to the fourth round without progress, it means that the reviewer and the authors are speaking different languages, and both sides should make an effort to understand each other (preferably much earlier than the fourth round). However, there are reviewers who do not want to make that effort and prefer just to get rid of the paper, so that they do not need to do the job but can still claim that they review papers for this or that journal.

And yes, there are good reviewers as well – I had several during this journey. They were responsible and helpful. And yes, authors should write papers that are easy to understand. The points above mainly apply to those reviewers who think that doing a sloppy job is fine.