Q: According to the M4 Guide, there are some “benchmark methods” which “are not eligible for the prizes”. Is it possible for a participant to use these methods?
A: The “benchmark methods” the Guide refers to are those whose code is provided in the M4 Competition GitHub repository. These methods are not eligible in the exact form (code) deposited there. If a participant, however, improves the accuracy of a forecasting method included in the benchmarks, e.g., by modifying the code or by using combinations (ensembles) of methods other than the ones listed in GitHub, then such a method/combination is eligible for the Prizes, as it is no longer part of the “benchmarks”.
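As an illustration of the kind of combination the answer allows, here is a minimal sketch of a weighted ensemble of two simple forecasting methods. The method names, weights, and function signatures are hypothetical examples, not the actual M4 benchmark code from GitHub:

```python
# Hypothetical sketch of an ensemble of two simple forecasting methods.
# Nothing here reproduces the M4 benchmark code; it only illustrates the
# idea of combining methods into a new, prize-eligible method.

def naive_forecast(series, h):
    """Naive method: repeat the last observation h steps ahead."""
    return [series[-1]] * h

def drift_forecast(series, h):
    """Naive with drift: extrapolate the average first-to-last slope."""
    slope = (series[-1] - series[0]) / (len(series) - 1)
    return [series[-1] + (i + 1) * slope for i in range(h)]

def ensemble_forecast(series, h, weights=(0.5, 0.5)):
    """Weighted average of the two component forecasts."""
    f1 = naive_forecast(series, h)
    f2 = drift_forecast(series, h)
    w1, w2 = weights
    return [w1 * a + w2 * b for a, b in zip(f1, f2)]
```

For example, `ensemble_forecast([1, 2, 3, 4], h=2)` averages the flat naive forecast `[4, 4]` with the drift forecast `[5, 6]`, giving `[4.5, 5.0]`.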
Q: Does the method being used have to be applied to ALL of the data, or could one use different methods, say, based on the series frequency? For instance, suppose I want to use temporal hierarchies. They can be used on sub-yearly data, but not on yearly data, since there is no hierarchy to estimate. This would require me to use a different method on the annual data.
A: The aim of the M4 Competition is to learn as much as possible about ways to improve forecasting accuracy, so any innovative idea for doing so is encouraged. This means the answer to the above question is yes: a different method can be used for different frequencies and categories of data, or even for time series with varying characteristics (e.g., Simple Exponential Smoothing for series with high randomness, or Holt’s method for series with a consistent trend). However, the selection process must be clearly explained, and enough information must be provided so that it can be properly replicated. Ideally, the code for performing model selection should also be made available in the M4 Competition GitHub.