1992
DOI: 10.1016/0169-2070(92)90008-w
Error measures for generalizing about forecasting methods: Empirical comparisons

Abstract: This study evaluated measures for making comparisons of errors across time series. We analyzed 90 annual and 101 quarterly economic time series. We judged error measures on reliability, construct validity, sensitivity to small changes, protection against outliers, and their relationship to decision making. The results lead us to recommend the Geometric Mean of the Relative Absolute Error (GMRAE) when the task involves calibrating a model for a set of time series. The GMRAE compares the absolute error of a give…
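The abstract recommends the GMRAE, which rates a method's absolute error relative to a benchmark's absolute error on the same observation and then averages those ratios geometrically. A minimal sketch of that computation follows; the winsorizing bounds (0.01, 10) are a common convention for trimming extreme ratios and are an assumption here, not a quote from the paper:

```python
import numpy as np

def gmrae(actual, forecasts, benchmark_forecasts, trim=(0.01, 10.0)):
    """Geometric Mean of the Relative Absolute Error (sketch).

    Each Relative Absolute Error (RAE) divides a method's absolute
    error by the benchmark's absolute error (e.g. a random walk) on
    the same point; RAEs are winsorized to guard against outliers,
    then averaged geometrically. GMRAE < 1 beats the benchmark.
    """
    actual = np.asarray(actual, dtype=float)
    rae = np.abs(actual - forecasts) / np.abs(actual - benchmark_forecasts)
    rae = np.clip(rae, *trim)  # winsorize extreme ratios
    return float(np.exp(np.mean(np.log(rae))))
```

For example, with absolute errors (1, 1, 1) against benchmark errors (1, 2, 2), the RAEs are (1, 0.5, 0.5) and the GMRAE is the cube root of 0.25, about 0.63.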

Cited by 1,085 publications (471 citation statements)
References 6 publications
“…In the CNB case, a lack of any comprehensive (public) evaluation of its own GDP-forecasting performance, despite heated media debate, feels inappropriate. Partial studies conducted by CNB analysts reveal more frequent combinations of error measures (ME, MAE, RMSE, MSE), but, on the other hand, no application of relative errors and even obscurities such as MAPE, whose application in the GDP environment (close to zero values) is consistently criticized (Armstrong and Collopy, 1992; Hyndman and Koehler, 2006). Although some of the papers indicate that the CNB forecasted more conservatively than the MF (Antoničová et al, 2009; Antal et al, 2008), a thorough empirical review or comparison of forecasting performance is missing.…”
Section: Literature Overview
confidence: 99%
“…The selection of error measures to calibrate our model are based on other related studies (Hippert et al, 2001;Armstrong and Collopy, 1992).…”
Section: Evaluation Metrics
confidence: 99%
“…In SubTask A, our model was ranked last because of a submission format error. We perform error measures in order to obtain a better understanding of the strengths of these particularly new tasks and to improve the performance about forecasting methods of our model (Armstrong and Collopy, 1992). For Subtask B, our team was ranked 24th from 29 teams.…”
Section: Introduction
confidence: 99%
“…• MAE (Mean Absolute Error) likewise measures the deviations between estimated and actual values, but penalizes large errors less than RMSE does [121].…”
Section: Hybrid Filters
“…A more detailed summary of the above metrics can be found in the article by Armstrong and Collopy [121]. Recommender systems, however, can be evaluated not only by the accuracy of their recommendations but also by numerous other properties, such as robustness, adaptivity, reliability, scalability, utility, diversity, coverage, etc.…”
Section: That Is, P = ip/(ip+hp), Also Known as the Wallace Index
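The MAE bullet above states that MAE penalizes large errors less than RMSE. A minimal numeric sketch, using an assumed toy error vector, makes the contrast concrete: a single outlier barely moves MAE but inflates RMSE, because squaring weights large errors more heavily.

```python
import numpy as np

errors = np.array([1.0, 1.0, 1.0, 9.0])  # three small errors, one outlier

mae = np.mean(np.abs(errors))        # (1 + 1 + 1 + 9) / 4 = 3.0
rmse = np.sqrt(np.mean(errors ** 2)) # sqrt((1 + 1 + 1 + 81) / 4) ≈ 4.58
```

With identical small errors the two measures would coincide; the gap between 3.0 and 4.58 here is entirely due to the squared weighting of the single large error.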