2013
DOI: 10.1002/met.1409

Three recommendations for evaluating climate predictions

Abstract: Evaluation is important for improving climate prediction systems and establishing the credibility of their predictions of the future. This paper shows how the choices that must be made about how to evaluate predictions affect the outcome and ultimately our view of the prediction system's quality. The aim of evaluation is to measure selected attributes of the predictions, but some attributes are susceptible to having their apparent performance artificially inflated by the presence of climate trends, th…

Cited by 38 publications (40 citation statements)
References 52 publications
Citation types: 0 supporting, 40 mentioning, 0 contrasting
Citing publications: 2013–2020

Citation statements (ordered by relevance):
“…(3) is not applicable for uncertain observations. This is the same for the suggestion of Fricker et al (2013), to not only calculate and verify predictions of perennial averages but also to conduct temporal pooling of the hindcasts for a particular period. This would be particularly useful from a climate impact perspective, as it is often a single winter exhibiting extreme frequency of intense cyclones, associated with large economic losses.…”
Section: Summary and Discussion (mentioning)
confidence: 84%
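The pooling contrast drawn in this excerpt can be sketched concretely. The following is a minimal illustration, assuming invented arrays and a simple mean-absolute-error measure; the names, shapes, and score are illustrative choices, not from Fricker et al (2013):

```python
import numpy as np

def mae(predictions, observations):
    """Mean absolute error between predictions and verifying observations."""
    return float(np.mean(np.abs(np.asarray(predictions) - np.asarray(observations))))

rng = np.random.default_rng(0)
hindcasts = rng.normal(size=(20, 10))  # 20 start years x 10 ensemble members (invented)
obs = rng.normal(size=20)              # one verifying observation per year (invented)

# Strategy 1: verify only the multi-year ("perennial") average, one number each.
score_average = mae(hindcasts.mean(), obs.mean())

# Strategy 2: pool the yearly hindcast/observation pairs over the period,
# so that single extreme years (e.g. one stormy winter) still influence the result.
score_pooled = mae(hindcasts.mean(axis=1), obs)

print(f"perennial-average MAE: {score_average:.3f}, pooled MAE: {score_pooled:.3f}")
```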
“…Thus, in line with the argument of Fricker et al (2013), it would be of great value to verify the predictions of these shorter time-scale predictands, but the development of an alternative estimator of an unbiased RPS (applicable for any kind of observation, not necessarily constituting the Heaviside step function) is beyond the scope of this study. However, such a development would be crucial for a fair assessment of any kind of probabilistic forecast skill based on the Brier score (BS), the RPS, or the continuous ranked probability score (CRPS).…”
Section: Summary and Discussion (mentioning)
confidence: 92%
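For readers unfamiliar with the scores named in this excerpt, here is a minimal sketch of the standard discrete RPS, in which the verifying observation enters as the Heaviside step function mentioned above. This is the conventional estimator, not the alternative unbiased estimator the authors say is beyond their scope; all values are invented:

```python
import numpy as np

def rps(forecast_probs, obs_category):
    """Discrete ranked probability score: sum of squared differences between
    the cumulative forecast distribution and the cumulative (Heaviside) observation."""
    F = np.cumsum(forecast_probs)
    O = (np.arange(len(forecast_probs)) >= obs_category).astype(float)  # Heaviside step
    return float(np.sum((F - O) ** 2))

# Three ordered categories (e.g. below / near / above normal); category index 2 occurs.
print(rps([0.2, 0.3, 0.5], obs_category=2))  # sharper forecast -> 0.29
print(rps([1/3, 1/3, 1/3], obs_category=2))  # climatology      -> ~0.56
```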
“…While the importance of using proper scores is well recognised (Bröcker and Smith 2007; Fricker et al 2013), researchers often face requests to present results under a variety of scores. Indeed in the context of meteorological forecast evaluation there are several recommendations in the literature (Nurmi 2003; Randall et al 2007; World Meteorological Organization 2008; Fricker et al 2013; Goddard et al 2013), although often with little discussion of which attributes different scores aim to quantify, or their strengths and weaknesses in a particular forecast setting. By convention, a lower score is taken to reflect a better forecast.…”
Section: Measuring Forecast Performance (mentioning)
confidence: 99%
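As a concrete instance of a proper score under the lower-is-better convention described above, the following sketch computes the Brier score for a binary event; the data are invented:

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and binary outcomes;
    0 is a perfect score and lower is better, per the convention above."""
    p = np.asarray(probs, dtype=float)
    y = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - y) ** 2))

forecast_probs = [0.9, 0.8, 0.1, 0.3]  # forecast probability that the event occurs
observed       = [1,   1,   0,   1]    # 1 if the event occurred, else 0
print(brier_score(forecast_probs, observed))  # 0.1375
```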
“…It is useful to speak of the "True" distribution from which the outcome is drawn (hereafter, Q) without assuming that such a distribution exists in all cases of interest. Given a proper score, a forecast system providing Q will be preferred whenever it is included amongst those under consideration (Bröcker and Smith 2007; Fricker et al 2013). When this is not the case, then even proper scores may rank two forecast systems differently, making it difficult to provide definitive statements about forecast quality.…”
Section: Measuring Forecast Performance (mentioning)
confidence: 99%
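The propriety property described in this excerpt can be illustrated numerically: under a proper score such as the Brier score, the expected score across candidate forecast probabilities is minimised by issuing the "True" probability itself. A minimal sketch, with an assumed value q = 0.3:

```python
import numpy as np

q = 0.3                          # assumed "True" event probability Q
p = np.linspace(0.0, 1.0, 101)   # candidate forecast probabilities

# Expected Brier score when the outcome is drawn from Q:
#   E[BS] = q * (p - 1)^2 + (1 - q) * p^2, which is minimised at p = q.
expected_bs = q * (p - 1.0) ** 2 + (1.0 - q) * p ** 2

print(p[np.argmin(expected_bs)])  # -> 0.3, i.e. issuing Q itself is optimal
```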
“…We may also wish to describe the performance of a set of predictions in terms of an aggregated measure such as a correlation coefficient or reliability statistic (e.g. Ferro and Fricker 2012; Fricker et al 2013). Some decision makers may prefer predictions of performance to be expressed qualitatively, as in 'the error of this climate prediction will probably be small', while others may prefer quantitative predictions, as in 'the error of this climate prediction will be less than 1…”
Section: Judging Credibility (mentioning)
confidence: 99%
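A minimal sketch of the aggregated measures this excerpt mentions: a correlation coefficient over a set of predictions, plus a quantitative statement of the form "the error will be less than 1". All data are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(size=50)                     # invented observations
preds = obs + rng.normal(scale=0.5, size=50)  # invented, imperfect predictions

corr = np.corrcoef(preds, obs)[0, 1]          # aggregated measure: correlation
p_error_below_1 = np.mean(np.abs(preds - obs) < 1.0)  # share of cases with |error| < 1

print(f"correlation: {corr:.2f}, P(|error| < 1): {p_error_below_1:.2f}")
```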