2021
DOI: 10.1038/s41467-021-23771-z
Standard assessments of climate forecast skill can be misleading

Abstract: Assessments of climate forecast skill depend on choices made by the assessor. In this perspective, we use forecasts of the El Niño–Southern Oscillation to outline the impact of bias correction on skill. Many assessments of skill from hindcasts (past forecasts) are probably overestimates of attainable forecast skill because the hindcasts are informed by observations over the period assessed that would not be available to real forecasts. Differences between hindcast and forecast skill result from changes in mode…
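The abstract's central claim — that bias correction informed by the verification period inflates apparent skill — can be illustrated with a toy sketch. This is not the paper's actual method; the bias magnitude, noise level, and sample sizes below are all invented for illustration. Synthetic forecasts carry a constant bias, which is removed either using the full assessed period (as a hindcast effectively can) or using only an earlier training window (as a real-time forecast must).

```python
import math
import random

def rmse(fcst, obs):
    """Root-mean-square error over paired forecasts and observations."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(fcst, obs)) / len(obs))

random.seed(0)
n = 200
obs = [random.gauss(0.0, 1.0) for _ in range(n)]
# Synthetic forecasts: track the observations, but with a systematic
# bias of +1.5 plus forecast noise (all values illustrative).
raw = [o + 1.5 + random.gauss(0.0, 0.5) for o in obs]

# Hindcast-style correction: bias estimated over the SAME period that is
# later verified -- information a real-time forecast would not have had.
bias_full = sum(f - o for f, o in zip(raw, obs)) / n

# Forecast-style correction: bias estimated only from an earlier training
# window, then applied unchanged to the later verification period.
train = n // 4
bias_train = sum(raw[i] - obs[i] for i in range(train)) / train

verify = range(train, n)
obs_v = [obs[i] for i in verify]
rmse_raw = rmse([raw[i] for i in verify], obs_v)
rmse_hindcast = rmse([raw[i] - bias_full for i in verify], obs_v)
rmse_forecast = rmse([raw[i] - bias_train for i in verify], obs_v)
print(f"raw RMSE:           {rmse_raw:.3f}")
print(f"hindcast-corrected: {rmse_hindcast:.3f}")
print(f"forecast-corrected: {rmse_forecast:.3f}")
```

With a single constant bias the gap between the two corrections is modest; the paper's argument is that operational hindcast systems absorb period-specific information in subtler ways (tuning, initialization, trend handling), so the in-sample advantage compounds.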

Cited by 44 publications (39 citation statements)
References 58 publications
“…None of the runs show rain at my location in 10 days time at 4pm, so the "chance of rain" is given as 0%. However, comparisons between models and out-of-sample outcomes show that model-based weather predictions at these lead times are not fully reliable (see, e.g., Risbey et al [2021]).…”
Section: Model Frequencies Misinterpreted As Real-world Probabilities
confidence: 99%
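The quoted point — an ensemble frequency of 0% being read as a real-world probability of 0% — follows directly from small-sample binomial behaviour. A minimal sketch with invented numbers (an ensemble of 10 members and a hypothetical true rain probability of 10%, neither taken from the cited work):

```python
# With m exchangeable ensemble members and true event probability p,
# the chance that NO member shows the event is (1 - p) ** m, so a
# reported "0% chance of rain" is itself a fairly likely outcome.
m = 10    # ensemble size (illustrative)
p = 0.10  # hypothetical true probability of rain

prob_all_dry = (1 - p) ** m
print(f"P(ensemble frequency = 0%) = {prob_all_dry:.2f}")
```

Roughly a third of the time such an ensemble would report 0% even though the event occurs 10% of the time — one reason raw member frequencies need calibration before being quoted as probabilities.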
“…Where, as in seasonal climate forecasts, relevant out-of-sample testing is possible (data quality is high) but only a small amount of data is available (data quantity is low), similar trials can be undertaken using formal measures of reliability, but statistical confidence in the assessment will be lower. Additional forecast-outcome data may be generated using past data/conditions ("hindcasts") and these can provide a good quantitative measure of reliability, though with the caveat that they are not truly out-of-sample even where rigorous cross-validation approaches are employed [Risbey et al, 2021].…”
Section: Repeated Quantitative Evaluation Of Past Probabilistic Forecasts
confidence: 99%
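The caveat in the quoted passage — that cross-validated hindcasts are "not truly out-of-sample" — can be made concrete with a toy leave-one-out bias correction (invented numbers, not the cited study's setup). Correcting year t with mean errors from all other years still uses observations from years after t, which no real-time forecast could have seen; a strictly causal correction may only use years before t.

```python
import math
import random

def rmse(fcst, obs):
    """Root-mean-square error over paired forecasts and observations."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(fcst, obs)) / len(obs))

random.seed(1)
n = 30  # hindcast years (illustrative)
obs = [random.gauss(0.0, 1.0) for _ in range(n)]
raw = [o + 1.0 + random.gauss(0.0, 0.5) for o in obs]  # constant bias of +1.0

def loo_bias(t):
    """Leave-one-out bias: uses every other year, including years AFTER t."""
    errs = [raw[i] - obs[i] for i in range(n) if i != t]
    return sum(errs) / len(errs)

def causal_bias(t):
    """Strictly real-time bias: only years strictly before t are available."""
    errs = [raw[i] - obs[i] for i in range(t)]
    return sum(errs) / len(errs) if errs else 0.0  # year 0: nothing to learn from

loo = [raw[t] - loo_bias(t) for t in range(n)]
causal = [raw[t] - causal_bias(t) for t in range(n)]
print(f"raw RMSE:    {rmse(raw, obs):.3f}")
print(f"LOO RMSE:    {rmse(loo, obs):.3f}")
print(f"causal RMSE: {rmse(causal, obs):.3f}")
```

The leave-one-out correction looks skilful from the first year because it quietly borrows future observations; the causal version must wait for errors to accumulate, which is the situation an operational forecast actually faces.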