2017
DOI: 10.5194/esd-2017-28
Preprint

Selecting a climate model subset to optimise key ensemble properties

Abstract: End-users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally-weighted model m…



Cited by 16 publications (30 citation statements)
References 22 publications
“…That is, they should try to identify metrics that will be most informative about the model's adequacy for that purpose, giving greater attention to performance on those that are thought to be most relevant. The selection of relevant performance metrics will often rely on “process understanding,” especially understanding of which processes in the system strongly shape the behavior or phenomenon of interest (Baumberger et al.; Eyring et al.; Herger et al.). It is important to keep in mind, however, that for many purposes, the evaluation of a model's adequacy-for-purpose should consider more than just performance: the model's resolution, which simplifications and idealizations it includes, how and to which data it has been tuned, and so on, can all be relevant considerations.…”
Section: Evaluating Models: An Adequacy-for-Purpose View (W. Parker)
Citation type: mentioning (confidence: 99%)
“…Choosing one model per institute removes multiple initial condition members of the same model as well as similar or similarly calibrated models. By doing this, the average model-to-model distances are expected to become more similar to the average model-to-observation distances (Herger et al.). Indeed, Figure S1a shows that for surface air temperature, the average Kolmogorov-Smirnov (KS) test statistic between these 21 simulations and the land-only gridded observational product CRU-TS, v4.00 (Harris et al.) is generally smaller than the mean model-model KS value.…”
Section: Data
Citation type: mentioning (confidence: 99%)
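The comparison described in that statement can be illustrated with a short sketch. The snippet below is not the citing study's code; assuming synthetic stand-in arrays for the 21 simulations and the CRU-TS observational field, it only shows how mean model-to-model and model-to-observation distances might be compared using the two-sample KS statistic from scipy.

```python
# Illustrative sketch (not the authors' workflow): compare the mean
# model-model KS statistic with the mean model-observation KS statistic.
# The arrays below are hypothetical stand-ins for flattened, land-only
# temperature fields from 21 simulations and one observational product.
from itertools import combinations

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

n_models, n_gridpoints = 21, 5000
models = [rng.normal(loc=rng.normal(0.0, 0.5), scale=1.0, size=n_gridpoints)
          for _ in range(n_models)]
obs = rng.normal(loc=0.0, scale=1.0, size=n_gridpoints)

# Mean KS statistic over all simulation pairs (model-to-model distance).
ks_model_model = np.mean([ks_2samp(a, b).statistic
                          for a, b in combinations(models, 2)])

# Mean KS statistic between each simulation and the observations.
ks_model_obs = np.mean([ks_2samp(m, obs).statistic for m in models])

print(f"mean model-model KS:       {ks_model_model:.3f}")
print(f"mean model-observation KS: {ks_model_obs:.3f}")
```

With real fields, a smaller mean model-observation value than the mean model-model value would correspond to the behaviour reported for the one-model-per-institute selection.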
“…Furthermore, model-simulated extremes may be systematically biased across various models compared to observations/reanalyses (Angélil et al.; Bellprat & Doblas-Reyes; Christensen et al.; Donat et al.; Wang et al.), and therefore, taking the median or mean of the metric of interest across ensemble members can be unreliable (King & Karoly; King et al.; Lewis & King; Perkins-Kirkpatrick & Gibson). Such biases are not necessarily reduced after the poorest performing models have been removed from an ensemble; indeed, this process can reinforce model biases if metrics are not carefully chosen, since the best performing models might have common biases due to shared model development history (so-called model interdependence; Herger et al.).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)