2004
DOI: 10.1175/1520-0434(2004)019<0769:DOVCIF>2.0.CO;2
Discussion of Verification Concepts in Forecast Verification: A Practitioner's Guide in Atmospheric Science

Cited by 13 publications (9 citation statements)
References 29 publications
“…Because of uncertainties in model formulation and in initial conditions, climate predictions in model space deviate away from the true evolution in observation space. Model predictions x should then be considered as proxy information that can be used to infer the probability of future observables y (Glahn 2004; Stephenson et al. 2005; Jolliffe and Stephenson 2005). To make inferences about future observables, one needs a probability model that can give the probability p(y|x) of future observable quantities y when provided with model prediction data x, such as the model of Eq.…”
Section: Bayesian Forecast Assimilation
confidence: 99%
“…This case is identical to the way that a trapezoidal ROC area is calculated given forecasts expressed as probabilities for categories [section 3a (3)]. Since the probabilities themselves are ignored when performing a ROC analysis (Mason and Graham 2002; Glahn 2004; Wilks 2006), probabilistic forecasts are reduced to forecasts of ordinal polychotomous categories. In the current context, therefore, the 2AFC score is equivalent to the standard way in which the ROC technique is performed in forecast verification, except that the probabilities associated with each point on the curve are undefined (Mason and Graham 2002).…”
Section: Polychotomous Forecasts
confidence: 99%
“…(7) is that the score considers only the ordering of the probabilities, ignoring the actual probability values themselves, and thus is insensitive to any monotonic transformation of the probabilities. This insensitivity is the same problem as the insensitivity of the ROC to calibration (Glahn 2004). Although 2AFC scores can be defined that explicitly consider the actual probabilities (appendix B), these scores are not proper (appendix C).…”
Section: Discrete Probabilistic Forecasts
confidence: 99%
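The two statements above hinge on the same point: the trapezoidal ROC area (equivalently, the 2AFC score) depends only on the *ordering* of the forecast probabilities, so any monotonic transformation of the probabilities leaves it unchanged. A minimal sketch illustrating this, with invented example data (function and variable names are mine, not from the cited papers):

```python
def roc_area(probs, obs):
    """Trapezoidal ROC area / 2AFC score: the probability that a randomly
    chosen event receives a higher forecast probability than a randomly
    chosen non-event, with ties counted as half."""
    events = [p for p, o in zip(probs, obs) if o == 1]
    non_events = [p for p, o in zip(probs, obs) if o == 0]
    score = sum(1.0 if e > n else 0.5 if e == n else 0.0
                for e in events for n in non_events)
    return score / (len(events) * len(non_events))

probs = [0.1, 0.4, 0.35, 0.8, 0.7]   # forecast probabilities
obs   = [0,   0,   1,    1,   1]     # observed outcomes (1 = event)

a1 = roc_area(probs, obs)
a2 = roc_area([p ** 2 for p in probs], obs)  # monotonic transform: ordering unchanged
assert a1 == a2  # the score is blind to calibration, as the quoted passage notes
```

Because squaring preserves the ranking of the probabilities, both calls return the same area (5/6 here), which is exactly the calibration-insensitivity the citing authors describe.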
“…Model predictions are best considered as proxy information that can be used to infer the probability of future observables (Wilks 2000; Glahn 2004; Stephenson et al. 2005). Because of uncertainties in model formulation and in initial conditions, climate predictions of x drift away from the observed values y.…”
Section: Bayesian Forecast Assimilation
confidence: 99%
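The Bayesian forecast assimilation snippets treat model predictions x as proxy information from which a probability model p(y|x) for future observables y is inferred. A minimal sketch of one such probability model, assuming (my assumption, not the cited papers' method) a simple linear-Gaussian form p(y|x) = N(a + b·x, σ²) fitted to past prediction/observation pairs; all names and data are illustrative:

```python
import math

def fit_gaussian_model(x, y):
    """Least-squares fit y ~ a + b*x with residual spread sigma, giving the
    conditional probability model p(y|x) = Normal(a + b*x, sigma^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    sigma = math.sqrt(sum(r * r for r in resid) / n)
    return a, b, sigma

def predictive_density(y_new, x_new, a, b, sigma):
    """Evaluate p(y_new | x_new) under the fitted Gaussian model."""
    mu = a + b * x_new
    return math.exp(-0.5 * ((y_new - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Toy training data: model predictions x vs. observed values y.
a, b, sigma = fit_gaussian_model([0.0, 1.0, 2.0, 3.0], [0.1, 1.0, 2.1, 2.9])
```

Given a new model prediction x, `predictive_density` then yields the probability density of a future observable y, which is the role p(y|x) plays in the quoted passages.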