2011
DOI: 10.1002/qj.895

Observational probability method to assess ensemble precipitation forecasts

Abstract: It is common practice when assessing the skill of either deterministic or ensemble forecasts to treat the observations as having no uncertainty. Observation uncertainty may arise from different causes, and the present paper discusses the uncertainty that derives from the mismatch between model-generated grid-point precipitation and locally measured precipitation values. There have been many attempts to add uncertainty to the verification process; in the present paper the uncertainty is derived from the ob…

Cited by 19 publications (21 citation statements)
References 27 publications
“…For other authors, the verifying distribution is a distribution of observations. This might be formed from a collection of actual observations, as in Gorgas and Dorninger (2012) and Santos and Ghelli (2012). In contrast, Candille and Talagrand (2008), Pappenberger et al. (2009) and Pinson and Hagedorn (2012) form the verifying distribution by randomly perturbing an observation according to a probability model of observation error.…”
Section: Other Approaches To Observation Error
confidence: 99%
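
To make the perturbed-observation idea above concrete, here is a minimal Python sketch, not taken from any of the cited papers: a single precipitation observation is expanded into a verifying distribution by sampling an assumed multiplicative log-normal error model, and the ensemble is compared with it through a threshold-exceedance probability. The helper perturbed_observations, the error model, its sigma parameter and all numbers are illustrative assumptions, not values from Santos and Ghelli (2012).

import numpy as np

rng = np.random.default_rng(42)

def perturbed_observations(obs_mm, n_samples=1000, sigma=0.2):
    """Sample a verifying distribution around one precipitation observation.

    obs_mm : observed precipitation (mm); an observed zero stays zero.
    sigma  : spread of the assumed multiplicative log-normal error model.
    """
    if obs_mm == 0.0:
        return np.zeros(n_samples)
    return obs_mm * rng.lognormal(mean=0.0, sigma=sigma, size=n_samples)

# Compare an ensemble forecast with the verifying distribution through the
# empirical probability of exceeding a threshold, as one would for a
# Brier-type score against a probabilistic observation.
ensemble = np.array([0.0, 1.2, 3.5, 0.8, 2.1, 4.0, 0.0, 1.9])  # mm, toy values
verifying = perturbed_observations(obs_mm=2.4)

threshold = 1.0  # mm
p_forecast = np.mean(ensemble > threshold)   # forecast probability
p_observed = np.mean(verifying > threshold)  # observational probability
print(f"P_fcst={p_forecast:.2f}  P_obs={p_observed:.2f}")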
“…Other authors measure the difference between f and g with a divergence, that is, a function d(f, g) for which d(g, g) = 0 and d(f, g) ⩾ 0 for all f and g. For example, Candille and Talagrand (2008) use the quadratic divergence, $(f-g)^2$, in the case of forecasting a binary event (also Santos and Ghelli, 2012), Pappenberger et al. (2009) use the Kullback–Leibler divergence (or relative entropy), $\int g \log(g/f)$, and Friederichs and Thorarinsdottir (2012) propose the integrated quadratic distance, $\int (f-g)^2$. Thorarinsdottir et al. (2013) list several other divergences, including the sub-class of ‘score divergences’ that are formed from proper scoring rules, s, in the following way: $d(f,g) = \mathrm{E}_y\{s(f,y)\} - \mathrm{E}_y\{s(g,y)\}$, where $y \sim g$.…”
Section: Other Approaches To Observation Error
confidence: 99%
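
As a hedged numerical illustration of the divergences listed above, the following Python sketch evaluates them for two discrete probability vectors on common bins. The bins and probability values are invented, and the log score is used only as one convenient proper scoring rule; none of this is taken from the cited papers.

import numpy as np

f = np.array([0.1, 0.3, 0.4, 0.2])    # forecast probabilities per bin (assumed)
g = np.array([0.2, 0.25, 0.45, 0.1])  # verifying probabilities per bin (assumed)

# Quadratic divergence for a binary event (Candille and Talagrand, 2008):
# (f - g)^2 applied to the event probability itself.
p_f, p_g = f[2:].sum(), g[2:].sum()   # e.g. P(precip > some threshold)
quad = (p_f - p_g) ** 2

# Kullback-Leibler divergence, sum of g log(g/f) (Pappenberger et al., 2009).
kl = np.sum(g * np.log(g / f))

# Integrated quadratic distance, sum of (f - g)^2
# (Friederichs and Thorarinsdottir, 2012), here in discrete form.
iqd = np.sum((f - g) ** 2)

# Score divergence d(f,g) = E_y[s(f,y)] - E_y[s(g,y)] with y ~ g, using the
# negatively oriented log score s(p, y) = -log p(y); with this choice the
# expectation reduces exactly to the KL divergence.
score = lambda p, y: -np.log(p[y])
d_score = sum(g[y] * (score(f, y) - score(g, y)) for y in range(len(g)))

print(f"quadratic={quad:.4f}  KL={kl:.4f}  IQD={iqd:.4f}  score-div={d_score:.4f}")
assert np.isclose(kl, d_score)  # log-score divergence equals KL here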
“…Methods are needed to account for, and ideally remove, the impact of observation errors on the verification results. Some small progress has been made in the areas of categorical (Bowler, 2006) and ensemble verification (Saetra et al., 2004; Bowler, 2008; Candille and Talagrand, 2008; Santos and Ghelli, 2012), but this is proving a difficult problem to solve more generally. A promising approach may be to treat observations probabilistically, assuming the observation uncertainty is known (e.g., Friederichs et al., 2009).…”
Section: Observations
confidence: 99%
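
The probabilistic treatment of observations mentioned above can be sketched with a perturbed-ensemble rank histogram of the kind discussed by Saetra et al. (2004): noise drawn from the assumed observation-error model is added to each ensemble member before ranking the noisy observation, so that a statistically perfect ensemble still yields a flat histogram despite noisy verifying data. The Gaussian error model and all parameters below are illustrative assumptions, not values from the cited studies.

import numpy as np

rng = np.random.default_rng(0)
n_cases, n_members, obs_sigma = 5000, 10, 0.5

ranks = np.empty(n_cases, dtype=int)
for i in range(n_cases):
    mu = rng.normal()                              # case-dependent forecast mean
    truth = mu + rng.normal()                      # truth drawn from the forecast pdf
    ensemble = mu + rng.normal(size=n_members)     # a statistically perfect ensemble
    obs = truth + obs_sigma * rng.normal()         # observation with Gaussian error
    # Key step: perturb each member with the same error model as the observation,
    # restoring exchangeability between the observation and the members.
    perturbed = ensemble + obs_sigma * rng.normal(size=n_members)
    ranks[i] = np.sum(perturbed < obs)             # rank of obs among members

hist = np.bincount(ranks, minlength=n_members + 1) / n_cases
print(np.round(hist, 3))  # approximately flat at 1/(n_members + 1), about 0.091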
“…Previous studies point out the importance of taking into account observation errors in ensemble verification (e.g. Saetra et al., 2004; Bowler, 2008; Candille and Talagrand, 2008; Santos and Ghelli, 2012). As the forecast performance improves, the impact of observation errors on the verification becomes non-negligible, especially at short lead times.…”
Section: Observations For Verification
confidence: 99%