1983
DOI: 10.1287/mnsc.29.4.447
Effective Scoring Rules for Probabilistic Forecasts

Abstract: This paper studies the use of a scoring rule for the elicitation of forecasts in the form of probability distributions and for the subsequent evaluation of such forecasts. Given a metric (distance function) on a space of probability distributions, a scoring rule is said to be effective if the forecaster's expected score is a strictly decreasing function of the distance between the elicited and "true" distributions. Two simple, well-known rules (the spherical and the quadratic) are shown to be effective with re…
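The two rules the abstract names can be sketched numerically. The following is a minimal illustration (function names and the example distributions are hypothetical, not from the paper): the expected quadratic score of a report r under a true distribution p works out to ||p||² − ||p − r||², so it is strictly decreasing in Euclidean distance — exactly the "effectiveness" property the abstract describes, here with respect to the L2 metric.

```python
import numpy as np

def quadratic_score(r, outcome):
    """Quadratic score when category `outcome` occurs: 2*r[k] - sum_i r_i^2."""
    r = np.asarray(r, dtype=float)
    return 2.0 * r[outcome] - np.sum(r ** 2)

def spherical_score(r, outcome):
    """Spherical score: r[k] / ||r||_2."""
    r = np.asarray(r, dtype=float)
    return r[outcome] / np.linalg.norm(r)

def expected_score(score, r, p):
    """Expected score of report r when the 'true' distribution is p."""
    return sum(p_k * score(r, k) for k, p_k in enumerate(p))

p = [0.6, 0.3, 0.1]         # hypothetical "true" distribution
r_near = [0.5, 0.35, 0.15]  # close to p in Euclidean distance
r_far = [0.3, 0.5, 0.2]     # farther from p

s_true = expected_score(quadratic_score, p, p)      # ||p||^2 = 0.46
s_near = expected_score(quadratic_score, r_near, p)
s_far = expected_score(quadratic_score, r_far, p)

# Expected quadratic score = ||p||^2 - ||p - r||^2, so it falls strictly
# as the L2 distance between report and truth grows:
print(s_true > s_near > s_far)
```

The same monotone behaviour can be checked for the spherical rule, whose expected score is p·r / ||r||, maximized by the honest report r = p.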

Cited by 52 publications (47 citation statements); references 8 publications.
“…A unifying perspective on these two families of rules, which might help to provide some guidance concerning appropriate values of the power parameter, has hitherto been lacking. Friedman (1983) attempted to identify scoring rules with metrics (rather than divergences) on the probability space, but most metrics turn out not to have associated scoring rules, and vice versa, as shown by Nau (1985). More recently, Selten (1998) has discussed the implications of different values of the parameter in the power scoring rule, arguing against the logarithmic rule (parameter value 1) because of its hypersensitivity to the estimation of small probabilities, and in favor of the quadratic rule (parameter value 2) because the latter uniquely satisfies a certain axiom of "neutrality," namely, that the expected loss for reporting r when the true distribution is p is the same as the expected loss for reporting p when the true distribution is r, i.e., S(p, p) − S(r, p) = S(r, r) − S(p, r).…”
Section: Weighted Scoring Rules
confidence: 99%
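Selten's neutrality identity quoted above is easy to verify numerically for the quadratic rule. A short sketch (helper name and example distributions are hypothetical): with the expected quadratic score S(r, p) = 2 p·r − ||r||², both sides of the identity reduce to ||p − r||².

```python
import numpy as np

def quad(r, p):
    """Expected quadratic score S(r, p) = 2 p.r - ||r||^2 of report r
    when the true distribution is p."""
    r, p = np.asarray(r, dtype=float), np.asarray(p, dtype=float)
    return 2.0 * (p @ r) - (r @ r)

p = np.array([0.7, 0.2, 0.1])
r = np.array([0.25, 0.5, 0.25])

lhs = quad(p, p) - quad(r, p)  # expected loss for reporting r when truth is p
rhs = quad(r, r) - quad(p, r)  # expected loss for reporting p when truth is r
print(np.isclose(lhs, rhs))   # neutrality: the two losses coincide
```

Both losses equal ||p − r||² here, which is why the quadratic rule satisfies neutrality exactly; the logarithmic rule does not.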
“…An effective score is one which monotonically improves as the distance (however it is measured) between the forecast and the observation decreases (Friedman, 1983; Nau, 1985). The most often quoted example of a score that is ineffective is the original version of the LEPS score, which for large numbers of categories could score a forecast with an extremely large error less severely than one with only a large error (Potts et al., 1996).…”
Section: Effectiveness
confidence: 99%
“…Each score in turn is then illustrated in the context of decadal forecasts of global mean temperature. Section 2 discusses several measures of forecast system performance, including the logarithmic score (Ignorance) (Good 1952; Roulston and Smith 2002), the Continuous Ranked Probability Score (CRPS) (Epstein 1969; Gneiting and Raftery 2007) and the Proper Linear score (PL) (Friedman 1983). General considerations for selecting a preferred score are discussed; CRPS is demonstrated capable of misleading behaviour.…”
Section: Introduction
confidence: 99%
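The two scores named in this last statement can be sketched for an ensemble forecast. This is an illustrative assumption-laden sketch, not the cited papers' code: the sample-based CRPS uses the standard identity E|X − y| − ½·E|X − X′| for ensemble members X and observation y, and the Ignorance score is the negative base-2 log of the probability assigned to the verifying outcome.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS: E|X - y| - 0.5 * E|X - X'| over ensemble members."""
    x = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

def ignorance(prob_of_outcome):
    """Logarithmic (Ignorance) score: -log2 of the probability the
    forecast assigned to the outcome that verified."""
    return -np.log2(prob_of_outcome)

members = [14.2, 15.1, 15.8, 16.4]  # hypothetical temperature ensemble
print(round(crps_ensemble(members, 15.0), 3))
```

Lower is better for both scores; a perfectly sharp, correct ensemble attains a CRPS of zero.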