A Generic Method for Density Forecasts Recalibration
2018
DOI: 10.1007/978-3-319-99052-1_8

Abstract: We address the calibration constraint of probability forecasting. We propose a generic recalibration method that allows us to enforce this constraint. Its impact on forecast quality, measured by the sharpness of the predictive distributions or by specific scores, remains to be assessed. We show that the impact on the Continuous Ranked Probability Score (CRPS) is weak under some hypotheses and that it is positive under more restrictive ones. We used this method on temperature ensemble forecasts and compared the quality…
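The CRPS mentioned in the abstract can be estimated directly from a finite ensemble. A minimal sketch in Python, assuming the standard empirical (so-called NRG) estimator; the function name is illustrative, not from the paper:

```python
def crps_ensemble(members, obs):
    """Empirical CRPS of an ensemble forecast for a single observation.

    Uses the estimator:
        CRPS = (1/m) * sum_i |x_i - y| - (1/(2 m^2)) * sum_{i,j} |x_i - x_j|
    where x_i are the m ensemble members and y is the verifying observation.
    Lower values indicate a better (sharper and better-calibrated) forecast.
    """
    m = len(members)
    # Mean absolute error of the members against the observation.
    term1 = sum(abs(x - obs) for x in members) / m
    # Mean absolute spread between all member pairs (including i == j).
    term2 = sum(abs(a - b) for a in members for b in members) / (2 * m * m)
    return term1 - term2
```

A perfect deterministic forecast (a single member equal to the observation) scores 0; spreading members around the observation trades accuracy against spread, which is exactly the sharpness/calibration tension the abstract discusses.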

Cited by 3 publications (10 citation statements). References 11 publications.
“…Both performance criteria should be used. Focussing only on the CRPS may lead to choosing a forecasting system that is not optimal in terms of reliability as shown here and earlier studies (Collet & Richard, 2017; Wilks, 2018). But relying only on the JP flatness tests may also be misleading.…”
Section: Discussion About Probabilistic Forecast Selection (confidence: 76%)
“…The second approach to model selection among probabilistic forecasting systems is based on a scoring rule, such as the continuous ranked probabilistic score (CRPS, Matheson and Winkler, 1976): the selected model is the one that has the best value of the scoring rule (highest or lowest value depending on the scoring rule). The two approaches to model selection do not yield equivalent forecasts, as previously mentioned in different studies (Collet & Richard, 2017; Wilks, 2018). For instance, minimizing the CRPS may lead to forecasts that are not reliable.…”
Section: Introduction (confidence: 82%)
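The citation statements contrast CRPS-based selection with reliability checks via flatness tests on the ensemble's rank histogram (a flat histogram indicates a reliable ensemble). A minimal sketch of building such a histogram, assuming strict ranking with no tie handling; names are illustrative:

```python
def rank_histogram(ensembles, observations, m):
    """Build the rank histogram of an m-member ensemble forecast system.

    For each forecast case, the observation's rank is the number of ensemble
    members strictly below it, giving m + 1 possible bins. For a reliable
    (calibrated) ensemble, the observation is equally likely to fall in any
    bin, so the histogram should be approximately flat. Ties between members
    and the observation are ignored here for simplicity.
    """
    counts = [0] * (m + 1)
    for members, obs in zip(ensembles, observations):
        rank = sum(1 for x in members if x < obs)
        counts[rank] += 1
    return counts
```

A U-shaped histogram signals an under-dispersed ensemble and a dome-shaped one an over-dispersed ensemble; a formal flatness test then checks the counts against the uniform distribution.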