2018
DOI: 10.1175/mwr-d-17-0223.1
Sensitivity of Ensemble Forecast Verification to Model Bias

Abstract: This study demonstrates how model bias can adversely affect the quality assessment of an ensemble prediction system (EPS) by verification metrics. A regional EPS [Global and Regional Assimilation and Prediction Enhanced System-Regional Ensemble Prediction System (GRAPES-REPS)] was verified over a period of one month over China. Three variables (500-hPa and 2-m temperatures, and 250-hPa wind) are selected to represent “strong” and “weak” bias situations. Ensemble spread and probabilistic forecasts are compared …

Cited by 29 publications (25 citation statements) · References 32 publications
“…This provides quantitative evidence for the discrepancy between the optimal spread–error ratio of a raw biased ensemble and the optimal spread–error ratio of a bias-corrected ensemble with b* ≈ 0. This gives a quantitative basis in support of the advice by Wang et al. (2018) to base NWP development on the verification of bias-corrected ensembles. The bias correction can be performed directly in the hoG model by setting b* to zero in Equation (11).…”
Section: Homogeneous Gaussian Forecast–Observation Distribution (supporting)
Confidence: 61%
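The effect this excerpt describes can be illustrated with a small synthetic experiment (a sketch using made-up Gaussian data, not the cited hoG model): a constant forecast bias inflates the RMSE of the ensemble mean, so a raw biased ensemble appears underdispersive even though, after a simple mean-bias correction, its spread is statistically consistent with its error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: latent state, unit-variance observation error, and an
# ensemble whose members are exchangeable with the observation apart from
# a constant bias b (illustrative values, not GRAPES-REPS data).
n_cases, n_members, b = 500, 20, 1.5
state = rng.normal(0.0, 1.0, n_cases)
obs = state + rng.normal(0.0, 1.0, n_cases)
ens = state[:, None] + rng.normal(0.0, 1.0, (n_cases, n_members)) + b

def spread_error_ratio(ens, obs):
    """Mean ensemble spread divided by the RMSE of the ensemble mean."""
    spread = ens.std(axis=1, ddof=1).mean()
    rmse = np.sqrt(np.mean((ens.mean(axis=1) - obs) ** 2))
    return spread / rmse

raw = spread_error_ratio(ens, obs)

# Mean-bias correction: subtract the sample bias of the ensemble mean.
bias = (ens.mean(axis=1) - obs).mean()
corrected = spread_error_ratio(ens - bias, obs)

print(raw, corrected)  # raw ratio is well below 1; corrected ratio is near 1
```

The raw ratio falls well below one purely because the bias term enters the RMSE, which is the discrepancy between the two "optimal" spread–error ratios that the excerpt quantifies.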
“…Wang et al. (2018) provide an example for the GRAPES-REPS regional NWP ensemble. They show that the CRPS of 500-hPa temperature over East Asia is 40% larger in the raw forecast than in the bias-corrected forecast.…”
Section: Introduction (mentioning)
Confidence: 99%
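The CRPS penalty incurred by an uncorrected bias can be reproduced with a toy ensemble. This is a sketch on synthetic Gaussian data: the empirical CRPS estimator E|X − y| − ½·E|X − X′| is standard, but the magnitudes below are illustrative and are not the GRAPES-REPS values quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)

def crps_ensemble(ens, obs):
    """Empirical CRPS, E|X - y| - 0.5 * E|X - X'|, averaged over cases."""
    t1 = np.abs(ens - obs[:, None]).mean(axis=1)
    t2 = np.abs(ens[:, :, None] - ens[:, None, :]).mean(axis=(1, 2))
    return float((t1 - 0.5 * t2).mean())

# Synthetic ensemble with a constant bias b (illustrative parameters).
n_cases, n_members, b = 400, 20, 1.0
state = rng.normal(0.0, 1.0, n_cases)
obs = state + rng.normal(0.0, 1.0, n_cases)
ens = state[:, None] + rng.normal(0.0, 1.0, (n_cases, n_members)) + b

crps_raw = crps_ensemble(ens, obs)

# Remove the sample mean bias and re-score.
bias = (ens.mean(axis=1) - obs).mean()
crps_bc = crps_ensemble(ens - bias, obs)

print(crps_raw, crps_bc)  # raw CRPS is substantially larger than bias-corrected CRPS
```

Because CRPS is a proper score that penalizes displacement of the whole predictive distribution, the raw score is substantially worse than the bias-corrected one even though the two ensembles have identical spread.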
“…However, the underdispersion has not been entirely fixed, since both experiments show spread values well below their corresponding RMSE for almost every variable and lead time. This may be because neither the multiphysics nor the multistochastic scheme can address all sources of uncertainty, and observation error has not been taken into account; in addition, systematic model errors that derive primarily from inherent deficiencies in the model [e.g., grid resolution, finite differences, representation of physics and dynamics (Harr et al. 1983; Krishnamurti et al. 2016)] may also contribute to the underdispersion in terms of verification metrics (Wang et al. 2018).…”
Section: (iii) Frequency Bias (mentioning)
Confidence: 99%
“…Ensemble prediction can produce both good and poor forecasts (Bauer et al., 2015). Poor ensemble predictions contain bias and dispersion errors because they have not been corrected or calibrated (Baran and Möller, 2017; Raftery et al., 2005; Wang et al., 2018). Bias is an inherent property of the output of dynamical weather and climate prediction models, so the predicted values differ from the observations (L'Heureux et al., 2016).…” [translated from Indonesian]
Section: Pendahuluan [Introduction] (unclassified)