2019
DOI: 10.1002/met.1818
Statistical post‐processing of ensemble forecasts of temperature in Santiago de Chile

Abstract: Modelling forecast uncertainty is a difficult task in any forecasting problem. In weather forecasting a possible solution is the use of forecast ensembles, which are obtained from multiple runs of numerical weather prediction models with various initial conditions and model parametrizations to provide information about the expected uncertainty. Currently all major meteorological centres issue forecasts using their operational ensemble prediction systems. However, it is a general problem that the spread of the …

Cited by 7 publications (9 citation statements)
References 62 publications
“…Many studies have shown that a good choice for the length of the sliding‐window training period ranges from 20 to 40 days (e.g. Feldmann et al., 2015; Möller et al., 2015; Díaz et al., 2019). To test the sensitivity of coefficient calibration to the length of the training period, we compared 15, 30 and 90 day sliding windows when estimating the calibration coefficients.…”
Section: Calibration and Verification Methods
Mentioning, confidence: 99%
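The sliding-window scheme quoted above can be illustrated with a short sketch: for each forecast date, only the preceding N days are used to estimate the calibration coefficients. The DataFrame layout and column names (`obs`, `ens_mean`) below are illustrative assumptions, and the simple linear bias correction stands in for whichever post-processing model is actually being trained.

```python
import numpy as np
import pandas as pd

def sliding_window_coefficients(data: pd.DataFrame, window_days: int = 30) -> dict:
    """Estimate simple calibration coefficients (obs ~ a + b * ens_mean) for each
    forecast date, training only on the window_days days preceding that date.

    `data` is assumed to be indexed by consecutive dates and to contain the
    columns 'obs' (verifying observation) and 'ens_mean' (ensemble mean forecast).
    """
    coefficients = {}
    for i, date in enumerate(data.index):
        train = data.iloc[max(0, i - window_days):i]   # sliding training window
        if len(train) < window_days:
            continue                                   # skip dates without a full history
        b, a = np.polyfit(train["ens_mean"], train["obs"], deg=1)
        coefficients[date] = (a, b)
    return coefficients

# Comparing window lengths, as in the quoted study, amounts to re-running the
# estimation with window_days = 15, 30 and 90 and verifying each variant.
```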
“…Other spatially adaptive approaches have been studied for temperature and wind speed forecasts. A similar idea as that here is used in cluster‐based approaches, where the calibrated stations are grouped by their altitude (Díaz et al., 2019) or climatological characteristics (Lerch and Baran, 2017). Instead of grouping the stations, we try to include the station‐specific characteristics linearly to the model.…”
Section: Introduction
Mentioning, confidence: 99%
“…Díaz et al. (2019) pointed out that the distance measure should include the station climatology and ensemble forecast errors. Therefore, when using the K‐means method for classifying these stations, the similarity between two stations is defined by the forecast performance (including calibration metric Δ, forecast error metric MAE and forecast skill metric CRPSS) and climate division.…”
Section: Methods
Mentioning, confidence: 99%
“…For example, Lerch and Baran (2017) found that using the distribution of forecast errors as the similarity measure augments the training data, which helps to improve the predictive performance of the post-processing methods. Díaz et al. (2019) pointed out that the distance measure should include the station climatology and ensemble forecast errors. Therefore, when using the K-means method for classifying these stations, the similarity between two stations is defined by the forecast performance (including calibration metric Δ, forecast error metric MAE and forecast skill metric CRPSS) and climate division.…”
Section: Verification Metrics
Mentioning, confidence: 99%
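The station-grouping idea described in the last two statements can be sketched with scikit-learn's K-means, assuming each station is summarised by a small feature vector of verification metrics (here a calibration metric, MAE and CRPSS). The numbers and feature layout are purely illustrative assumptions, not data or code from the cited studies.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-station features: [calibration metric, MAE (degC), CRPSS]
station_features = np.array([
    [0.12, 1.8, 0.35],
    [0.30, 2.4, 0.10],
    [0.15, 1.9, 0.33],
    [0.28, 2.6, 0.08],
    [0.11, 1.7, 0.37],
])

# Standardise the metrics so no single one dominates the Euclidean distance
scaled = StandardScaler().fit_transform(station_features)

# Stations in the same cluster would then share one set of post-processing
# coefficients, which augments the training data available to each group
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(labels)  # cluster index per station
```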
“…Díaz et al. (2019) used the BMA and EMOS post‐processing methods for 2 m temperature probabilistic forecasts. They showed that these post‐processing methods result in a significant decrease in the continuous ranked probability scores (CRPS) of probabilistic forecasts.…”
Section: Introduction
Mentioning, confidence: 99%
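For context on the CRPS mentioned in this last statement: when the calibrated predictive distribution is Gaussian (the usual choice in EMOS for temperature), the score has a closed form. The sketch below is a generic implementation of that formula, not code from Díaz et al. (2019).

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(y: float, mu: float, sigma: float) -> float:
    """Closed-form CRPS of a Gaussian predictive distribution N(mu, sigma^2)
    evaluated at the observation y (Gneiting et al., 2005)."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z)
                    - 1.0 / np.sqrt(np.pi))

# Example: observation 21.3 degC against a calibrated forecast N(20.5, 1.2**2)
print(crps_gaussian(21.3, 20.5, 1.2))
```

Averaging this score over all forecast cases gives the mean CRPS that calibrated and raw ensemble forecasts are typically compared on.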