Leading NWP centers have agreed to create a database of their operational ensemble forecasts and to open access to researchers, in order to accelerate the development of probabilistic forecasting of high-impact weather. Objectives and concept. During the past decade, ensemble forecasting has undergone rapid development in all parts of the world. Ensembles are now generally accepted as a reliable approach to forecast confidence estimation, especially in the case of high-impact weather. Their application to quantitative probabilistic forecasting is also increasing rapidly. In addition, there has been strong interest in the development of multimodel ensembles, whether based on a set of single (deterministic) forecasts from different systems, or on a set of ensemble forecasts from different systems (the so-called superensemble). The hope is that multimodel ensembles will provide an affordable approach to the classical goal of increasing the hit rate for prediction of high-impact weather without increasing the false-alarm rate. This is being taken further within The Observing System Research and Predictability Experiment (THORPEX), a major component of the World Weather Research Programme (WWRP) under the World Meteorological Organization (WMO). A key goal of THORPEX is to accelerate improvements in forecasting high-impact weather.
ABSTRACT: Research and development of new verification strategies and reassessment of traditional forecast verification methods have received a great deal of attention from the scientific community in the last decade. This scientific effort has arisen from the need to respond to changes encompassing several aspects of the verification process, such as the evolution of forecasting systems, or the desire for more meaningful verification approaches that address specific forecast user requirements. Verification techniques that account for the spatial structure and the presence of features in forecast fields, and which are designed specifically for high-resolution forecasts, have been developed. The advent of ensemble forecasts has motivated the re-evaluation of some of the traditional scores and the development of new verification methods for probability forecasts. The expected climatological increase in extreme events and their potential socioeconomic impacts have revitalized research addressing the challenges of extreme-event verification. Verification issues encountered in the operational forecasting environment have been widely discussed, verification needs for different user communities have been identified, and models to assess the forecast value for specific users have been proposed. Proper verification practice and correct interpretation of verification statistics have been extensively promoted through recent publications and books, tutorials and workshops, and the development of open-source software and verification tools. This paper addresses some of the current issues in forecast verification, reviews some of the most recently developed verification techniques, and provides recommendations for future research.
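To make the probability-forecast verification discussed above concrete, here is a minimal sketch of the Brier score, one of the traditional scores for binary-event probability forecasts. The forecast probabilities and observed outcomes below are hypothetical illustration values, not data from the paper.

```python
def brier_score(probs, outcomes):
    """Brier score: mean squared difference between forecast probabilities
    and binary observed outcomes (0 is a perfect score)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical probability-of-precipitation forecasts and outcomes (1 = event occurred)
probs = [0.9, 0.1, 0.7, 0.3]
outcomes = [1, 0, 1, 0]

bs = brier_score(probs, outcomes)  # -> 0.05; lower is better
```

A skill score relative to a reference forecast (e.g., climatology) is then 1 - BS/BS_ref, the same construction used for the skill scores mentioned in the abstracts below.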
The International Grand Global Ensemble (TIGGE) was a major component of The Observing System Research and Predictability Experiment (THORPEX) research program, whose aim was to accelerate improvements in forecasting high-impact weather. By providing ensemble prediction data from leading operational forecast centers, TIGGE has enhanced collaboration between the research and operational meteorological communities and enabled research studies on a wide range of topics. The paper covers the objective evaluation of the TIGGE data. For a range of forecast parameters, it is shown to be beneficial to combine ensembles from several data providers in a multimodel grand ensemble. Alternative methods to correct systematic errors, including the use of reforecast data, are also discussed. TIGGE data have been used for a range of research studies on predictability and dynamical processes. Tropical cyclones are the most destructive weather systems in the world and are a focus of multimodel ensemble research. Their extratropical transition also has a major impact on the skill of midlatitude forecasts. We also review how TIGGE has added to our understanding of the dynamics of extratropical cyclones and storm tracks. Although TIGGE is a research project, it has proved invaluable for the development of products for future operational forecasting. Examples include the forecasting of tropical cyclone tracks, heavy rainfall, strong winds, and flood prediction through coupling hydrological models to ensembles. Finally, the paper considers the legacy of TIGGE. We discuss the priorities and key issues in predictability and ensemble forecasting, including the new opportunities of convective-scale ensembles, links with ensemble data assimilation methods, and extension of the range of useful forecast skill.
Bayesian model averaging (BMA) has recently been proposed as a way of correcting underdispersion in ensemble forecasts. BMA is a standard statistical procedure for combining predictive distributions from different sources. The output of BMA is a probability density function (pdf), which is a weighted average of pdfs centered on the bias-corrected forecasts. The BMA weights reflect the relative contributions of the component models to the predictive skill over a training sample. The variance of the BMA pdf is made up of two components, the between-model variance and the within-model error variance, both estimated from the training sample. This paper describes the results of experiments with BMA to calibrate surface temperature forecasts from the 16-member Canadian ensemble system. Using one year of ensemble forecasts, BMA was applied for different training periods ranging from 25 to 80 days. The method was trained on the most recent forecast period, then applied to the next day’s forecasts as an independent sample. This process was repeated through the year, and forecast quality was evaluated using rank histograms, the continuous ranked probability score, and the continuous ranked probability skill score. An examination of the BMA weights provided a useful comparative evaluation of the component models, both for the ensemble itself and for the ensemble augmented with the unperturbed control forecast and the higher-resolution deterministic forecast. Training periods around 40 days provided a good calibration of the ensemble dispersion. Both full regression and simple bias-correction methods worked well to correct the bias, except that the full regression failed to completely remove seasonal trend biases in spring and fall. Simple correction of the bias was sufficient to produce positive forecast skill out to 10 days with respect to climatology, which was improved by the BMA.
The addition of the control forecast and the full-resolution model forecast to the ensemble produced modest improvement in the forecasts for ranges out to about 7 days. Finally, BMA produced significantly narrower 90% prediction intervals compared to a simple Gaussian bias correction, while achieving similar overall accuracy.
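The BMA construction described in this abstract (a predictive pdf formed as a weighted average of pdfs centered on bias-corrected member forecasts) can be sketched as follows. This is a minimal illustration under the common assumption of Gaussian component pdfs with a shared within-model spread; the member forecasts, biases, weights, and spread below are hypothetical, not values from the paper.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bma_pdf(x, forecasts, biases, weights, sigma):
    """BMA predictive density: a weighted average of Gaussian pdfs,
    each centered on a bias-corrected member forecast. The weights
    and the within-model spread sigma would be estimated from a
    training sample; here they are fixed illustration values."""
    return sum(w * normal_pdf(x, f - b, sigma)
               for f, b, w in zip(forecasts, biases, weights))

# Hypothetical 4-member ensemble of surface temperatures (deg C)
forecasts = [12.1, 13.4, 11.8, 12.9]
biases = [0.5, 0.5, 0.5, 0.5]    # additive biases from the training period
weights = [0.4, 0.3, 0.2, 0.1]   # BMA weights (sum to 1)
sigma = 1.2                      # within-model standard deviation

density_at_12 = bma_pdf(12.0, forecasts, biases, weights, sigma)
```

Because the mixture spreads probability across all bias-corrected members, its variance includes both the between-model spread of the centers and the within-model variance sigma squared, which is how BMA widens an underdispersive ensemble.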
Verification scientists and practitioners came together at the 5th International Verification Methods Workshop in Melbourne, Australia, in December 2011 to discuss methods for evaluating forecasts within a wide variety of applications. Progress has been made in many areas, including improved verification reporting, wider use of diagnostic verification, development of new scores and techniques for difficult problems, and evaluation of forecasts for applications using meteorological information. There are many interesting challenges, particularly the improvement of methods to verify high-resolution ensemble forecasts, seamless predictions spanning multiple spatial and temporal scales, and multivariate forecasts. Greater efforts are needed to make best use of new observations, forge greater links between data assimilation and verification, and develop better and more intuitive forecast verification products for end users.