2021
DOI: 10.1175/bams-d-19-0093.1

The Model Evaluation Tools (MET): More than a Decade of Community-Supported Forecast Verification

Abstract (capsule summary): MET is a community-based package of state-of-the-art tools to evaluate predictions of weather, climate, and other phenomena, with capabilities to display and analyze verification results via the METplus system.

Cited by 41 publications (36 citation statements). References 59 publications.

“…All comparisons were made using NCAR's Model Evaluation Tools V9.0 (MET) package (Brown et al., 2020), utilizing a nearest-grid-cell approach at an hourly temporal resolution.…”
Section: Verification and Diagnostics
confidence: 99%
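
The statement above pairs point observations with forecasts via a nearest-grid-cell match at hourly resolution. Below is a minimal Python sketch of that matching idea, not MET's actual implementation; the grid layout, function names, and data structures are illustrative assumptions only.

# Minimal sketch (assumed names, not MET source code): nearest-grid-cell matching
# of point observations to a regular lat/lon forecast grid, one field per valid hour.
import numpy as np

def nearest_grid_value(field, grid_lats, grid_lons, obs_lat, obs_lon):
    # Indices of the grid latitude/longitude closest to the observation location.
    j = int(np.argmin(np.abs(grid_lats - obs_lat)))
    i = int(np.argmin(np.abs(grid_lons - obs_lon)))
    return field[j, i]

def hourly_pairs(fcst_by_hour, obs_records, grid_lats, grid_lons):
    # obs_records: iterable of (valid_hour, lat, lon, observed_value) tuples.
    # fcst_by_hour: dict mapping valid_hour -> 2-D forecast array on the grid.
    pairs = []
    for hour, lat, lon, obs_value in obs_records:
        fcst_value = nearest_grid_value(fcst_by_hour[hour], grid_lats, grid_lons, lat, lon)
        pairs.append((hour, fcst_value, obs_value))
    return pairs

Verification statistics such as bias or RMSE would then be computed over the matched pairs, which mirrors, at a very high level, the matched-pair output that a point-to-grid comparison in a tool like MET produces.
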
“…The namelist.input file, which is used for the WRF configuration, and scripts for running WRF in NWP mode are uploaded with open access to Zenodo: https://doi.org/10.5281/zenodo.3894491. Model Evaluation Tools V9.0 (MET) from the NCAR Research Applications Laboratory (generation of verification statistics) is open source and available from https://ral.ucar.edu/solutions/products/model-evaluation-tools-met (Brown et al., 2020). The NCAR Command Language (NCL) V6.2 (2019) consists of open-source graphics and is used for overwriting soil moisture data when running NWP mode.…”
confidence: 99%
“…In fact, any forecast that correctly predicts the occurrence of highly localized heavy rain may incur the so-called "double penalty" error [90] if it places the event in a nearby area, producing, for example, a root mean squared error (RMSE, see [91]) higher than that of another forecast which completely misses the prediction. To overcome this limitation, object-based verification methods have been developed by the scientific community [92], and software packages for practical applications currently exist [93]. These methods exhibit some drawbacks, such as the smoothing and filtering the observations undergo, and the large number of parameters whose settings are somewhat arbitrary.…”
Section: Quantitative Precipitation Forecast Verification
confidence: 99%
“…In fact, any forecast that correctly predicts the occurrence of highly localized heavy rain may incur the so-called "double penalty" error [81] if it places the event in a nearby area, producing, for example, a root mean squared error (RMSE, see [82]) higher than that of another forecast which completely misses the prediction. To overcome this limitation, object-based verification methods have been developed by the scientific community [83], and software packages for practical applications currently exist [84]. However, these methods exhibit some drawbacks, such as the smoothing and filtering the observations undergo and the large number of parameters whose settings are somewhat arbitrary.…”
Section: Quantitative Precipitation Forecast Verification
confidence: 99%
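
The double-penalty effect described in the two statements above can be reproduced with a tiny numerical example (the values are invented for illustration, not taken from the cited papers): a forecast that places the observed rain one grid cell away from where it occurred is penalized at both locations and ends up with a worse RMSE than a forecast that misses the event entirely.

# Illustrative double-penalty example on a 10-cell 1-D "grid" (assumed values).
import numpy as np

def rmse(forecast, observed):
    return float(np.sqrt(np.mean((forecast - observed) ** 2)))

obs = np.zeros(10)
obs[4] = 20.0                      # 20 mm of rain observed in cell 4

fcst_displaced = np.zeros(10)
fcst_displaced[5] = 20.0           # rain predicted, but one cell away

fcst_miss = np.zeros(10)           # no rain predicted at all

print(rmse(fcst_displaced, obs))   # ~8.94: penalized twice (miss in cell 4, false alarm in cell 5)
print(rmse(fcst_miss, obs))        # ~6.32: penalized once (miss in cell 4)

Object-based methods (e.g., MODE in MET) instead compare attributes of identified precipitation objects, such as location, size, and intensity, so the displaced forecast receives credit for capturing the event rather than being penalized twice.
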