In 2006, the National Centers for Environmental Prediction (NCEP) implemented the Real-Time Mesoscale Analysis (RTMA) in collaboration with the Earth System Research Laboratory and the National Environmental Satellite, Data, and Information Service (NESDIS). In this work, a description of the RTMA applied to the 5-km resolution conterminous U.S. grid of the National Digital Forecast Database is given. Its two-dimensional variational data assimilation (2DVAR) component used to analyze near-surface observations is described in detail, and a brief discussion of the remapping of the NCEP stage II quantitative precipitation amount and NESDIS Geostationary Operational Environmental Satellite (GOES) sounder effective cloud amount to the 5-km grid is offered. Terrain-following background error covariances are used with the 2DVAR approach, which produces gridded fields of 2-m temperature, 2-m specific humidity, 2-m dewpoint, 10-m U and V wind components, and surface pressure. The estimate of the analysis uncertainty via the Lanczos method is briefly described. The strength of the 2DVAR is illustrated by (i) its ability to analyze a June 2007 cold temperature pool over the Washington, D.C., area; (ii) its fairly good analysis of a December 2008 mid-Atlantic region high-wind event that started from a very weak first guess; and (iii) its successful recovery of the finescale moisture features in a January 2010 case study over southern California. According to a cross-validation analysis for a 15-day period during November 2009, root-mean-square error improvements over the first guess range from 16% for wind speed to 45% for specific humidity.
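The abstract does not include code, but the 2DVAR step it describes amounts to minimizing (or, in the linear case, solving directly) the standard variational cost function J(x) = ½(x − xb)ᵀB⁻¹(x − xb) + ½(y − Hx)ᵀR⁻¹(y − Hx). The sketch below illustrates that idea for a single scalar surface field on a toy grid; the Gaussian, terrain-height-damped form of B stands in for the terrain-following background error covariances mentioned above, and all function names, length scales, and grid values are illustrative assumptions rather than the operational RTMA configuration.

```python
# Minimal linear 2DVAR sketch for one scalar surface field (e.g., 2-m temperature).
# Assumed: Gaussian background-error correlations damped by terrain-height differences.
import numpy as np

def background_error_cov(xy, z, sigma_b=1.5, L=50e3, Lz=200.0):
    """Assumed B: Gaussian in horizontal distance, damped by terrain-height differences."""
    dx = xy[:, None, :] - xy[None, :, :]            # pairwise position differences
    dist2 = np.sum(dx ** 2, axis=-1)
    dz2 = (z[:, None] - z[None, :]) ** 2
    corr = np.exp(-dist2 / (2 * L ** 2)) * np.exp(-dz2 / (2 * Lz ** 2))
    return sigma_b ** 2 * corr

def analyze_2dvar(xb, B, H, y, sigma_o=1.0):
    """Linear 2DVAR solution: xa = xb + B H^T (H B H^T + R)^-1 (y - H xb)."""
    R = sigma_o ** 2 * np.eye(len(y))
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # gain matrix
    return xb + K @ (y - H @ xb)

# Toy example: four grid points along a line, one of them on a high ridge.
xy = np.array([[0.0, 0.0], [30e3, 0.0], [60e3, 0.0], [90e3, 0.0]])
z = np.array([200.0, 1500.0, 300.0, 250.0])         # terrain heights (m)
xb = np.zeros(4)                                    # background (first-guess) increment
H = np.array([[1.0, 0.0, 0.0, 0.0]])                # one observation at the first point
y = np.array([2.0])                                 # observation departs +2 from the guess
B = background_error_cov(xy, z)
print(analyze_2dvar(xb, B, H, y))   # correction decays with distance and height difference
```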
The U.S. National Blend of Models provides statistically postprocessed, high-resolution multimodel ensemble guidance, providing National Weather Service forecasters with a calibrated, downscaled starting point for producing digital forecasts. Forecasts of 12-hourly probability of precipitation (POP12) over the contiguous United States are produced as follows: 1) Populate the forecast and analyzed cumulative distribution functions (CDFs) to be used later in quantile mapping. Were every grid point processed without benefit of data from other points, 60 days of training data would likely be insufficient for estimating CDFs and adjusting the errors in the forecast. Accordingly, “supplemental” locations were identified for each grid point, and data from the supplemental locations were used to populate the forecast and analyzed CDFs used in the quantile mapping. 2) Load the real-time U.S. and Environment Canada (now known as Environment and Climate Change Canada) global deterministic and ensemble forecasts, interpolated to ⅛°. 3) Using CDFs from the past 60 days of data, apply a deterministic quantile mapping to the ensemble forecasts. 4) Dress the resulting ensemble with random noise. 5) Generate probabilities from the ensemble relative frequency. 6) Spatially smooth the forecast using a Savitzky–Golay smoother, applying more smoothing in flatter areas. Forecasts of 6-hourly quantitative precipitation (QPF06) are more simply produced as follows: 1) Form a grand ensemble mean, again interpolated to ⅛°. 2) Quantile map the mean forecast using CDFs of the ensemble mean and analyzed distributions. 3) Spatially smooth the field, similar to POP12. Results for spring 2016 are provided, demonstrating that the postprocessing improves POP12 reliability and skill, as well as the deterministic forecast bias, while maintaining sharpness and spatial detail.
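As a rough illustration of the quantile-mapping step (step 3 for POP12, and the analogous QPF06 step applied to the ensemble mean), the sketch below maps a raw forecast value through an empirical forecast CDF and back through the analyzed CDF. The training samples, distributions, and function names are invented for the example; in the Blend the CDFs are populated from roughly 60 days of data pooled with the supplemental locations described above.

```python
# Empirical quantile-mapping sketch: raw forecast -> forecast quantile -> analyzed value.
import numpy as np

def quantile_map(raw_fcst, train_fcst, train_anal):
    """Map raw_fcst through the forecast CDF and back through the analyzed CDF."""
    fcst_sorted = np.sort(train_fcst)
    anal_sorted = np.sort(train_anal)                     # same sample size assumed
    quantiles = np.linspace(0.0, 1.0, fcst_sorted.size)
    q = np.interp(raw_fcst, fcst_sorted, quantiles)       # quantile of the raw forecast
    return np.interp(q, quantiles, anal_sorted)           # analyzed value at that quantile

rng = np.random.default_rng(0)
train_fcst = rng.gamma(2.0, 3.0, size=600)   # toy training forecasts (wet-biased)
train_anal = rng.gamma(2.0, 2.0, size=600)   # toy analyzed precipitation
print(quantile_map(10.0, train_fcst, train_anal))   # bias-adjusted forecast amount
```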
Experimental gridded forecasts of surface temperature issued by National Weather Service offices in the western United States during the 2003/04 winter season (18 November 2003–29 February 2004) are evaluated relative to surface observations and gridded analyses. The 5-km horizontal resolution gridded forecasts issued at 0000 UTC for forecast lead times at 12-h intervals from 12 to 168 h were obtained from the National Digital Forecast Database (NDFD). Forecast accuracy and skill are determined relative to observations at over 3000 locations archived by MesoWest. Forecast quality is also determined relative to Rapid Update Cycle (RUC) analyses at 20-km resolution that are interpolated to the 5-km NDFD grid as well as objective analyses obtained from the Advanced Regional Prediction System Data Assimilation System that rely upon the MesoWest observations and RUC analyses. For the West as a whole, the experimental temperature forecasts issued at 0000 UTC during the 2003/04 winter season exhibit skill at lead times of 12, 24, 36, and 48 h on the basis of several verification approaches. Subgrid-scale temperature variations and observational and analysis errors undoubtedly contribute some uncertainty regarding these results. Even though the “true” values appropriate to evaluate the forecast values on the NDFD grid are unknown, it is estimated that the root-mean-square errors of the NDFD temperature forecasts are on the order of 3°C at lead times shorter than 48 h and greater than 4°C at lead times longer than 120 h. However, such estimates are derived from only a small fraction of the NDFD grid boxes. Incremental improvements in forecast accuracy as a result of forecaster adjustments to the 0000 UTC temperature grids from 144- to 24-h lead times are estimated to be on the order of 13%.
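A minimal sketch of the kind of station-based verification described here, assuming simple arrays of matched forecast and observed temperatures: root-mean-square error at two lead times and the percent improvement of one over the other. All values are placeholders, not results from the study.

```python
# RMSE and percent-improvement sketch for matched forecast/observation pairs.
import numpy as np

def rmse(forecast, observed):
    """Root-mean-square error over matched forecast/observation pairs."""
    return float(np.sqrt(np.mean((forecast - observed) ** 2)))

def percent_improvement(rmse_old, rmse_new):
    """Relative RMSE reduction of rmse_new compared with rmse_old, in percent."""
    return 100.0 * (rmse_old - rmse_new) / rmse_old

obs = np.array([1.2, -3.5, 0.8, 4.1, -0.6])                # station temperatures (°C)
fcst_144h = obs + np.array([3.0, -4.2, 2.5, 3.8, -3.1])    # toy 144-h forecast errors
fcst_24h = obs + np.array([1.1, -2.0, 0.9, 2.2, -1.5])     # toy 24-h forecast errors
print(rmse(fcst_144h, obs), rmse(fcst_24h, obs))
print(percent_improvement(rmse(fcst_144h, obs), rmse(fcst_24h, obs)))
```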
Federal, state, and other wildland resource management agencies contribute to the collection of weather observations from over 1000 Remote Automated Weather Stations (RAWS) in the western United States. The impact of RAWS observations on surface objective analyses during the 2003/04 winter season was assessed using the Advanced Regional Prediction System (ARPS) Data Assimilation System (ADAS). A set of control analyses was created each day at 0000 and 1200 UTC using the Rapid Update Cycle (RUC) analyses as the background fields and assimilating approximately 3000 surface observations from MesoWest. Another set of analyses was generated by withholding all of the RAWS observations available at each time while 10 additional sets of analyses were created by randomly withholding comparable numbers of observations obtained from all sources. Random withholding of observations from the analyses provides a baseline estimate of the analysis quality. Relative to this baseline, removing the RAWS observations degrades temperature (wind speed) analyses by an additional 0.5°C (0.9 m s⁻¹) when evaluated in terms of rmse over the entire season. RAWS temperature observations adjust the RUC background the most during the early morning hours and during winter season cold pool events in the western United States while wind speed observations have a greater impact during active weather periods. The average analysis area influenced by at least 1.0°C (2.5°C) by withholding each RAWS observation is on the order of 600 km² (100 km²). The spatial influence of randomly withheld observations is much less.
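The observation-impact measure described here is a data-denial comparison: the extra RMSE incurred when RAWS observations are withheld, relative to the baseline obtained by withholding a comparable number of randomly chosen observations. A toy sketch of that difference, with placeholder values, follows.

```python
# Data-denial impact sketch: extra RMSE from withholding RAWS vs. random withholding.
import numpy as np

def rmse(analysis, withheld_obs):
    """RMSE of analysis values interpolated to withheld observation sites."""
    return float(np.sqrt(np.mean((analysis - withheld_obs) ** 2)))

# Placeholder analysis/observation pairs at withheld sites (temperature, °C)
obs = np.array([-2.0, 1.5, 0.3, -4.2])
analysis_no_raws = np.array([-0.1, 2.8, 1.9, -2.0])   # all RAWS observations withheld
analysis_random = np.array([-1.2, 2.0, 0.9, -3.4])    # comparable random withholding
extra_degradation = rmse(analysis_no_raws, obs) - rmse(analysis_random, obs)
print(round(extra_degradation, 2))   # additional RMSE attributable to denying RAWS data
```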
The terrain between grid points is used to modify locally the background error correlation matrix in an objective analysis system. This modification helps to reduce the influence across mountain barriers of corrections to the background field that are derived from surface observations. This change to the background error correlation matrix is tested using an analytic case of surface temperature that encapsulates the significant features of nocturnal radiation inversions in mountain basins, which can be difficult to analyze because of locally sharp gradients in temperature. Bratseth successive corrections, optimal interpolation, and three-dimensional variational approaches are shown to yield exactly the same surface temperature analysis. Adding the intervening terrain term to the Bratseth approach led to solutions that match more closely the specified analytic solution. In addition, the convergence of the Bratseth solutions to the best linear unbiased estimation of the analytic solution is faster. The intervening terrain term was evaluated in objective analyses over the western United States derived from a modified version of the Advanced Regional Prediction System Data Assimilation System. Local adjustment of the background error correlation matrix led to improved surface temperature analyses by limiting the influence of observations in mountain valleys that may differ from the weather conditions present in adjacent valleys.
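The paper's specific formulation is not reproduced here, but the idea of an intervening-terrain term can be sketched as follows: start from a Gaussian horizontal correlation and damp it further when terrain along the path between two grid points rises well above both endpoints. The functional form, scale height, and grid below are assumptions made for illustration.

```python
# Intervening-terrain correlation sketch on a 1D grid (assumed functional form).
import numpy as np

def intervening_terrain_corr(x, z, terrain, L=40e3, Hz=250.0):
    """Pairwise correlations with an assumed penalty for terrain barriers between points."""
    n = x.size
    corr = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            base = np.exp(-(x[i] - x[j]) ** 2 / (2 * L ** 2))
            lo, hi = sorted((i, j))
            # How far the terrain between the two points rises above the higher endpoint.
            barrier = max(terrain[lo:hi + 1].max() - max(z[i], z[j]), 0.0)
            corr[i, j] = base * np.exp(-(barrier / Hz) ** 2)
    return corr

x = np.arange(5) * 20e3                                    # grid-point positions (m)
terrain = np.array([300.0, 400.0, 2200.0, 450.0, 350.0])   # a ridge between two valleys
print(np.round(intervening_terrain_corr(x, terrain, terrain), 3))
```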