Abstract. We undertook a comprehensive evaluation of 22 gridded (quasi-)global (sub-)daily precipitation (P) datasets for the period 2000–2016. Thirteen non-gauge-corrected P datasets were evaluated using daily P gauge observations from 76 086 gauges worldwide. Another nine gauge-corrected datasets were evaluated using hydrological modeling, by calibrating the HBV conceptual model against streamflow records for each of 9053 small to medium-sized (< 50 000 km²) catchments worldwide, and comparing the resulting performance. Marked differences in spatiotemporal patterns and accuracy were found among the datasets. Among the uncorrected P datasets, the satellite- and reanalysis-based MSWEP-ng V1.2 and V2.0 datasets generally showed the best temporal correlations with the gauge observations, followed by the reanalyses (ERA-Interim, JRA-55, and NCEP-CFSR) and the satellite- and reanalysis-based CHIRP V2.0 dataset, the estimates based primarily on passive microwave remote sensing of rainfall (CMORPH V1.0, GSMaP V5/6, and TMPA 3B42RT V7) or near-surface soil moisture (SM2RAIN-ASCAT), and finally, estimates based primarily on thermal infrared imagery (GridSat V1.0, PERSIANN, and PERSIANN-CCS). Two of the three reanalyses (ERA-Interim and JRA-55) unexpectedly obtained lower trend errors than the satellite datasets. Among the corrected P datasets, the ones directly incorporating daily gauge data (CPC Unified, and MSWEP V1.2 and V2.0) generally provided the best calibration scores, although the good performance of the fully gauge-based CPC Unified is unlikely to translate to sparsely gauged or ungauged regions. Next best results were obtained with P estimates directly incorporating temporally coarser gauge data (CHIRPS V2.0, GPCP-1DD V1.2, TMPA 3B42 V7, and WFDEI-CRU), which in turn outperformed the one indirectly incorporating gauge data through another multi-source dataset (PERSIANN-CDR V1R1). Our results highlight large differences in estimation accuracy, and hence the importance of P dataset selection in both research and operational applications. The good performance of MSWEP emphasizes that careful data merging can exploit the complementary strengths of gauge-, satellite-, and reanalysis-based P estimates.
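As a minimal, illustrative sketch of the kind of gauge-based evaluation described above (not the study's actual code), the snippet below computes the Pearson correlation between a daily gridded precipitation series sampled at a gauge location and the gauge record itself; the synthetic data and all names are assumptions for demonstration only.

# Minimal sketch (illustrative, not the paper's implementation): temporal
# correlation between daily gridded P at a gauge location and the gauge record.
import numpy as np
import pandas as pd

def gauge_correlation(grid_p: pd.Series, gauge_p: pd.Series) -> float:
    """Pearson correlation of daily precipitation over the common, valid days."""
    df = pd.concat({"grid": grid_p, "gauge": gauge_p}, axis=1).dropna()
    if len(df) < 365:  # require at least roughly one year of overlap (assumed threshold)
        return np.nan
    return df["grid"].corr(df["gauge"])

# Synthetic example: one gauge with a 2000-2016 daily series.
days = pd.date_range("2000-01-01", "2016-12-31", freq="D")
rng = np.random.default_rng(0)
gauge = pd.Series(rng.gamma(0.3, 8.0, len(days)), index=days)                # pseudo gauge record
grid = gauge * rng.normal(1.0, 0.3, len(days)) + rng.normal(0.0, 1.0, len(days))  # noisy "gridded" estimate
print(f"Daily correlation: {gauge_correlation(grid, gauge):.2f}")

In practice such a score would be computed per gauge and aggregated by dataset and region, alongside other metrics such as bias and trend errors.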
Abstract. ERA-Interim/Land is a global land surface reanalysis data set covering the period 1979–2010. It describes the evolution of soil moisture, soil temperature and snowpack. ERA-Interim/Land is the result of a single 32-year simulation with the latest ECMWF (European Centre for Medium-Range Weather Forecasts) land surface model driven by meteorological forcing from the ERA-Interim atmospheric reanalysis and precipitation adjustments based on monthly GPCP v2.1 (Global Precipitation Climatology Project). The horizontal resolution is about 80 km and the time frequency is 3-hourly. ERA-Interim/Land includes a number of parameterization improvements in the land surface scheme with respect to the original ERA-Interim data set, which makes it more suitable for climate studies involving land water resources. The quality of ERA-Interim/Land is assessed by comparing with ground-based and remote sensing observations. In particular, estimates of soil moisture, snow depth, surface albedo, turbulent latent and sensible fluxes, and river discharges are verified against a large number of site measurements. ERA-Interim/Land provides a global integrated and coherent estimate of soil moisture and snow water equivalent, which can also be used for the initialization of numerical weather prediction and climate models.
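The following is a minimal sketch, under assumed simplifications, of the monthly precipitation adjustment idea mentioned above: rescaling a 3-hourly precipitation series so that each month's total matches a reference monthly total (e.g., GPCP v2.1). It is not the ECMWF processing chain; all data and names are illustrative.

# Minimal sketch (an assumption about the general approach, not ECMWF code):
# rescale 3-hourly precipitation so monthly totals match a reference product.
import numpy as np
import pandas as pd

def rescale_to_monthly(p_3h: pd.Series, monthly_ref: pd.Series) -> pd.Series:
    """Multiply each month's 3-hourly values by (reference total / model total)."""
    model_monthly = p_3h.resample("MS").sum()
    ratio = (monthly_ref / model_monthly).replace([np.inf, -np.inf], 1.0).fillna(1.0)
    # Broadcast each month's ratio back onto that month's 3-hourly time steps.
    return p_3h * ratio.reindex(p_3h.index, method="ffill")

# Synthetic example: one year of 3-hourly precipitation (mm per 3 h).
t = pd.date_range("2000-01-01", "2000-12-31 21:00", freq="3h")
rng = np.random.default_rng(1)
p = pd.Series(rng.gamma(0.2, 1.5, len(t)), index=t)
ref = p.resample("MS").sum() * 1.1          # pretend the reference is 10 % wetter
adjusted = rescale_to_monthly(p, ref)
print(adjusted.resample("MS").sum().round(1).head(3))

The key design point is that sub-daily temporal structure is preserved while monthly totals are constrained to the gauge-informed reference.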
Uncertainty analysis of models has received increasing attention over the last two decades in water resources research. However, a significant part of the community is still reluctant to embrace the estimation of uncertainty in hydrological and hydraulic modeling. In this paper, we summarize and explore seven common arguments: uncertainty analysis is not necessary given physically realistic models; uncertainty analysis cannot be used in hydrological and hydraulic hypothesis testing; uncertainty (probability) distributions cannot be understood by policy makers and the public; uncertainty analysis cannot be incorporated into the decision-making process; uncertainty analysis is too subjective; uncertainty analysis is too difficult to perform; uncertainty does not really matter in making the final decision. We argue that none of the arguments rehearsed against uncertainty analysis is, in the end, tenable. Moreover, we suggest that one reason why the application of uncertainty analysis is not a normal and expected part of modeling practice is that mature guidance on methods and applications does not exist. The paper concludes by suggesting that a Code of Practice is needed as a way of formalizing such guidance. Citation: Pappenberger, F., and K. J. Beven (2006), Ignorance is bliss: Or seven reasons not to use uncertainty analysis, Water Resour. Res., 42, W05302.
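To counter the "too difficult to perform" argument concretely, here is a toy sketch of a Monte Carlo, GLUE-style uncertainty analysis. It is our illustration under assumed settings (a trivial one-parameter stand-in model, an arbitrary efficiency threshold), not the paper's method.

# Toy sketch (illustrative assumption, not the paper's method): sample parameters,
# keep "behavioural" sets whose efficiency exceeds a threshold, report the spread.
import numpy as np

rng = np.random.default_rng(42)
obs = np.array([2.0, 3.5, 5.0, 4.0, 3.0])            # hypothetical observed discharge

def model(k: float) -> np.ndarray:
    """Trivial stand-in for a hydrological model with one parameter k."""
    return k * np.array([1.0, 1.8, 2.4, 2.0, 1.5])

def nse(sim: np.ndarray, obs: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

samples = rng.uniform(0.5, 4.0, 5000)                 # assumed prior range for k
behavioural = [k for k in samples if nse(model(k), obs) > 0.5]
preds = np.array([model(k) for k in behavioural])
lo, hi = np.percentile(preds, [5, 95], axis=0)
print(f"{len(behavioural)} behavioural sets; 90% band at t=2: [{lo[2]:.2f}, {hi[2]:.2f}]")

Even this minimal recipe yields prediction bounds that can be communicated alongside a single "best" simulation.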