Abstract. As all hydrological models are intrinsically limited hypotheses about the behaviour of catchments, models (which attempt to represent real-world behaviour) will always remain imperfect. To make progress on the long road towards improved models, we need demanding tests, i.e. true crash tests. Efficient testing requires large and varied data sets to develop and assess hydrological models, to ensure their generality, to diagnose their failures, and ultimately to help improve them.
Results indicate that in the two study basins, no single model performed best in all cases, and no distributed model was able to consistently outperform the lumped model benchmark. However, one or more distributed models outperformed the lumped benchmark in many of the analyses. Several calibrated distributed models achieved higher correlation and lower bias than the calibrated lumped benchmark over the calibration, validation, and combined periods. For a number of specific precipitation-runoff events, one calibrated distributed model performed at a level equal to or better than the calibrated lumped benchmark in terms of event-averaged peak and runoff volume error, and three distributed models provided improved peak timing relative to the lumped benchmark. Taken together, calibrated distributed models provided specific improvements over the lumped benchmark in 24% of the model-basin pairs for peak flow, 12% for event runoff volume, and 41% for peak timing. Model calibration improved the performance statistics of nearly all models, lumped and distributed. Analysis of several precipitation-runoff events indicates that distributed models may more accurately simulate the dynamics of the rain/snow line (and the resulting hydrologic conditions) than the lumped benchmark model. Analysis of snow water equivalent (SWE) simulations shows that better results were achieved at higher-elevation observation sites. Although the performance of the distributed models was mixed relative to the lumped benchmark, all calibrated models performed well compared to results in the DMIP 2 Oklahoma basins in terms of run-period correlation and %Bias, and event-averaged peak and runoff error. This finding is noteworthy given that these Sierra Nevada basins present complications such as orographically enhanced precipitation, snow accumulation and melt, rain-on-snow events, and highly variable topography.
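To make the reported statistics concrete, the following is a minimal sketch of how the kinds of event metrics named above (%Bias, event peak flow error, and peak timing error) can be computed from simulated and observed hydrographs. The function names and the toy data are illustrative assumptions, not taken from the DMIP 2 analysis itself.

```python
def percent_bias(sim, obs):
    """%Bias = 100 * (sum(sim) - sum(obs)) / sum(obs), over the run period or event."""
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)

def peak_error_pct(sim, obs):
    """Percent error in the event peak flow."""
    return 100.0 * (max(sim) - max(obs)) / max(obs)

def peak_timing_error(sim, obs):
    """Offset (in time steps) between the simulated and observed peaks."""
    return sim.index(max(sim)) - obs.index(max(obs))

# Toy hourly hydrographs for a single event (arbitrary flow units)
obs = [5, 12, 30, 55, 40, 22, 10]
sim = [4, 10, 25, 50, 52, 30, 12]

print(round(percent_bias(sim, obs), 1))    # event volume bias, percent
print(round(peak_error_pct(sim, obs), 1))  # peak flow error, percent
print(peak_timing_error(sim, obs))         # peak lag, in time steps
```

Event-averaged statistics of this kind are then obtained by averaging the per-event values across the selected precipitation-runoff events in each basin.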
Taken together with the findings from the previous DMIP experiments, these results indicate that, at this point in their evolution, distributed models have the potential to provide valuable information on specific flood events that could complement lumped model simulations.