Water transport and the fate of solutes in the vadose zone are among the key processes that soil scientists have long sought to quantify in order to address emerging environmental problems. When investigating these processes with conceptual or deterministic models, we have to provide information about the hydraulic properties of the vadose zone under investigation. One approach to obtaining this information is to sample undisturbed vadose zone cores that are subsequently analyzed in the laboratory. Unfortunately, it has been a major challenge to integrate these small-scale measurements of the soil hydraulic properties into hydrologic models that apply […] (Soil Sci. Soc. Am. J. 72:305-319, doi:10.2136/sssaj2007.017)
Selecting a “best” model among several competing candidate models is a frequently encountered problem in water resources modeling (and other disciplines that employ models). For a modeler, the best model is the one that best fulfills a certain purpose (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods seek the best trade-off between good fit to the data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., the one that employs the appropriate complexity interpretation. We identify four classes of model selection criteria, which seek to achieve high predictive density, low predictive error, high model probability, or the shortest compression of the data. These goals can be pursued with either non-consistent or consistent model selection, and with or without a Bayesian parameter prior. We allocate commonly used criteria to these four classes and analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criterion for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
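As a rough illustration of these selection goals (not part of the abstract above), the sketch below computes two widely used criteria, AIC (targeting low predictive error, non-consistent, no parameter prior) and BIC (consistent, approximating a Bayesian model probability), for hypothetical candidate models. The log-likelihoods, parameter counts, and data size are invented purely for demonstration.

```python
import numpy as np

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: favors low predictive error;
    non-consistent selection, no Bayesian parameter prior."""
    return -2.0 * log_likelihood + 2.0 * n_params

def bic(log_likelihood, n_params, n_data):
    """Bayesian Information Criterion: asymptotically consistent,
    approximates -2 * log(model evidence) for large samples."""
    return -2.0 * log_likelihood + n_params * np.log(n_data)

# Hypothetical candidate models, each summarized by its maximized
# log-likelihood (logL) and number of calibrated parameters (k).
candidates = {
    "model_A": {"logL": -120.3, "k": 4},
    "model_B": {"logL": -118.9, "k": 7},
}
n_data = 150  # number of calibration observations (assumed)

for name, m in candidates.items():
    print(name,
          "AIC=%.1f" % aic(m["logL"], m["k"]),
          "BIC=%.1f" % bic(m["logL"], m["k"], n_data))
```

Because BIC's complexity penalty grows with the sample size while AIC's does not, the two criteria can rank the same candidates differently, which is exactly the kind of divergence the classification scheme is meant to explain.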
A Bayesian model averaging (BMA) framework is presented to evaluate the worth of different observation types and experimental design options for (1) increased confidence in model selection and (2) increased predictive reliability. These two modeling tasks are handled separately because model selection aims at identifying the most appropriate model with respect to a given calibration data set, whereas predictive reliability aims at reducing uncertainty in model predictions by constraining the plausible range of both models and model parameters. For that purpose, we pursue an optimal design-of-measurements framework that is based on BMA and that considers uncertainty in parameters, measurements, and model structures. We apply this framework to select between four crop models (the vegetation components of CERES, SUCROS, GECROS, and SPASS), which are coupled to identical routines for simulating soil carbon and nitrogen turnover, soil heat and nitrogen transport, and soil water movement. An ensemble of parameter realizations was generated for each model using Monte Carlo simulation. We assess each model's plausibility by determining its posterior weight, which quantifies the probability that the model generated a given experimental data set. Several BMA analyses were conducted for different data packages with measurements of soil moisture, evapotranspiration (ETa), and leaf area index (LAI). The posterior weights resulting from the different BMA runs were compared to the weight distribution of a reference run with all data types to investigate the utility of different data packages and monitoring design options in identifying the most appropriate model in the ensemble. We found that different (combinations of) data types support different models and that none of the four crop models outperforms all others under all data scenarios. The best model discrimination was observed for those data for which the competing models disagree the most. The data worth for reducing prediction uncertainty depends on the prediction to be made: LAI data have the highest utility for predicting ETa, while soil moisture data are better for predicting soil water drainage. Our study illustrates that BMA provides an objective framework for data worth analysis with respect to both model discrimination and model calibration for a wide range of applications.
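To make the weighting step concrete (this is an illustrative sketch, not the code used in the study), the example below estimates each model's marginal likelihood by Monte Carlo averaging of Gaussian likelihoods over an ensemble of parameter realizations and then normalizes the evidences into posterior model weights. The synthetic observation series, the measurement error standard deviation, and the two toy "models" are all assumed for demonstration only.

```python
import numpy as np

def log_marginal_likelihood(log_liks):
    """Monte Carlo estimate of log p(data | model): average the likelihood
    over parameter realizations drawn from the prior, done in log space."""
    return np.logaddexp.reduce(log_liks) - np.log(len(log_liks))

def posterior_model_weights(log_evidences, log_priors=None):
    """Posterior model probabilities (BMA weights) from log evidences
    and, optionally, log prior model probabilities."""
    log_evidences = np.asarray(log_evidences, dtype=float)
    if log_priors is None:
        log_priors = np.zeros_like(log_evidences)   # uniform prior over models
    log_post = log_evidences + log_priors
    log_post -= np.logaddexp.reduce(log_post)       # normalize to sum to 1
    return np.exp(log_post)

def gaussian_log_lik(obs, sim, sigma):
    """Log-likelihood of the observations given one simulated realization,
    assuming independent Gaussian measurement errors with std sigma."""
    resid = obs - sim
    return -0.5 * np.sum((resid / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

rng = np.random.default_rng(0)
obs = rng.normal(0.25, 0.02, size=30)   # synthetic "soil moisture" series
sigma = 0.02                            # assumed measurement error

log_evid = []
for bias in (0.0, 0.05):                # two toy candidate models
    # 500 parameter realizations per model, simulated output per realization
    sims = obs + bias + rng.normal(0, 0.03, size=(500, obs.size))
    log_liks = np.array([gaussian_log_lik(obs, s, sigma) for s in sims])
    log_evid.append(log_marginal_likelihood(log_liks))

print("posterior model weights:", posterior_model_weights(log_evid))
```

Repeating such a weight calculation for different data packages (e.g., soil moisture only vs. soil moisture plus LAI) is, in spirit, how the abstract's comparison of posterior weight distributions across monitoring designs can be organized.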