Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between goodness of fit to the available data and model complexity. The procedure requires determining the Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. Computing this integral is highly challenging because its dimension equals the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) exact and fast analytical solutions are limited by strong assumptions; (2) numerical evaluation quickly becomes infeasible for expensive models; and (3) approximations known as information criteria (ICs), such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively), yield contradictory results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simple synthetic example for which an exact analytical solution exists in some scenarios. In more challenging scenarios, we use a brute-force Monte Carlo integration method as the reference. We continue this analysis with a real-world application of hydrological model selection. This is a first benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
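To make the brute-force reference method concrete, the sketch below estimates BME by averaging the likelihood over samples drawn from the prior, BME ≈ (1/N) Σ p(D|θ_i). The toy linear model, the prior, and the noise level are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch of brute-force Monte Carlo integration of Bayesian model
# evidence (BME). The toy linear model and all numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observations from a "true" linear model y = 2x + noise
x = np.linspace(0.0, 1.0, 20)
sigma = 0.1                                   # known measurement error std
y_obs = 2.0 * x + rng.normal(0.0, sigma, x.size)

def log_likelihood(theta):
    """Gaussian log-likelihood of the data under slope parameter theta."""
    resid = y_obs - theta * x
    return (-0.5 * np.sum((resid / sigma) ** 2)
            - x.size * np.log(sigma * np.sqrt(2.0 * np.pi)))

# Brute-force MC: BME = E_prior[ p(D | theta) ], i.e., draw parameters from
# the prior and average the likelihood. Log-sum-exp keeps the mean stable.
n_samples = 100_000
theta_prior = rng.normal(0.0, 5.0, n_samples)  # assumed N(0, 5^2) prior
log_l = np.array([log_likelihood(t) for t in theta_prior])
log_bme = np.logaddexp.reduce(log_l) - np.log(n_samples)
print(f"log BME estimate: {log_bme:.2f}")
```

The same estimator applied to each competing model yields the evidences that Bayes' theorem turns into posterior model weights.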
Ensemble Kalman filters (EnKFs) are a successful tool for estimating state variables in atmospheric and oceanic sciences. Recent research has prepared the EnKF for parameter estimation in groundwater applications. EnKFs are optimal in the sense of Bayesian updating only if all involved variables are multivariate Gaussian. Subsurface flow and transport state variables, however, generally do not show Gaussian dependence on hydraulic log conductivity or among each other, even if log conductivity is multi-Gaussian. To improve EnKFs in this context, we apply nonlinear, monotonic transformations to the observed states, rendering them Gaussian (Gaussian anamorphosis, GA). Similar ideas have recently been presented by Béal et al. (2010) in the context of state estimation. Our work transfers and adapts this methodology to parameter estimation. Additionally, we address the treatment of measurement errors in the transformation and provide several multivariate analysis tools to evaluate the expected usefulness of GA beforehand. For illustration, we present a first application of an EnKF to parameter estimation from 3-D hydraulic tomography in multi-Gaussian log conductivity fields. Results show that (1) GA achieves an implicit pseudolinearization of drawdown data as a function of log conductivity and (2) this makes both parameter identification and prediction of flow and transport more accurate. Combining EnKFs with GA yields a computationally efficient tool for nonlinear inversion of data with improved accuracy. This is an attractive benefit, given that linearization-free methods such as particle filters are computationally far more demanding.
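The sketch below shows one common way to implement Gaussian anamorphosis: an empirical normal-score transform that maps each value through its empirical CDF to a standard-normal score. The rank-based variant and the lognormal "drawdown-like" test data are assumptions for illustration, not necessarily the exact transform or error treatment used in the paper.

```python
# Sketch of Gaussian anamorphosis (empirical normal-score transform) for an
# ensemble of simulated observations. All names and data are illustrative.
import numpy as np
from scipy import stats

def gaussian_anamorphosis(values):
    """Map samples to standard-normal scores via their empirical CDF."""
    n = values.size
    ranks = stats.rankdata(values)        # monotonic; ties get average ranks
    u = ranks / (n + 1.0)                 # empirical CDF values in (0, 1)
    return stats.norm.ppf(u)              # inverse standard-normal CDF

# Example: a skewed (lognormal) ensemble becomes approximately Gaussian
rng = np.random.default_rng(1)
drawdown = rng.lognormal(mean=0.0, sigma=1.0, size=500)
z = gaussian_anamorphosis(drawdown)
print(f"skewness before: {stats.skew(drawdown):.2f}, "
      f"after: {stats.skew(z):.2f}")
```

Because the mapping is monotonic, it preserves ranks while reshaping the marginal distribution, which is what lets the EnKF update operate on (closer to) Gaussian variables.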
A Bayesian model averaging (BMA) framework is presented to evaluate the worth of different observation types and experimental design options for (1) more confidence in model selection and (2) increased predictive reliability. These two modeling tasks are handled separately because model selection aims at identifying the most appropriate model with respect to a given calibration data set, while predictive reliability aims at reducing uncertainty in model predictions by constraining the plausible range of both models and model parameters. For that purpose, we pursue an optimal design of measurement framework that is based on BMA and that considers uncertainty in parameters, measurements, and model structures. We apply this framework to select between four crop models (the vegetation components of CERES, SUCROS, GECROS, and SPASS), which are coupled to identical routines for simulating soil carbon and nitrogen turnover, soil heat and nitrogen transport, and soil water movement. An ensemble of parameter realizations was generated for each model using Monte Carlo simulation. We assess each model's plausibility by determining its posterior weight, which quantifies the probability that the model generated a given experimental data set. Several BMA analyses were conducted for different data packages with measurements of soil moisture, evapotranspiration (ETa), and leaf area index (LAI). The posterior weights resulting from the different BMA runs were compared to the weight distribution of a reference run with all data types to investigate the utility of different data packages and monitoring design options in identifying the most appropriate model in the ensemble. We found that different (combinations of) data types support different models and that none of the four crop models outperforms all others under all data scenarios. The best model discrimination was observed for those data where the competing models disagree the most. The data worth for reducing prediction uncertainty depends on the prediction to be made: LAI data have the highest utility for predicting ETa, while soil moisture data are better suited for predicting soil water drainage. Our study illustrates that BMA provides an objective framework for data worth analysis with respect to both model discrimination and model calibration for a wide range of applications.
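The core weighting step can be sketched as follows: posterior model weights follow from Bayes' theorem as prior model probability times model evidence, normalized over the ensemble. The model names match the abstract, but the log-evidence values and the uniform prior are hypothetical placeholders, not results from the study.

```python
# Sketch of turning per-model evidences into BMA posterior weights.
# Log-evidence values and the uniform prior are illustrative assumptions.
import numpy as np

models = ["CERES", "SUCROS", "GECROS", "SPASS"]
log_evidence = np.array([-120.4, -118.9, -125.1, -119.6])  # placeholders
log_prior = np.log(np.full(len(models), 0.25))             # uniform prior

# Bayes' theorem in log space, normalized with log-sum-exp for stability
log_post = log_prior + log_evidence
log_post -= np.logaddexp.reduce(log_post)
weights = np.exp(log_post)

for m, w in zip(models, weights):
    print(f"{m}: posterior weight {w:.3f}")
```

Repeating this computation for each data package (soil moisture, ETa, LAI, and their combinations) and comparing the resulting weight distributions against the all-data reference run is the data worth analysis described above.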
Bayesian model averaging (BMA) ranks the plausibility of alternative conceptual models according to Bayes' theorem. A prior belief about each model's adequacy is updated to a posterior model probability based on the skill to reproduce observed data and on the principle of parsimony. The posterior model probabilities are then used as model weights for model ranking, selection, or averaging. Despite the statistically rigorous BMA procedure, model weights can become uncertain quantities due to measurement noise in the calibration data set or due to uncertainty in model input. Uncertain weights may in turn compromise the reliability of BMA results. We present a new statistical concept to investigate this weighting uncertainty and thus to assess the significance of model weights and the confidence in model ranking. Our concept is to resample the uncertain input or output data and then to analyze the induced variability in model weights. In the special case of weighting uncertainty due to measurement noise in the calibration data set, we interpret statistics of Bayesian model evidence to assess the distance of a model's performance from the theoretical upper limit. To illustrate our suggested approach, we investigate the reliability of soil-plant model selection, following up on a study by Wöhling et al. (2015). Results show that the BMA routine should be equipped with our suggested upgrade to (1) reveal the significant but otherwise undetected impact of measurement noise on model ranking results and (2) decide whether the considered set of models should be extended with better-performing alternatives.
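A minimal sketch of the resampling concept, under the assumption of Gaussian measurement noise and two hypothetical competing models: perturb the calibration data with fresh noise realizations, recompute the BMA weights for each realization, and inspect the induced spread. Everything here is an illustrative stand-in, not the study's actual models or noise model.

```python
# Sketch of resampling noisy calibration data to quantify the induced
# variability in BMA model weights. Models and numbers are assumptions.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 15)
sigma = 0.15                               # assumed measurement noise std
y_obs = np.sin(2.0 * x) + rng.normal(0.0, sigma, x.size)

# Two fixed candidate predictions standing in for calibrated models
predictions = {"M1": np.sin(2.0 * x), "M2": 1.8 * x}

def weights_for(data):
    """Posterior weights from Gaussian likelihoods under equal priors."""
    log_l = np.array([-0.5 * np.sum(((data - p) / sigma) ** 2)
                      for p in predictions.values()])
    return np.exp(log_l - np.logaddexp.reduce(log_l))

# Resample: superimpose fresh noise realizations and recompute the weights
n_rep = 1000
w_samples = np.array([weights_for(y_obs + rng.normal(0.0, sigma, x.size))
                      for _ in range(n_rep)])
print(f"weight of M1: mean {w_samples[:, 0].mean():.2f}, "
      f"std {w_samples[:, 0].std():.2f}")
```

A large spread in the resampled weights signals that the nominal model ranking is not robust against measurement noise, which is exactly the diagnostic the proposed upgrade provides.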