2014
DOI: 10.1002/2014wr016062
Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

Abstract: Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of …
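The BME integral described in the abstract can be approximated in its simplest form by brute-force Monte Carlo: average the likelihood over samples drawn from the prior. A minimal sketch, assuming an illustrative one-dimensional model with a standard-normal prior and i.i.d. Gaussian measurement errors (not the models from the paper); the log-sum-exp step keeps tiny evidences from underflowing:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta, data, sigma=1.0):
    # i.i.d. Gaussian measurement errors around the model prediction
    # (the "model" here is simply the constant theta -- an assumption).
    resid = data[None, :] - theta[:, None]
    return (-0.5 * np.sum(resid**2, axis=1) / sigma**2
            - 0.5 * len(data) * np.log(2 * np.pi * sigma**2))

def log_bme_mc(data, n_samples=100_000):
    theta = rng.normal(0.0, 1.0, n_samples)   # samples from the N(0, 1) prior
    log_l = log_likelihood(theta, data)
    # log-sum-exp: evidences near 10^-1000 would underflow a plain average
    m = log_l.max()
    return m + np.log(np.mean(np.exp(log_l - m)))

data = np.array([0.1, -0.2, 0.3])
log_bme = log_bme_mc(data)
print(log_bme)
```

As the abstract notes, this estimator degrades quickly as the parameter dimension grows, which is what motivates the nine alternative evaluation schemes compared in the paper.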

Cited by 135 publications (214 citation statements)
References 86 publications (114 reference statements)
“…(3)), computed with respect to each tested conceptual model, are all larger than 10^100. This result is in agreement with the findings by Schöniger et al. (2014): one decisive winning conceptual model is often obtained when using large data sets and small data errors. We also considered the field example described in Section 4.1, but using less data (i.e., n = 224 instead of n = 3248), and we found (results not shown) that: (1) the isotropic multi-Gaussian model is still the winner, (2) all the evidence estimates are much larger (e.g., in the case of the isotropic multi-Gaussian model, the evidence increases from about 10^-1000 to 10^-100), and (3) the Bayes factors are much smaller (e.g., when comparing the multi-Gaussian model with vertical anisotropy and the one with isotropy, the Bayes factor decreases approximately from 10^190 to 10^10).…”
Section: Discussion (supporting)
Confidence: 93%
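Evidence magnitudes like 10^-1000 cannot be represented in ordinary floating point, so Bayes factors of the kind quoted above must be formed in log space. A minimal sketch with made-up log10 evidences, chosen only to reproduce a Bayes factor of 10^190:

```python
# Hypothetical log10-evidence values, for illustration only -- not the
# values from the cited study. Evidences near 10^-1000 underflow floats,
# so we never exponentiate; we subtract logs instead.
log10_evidence = {
    "multi-Gaussian, isotropic": -1000.0,
    "multi-Gaussian, vertical anisotropy": -1190.0,
}

def log10_bayes_factor(log10_ev_a, log10_ev_b):
    """Bayes factor B_ab = p(D|M_a) / p(D|M_b), returned as log10(B_ab)."""
    return log10_ev_a - log10_ev_b

lbf = log10_bayes_factor(
    log10_evidence["multi-Gaussian, isotropic"],
    log10_evidence["multi-Gaussian, vertical anisotropy"],
)
print(lbf)  # 190.0, i.e., a decisive Bayes factor of 10^190
```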
“…For our set-up with small errors and high data and model dimensions, we found that reliable evidence estimation with the BFMC method would need prohibitive computation times. If the assumption of a multi-Gaussian posterior density is fulfilled (a reasonable assumption in our test cases), the LM method should provide reliable evidence estimates (see also the case studies by Schöniger et al. (2014)). This is confirmed in our synthetic study in Section 3 by the strong agreement at low model dimensions between BFMC and LM estimates evaluated around the MAP estimate.…”
Section: Discussion (mentioning)
Confidence: 96%
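The LM (Laplace-Metropolis) estimate mentioned above rests on the Gaussian-posterior assumption: log BME ≈ log p(D|θ*) + log p(θ*) + (d/2) log(2π) + (1/2) log|Σ|, with θ* the MAP estimate and Σ the posterior covariance. A minimal sketch, verified on a case where the Laplace approximation is exact (a one-dimensional Gaussian), not on the groundwater models of the cited study:

```python
import numpy as np

def laplace_log_evidence(log_post_at_map, cov):
    """Laplace approximation to log BME from the unnormalized log-posterior
    at the MAP point and the posterior covariance matrix."""
    d = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    assert sign > 0, "posterior covariance must be positive definite"
    return log_post_at_map + 0.5 * d * np.log(2 * np.pi) + 0.5 * logdet

# Sanity check with a known answer: for the 1-D unnormalized posterior
# exp(-x^2 / 2), the true normalizing integral is sqrt(2*pi), and the
# Laplace approximation is exact because the density is Gaussian.
log_post_at_map = 0.0            # log of exp(0) at the mode x = 0
cov = np.array([[1.0]])          # curvature-based covariance at the mode
le = laplace_log_evidence(log_post_at_map, cov)
print(le)  # 0.5 * log(2*pi) ≈ 0.9189
```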
“…BMA, relying on Bayes' theorem, is a well-known statistical approach to perform quantitative comparisons of competing models [16,17]. The difficulty of BMA lies in the evaluation of a quantity referred to as the 'Bayesian model evidence' (BME), which involves an integral over the whole input space and therefore generally has no analytical expression.…”
Section: Introduction (mentioning)
Confidence: 99%
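Once the BME of each competing model is available, BMA reduces to normalizing evidence-times-prior into posterior model weights. A minimal sketch with made-up log10 evidences for three hypothetical models and equal prior model probabilities, showing that differences of a few orders of magnitude already produce a near-decisive ranking:

```python
import numpy as np

# Made-up log10 BME values for three competing models (illustration only).
log10_bme = np.array([-102.0, -100.0, -105.0])
prior = np.array([1/3, 1/3, 1/3])        # equal prior model probabilities

# Posterior model weight: w_k ∝ BME_k * p(M_k), computed in log space
# and shifted by the maximum before exponentiating for numerical stability.
log_w = log10_bme * np.log(10) + np.log(prior)
log_w -= log_w.max()
weights = np.exp(log_w) / np.exp(log_w).sum()
print(weights.round(4))  # the middle model dominates with weight ~0.99
```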