2019
DOI: 10.1080/01621459.2019.1611140

Comparing and Weighting Imperfect Models Using D-Probabilities

Abstract: We propose a new approach for assigning weights to models using a divergence-based method (D-probabilities), relying on evaluating parametric models relative to a nonparametric Bayesian reference using Kullback-Leibler divergence. D-probabilities are useful in goodness-of-fit assessments, in comparing imperfect models, and in providing model weights to be used in model aggregation. D-probabilities avoid some of the disadvantages of Bayesian model probabilities, such as large sensitivity to prior choice, and ten…
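As a rough illustration of the weighting idea the abstract describes, here is a minimal Python sketch. The function name and the exp(-n · KL) weighting form are assumptions for illustration, not the paper's exact estimator; it only shows how estimated divergences from a reference could be turned into normalized model weights.

```python
import numpy as np

def d_probabilities(kl_estimates, n):
    """Hypothetical sketch: map estimated KL divergences of each
    parametric model from a nonparametric reference to normalized
    model weights via exp(-n * KL) (an assumed form, not the
    paper's exact definition)."""
    kl = np.asarray(kl_estimates, dtype=float)
    log_w = -n * kl
    log_w -= log_w.max()      # stabilize before exponentiating
    w = np.exp(log_w)
    return w / w.sum()

# Example: three candidate models; the one closest in KL to the
# nonparametric reference receives almost all of the weight.
print(d_probabilities([0.02, 0.10, 0.50], n=100))
```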

Cited by 12 publications (8 citation statements) · References 51 publications
“…Probabilistic averaging of models (PAM) is a model averaging strategy that is able to capture this predictive uncertainty. PAM was introduced by Akaike in 1978 and is also known as 'model averaging' [31,32]. However, since different notions of model averaging have been established in the literature [17,18,33–35], we refer to the averaging of predictive distributions as PAM.…”
Section: Probabilistic Averaging of Models (mentioning)
confidence: 99%
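Where the quoted passage defines PAM as the averaging of predictive distributions, a minimal sketch of that computation follows. The Gaussian predictive densities, weights, and numbers are all hypothetical stand-ins, assumed only to make the mixture concrete.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical per-model Gaussian predictive densities and fixed weights;
# the averaged predictive density is their weighted mixture.
means = np.array([0.0, 0.5, 1.2])
sds = np.array([1.0, 0.8, 1.5])
weights = np.array([0.6, 0.3, 0.1])   # nonnegative, sum to 1

def averaged_predictive(y):
    """Weighted average of the three models' predictive densities at y."""
    return float(np.sum(weights * norm.pdf(y, loc=means, scale=sds)))

print(averaged_predictive(0.25))
```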
“…van der Vaart and van Zanten (2009) proposed a fully Bayesian scheme by endowing the standard error σ with a hyperprior, which is supported on a compact interval [a, b] ⊂ (0, ∞) that contains σ₀ with a Lebesgue density bounded away from zero. This approach has been followed by many others (Bhattacharya et al., 2014; Li and Dunson, 2020). de Jonge and van Zanten (2013) showed a Bernstein–von Mises theorem for the marginal posterior of σ, where the prior for σ is relaxed to be supported on (0, ∞).…”
Section: Unknown Error Variance (mentioning)
confidence: 99%
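A toy illustration of the scheme the quote describes: a prior for σ supported on a compact interval [a, b] with density bounded away from zero (here, uniform), with the marginal posterior evaluated on a grid. The data, interval endpoints, and grid resolution are assumptions made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=200)     # toy data with true sigma_0 = 1

a, b = 0.2, 5.0                        # compact interval [a, b] containing sigma_0
sigma = np.linspace(a, b, 500)

# Zero-mean Gaussian log-likelihood on the grid; a uniform prior on [a, b]
# has a Lebesgue density bounded away from zero, as in the quoted scheme.
loglik = -len(y) * np.log(sigma) - np.sum(y**2) / (2 * sigma**2)
post = np.exp(loglik - loglik.max())
post /= post.sum() * (sigma[1] - sigma[0])   # normalize to a density

print(sigma[np.argmax(post)])          # posterior mode, close to sigma_0 = 1
```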
“…Moreover, George Box's famous adage "all models are wrong" may then be invoked to question the use of these "M-closed tools" in any practical application. For instance, Li and Dunson (2016) argue that "Philosophically, in order to interpret pr(M_j | y^(n)) as a model probability, one must rely on the (arguably always flawed) assumption that one of the models in the list M is exactly true, known as the M-closed case."…”
Section: Loo Is Motivated By An Illusory Distinction Between M-open T… (mentioning)
confidence: 99%