2019
DOI: 10.31234/osf.io/wgb64
Preprint
A conceptual introduction to Bayesian Model Averaging

Abstract: Many statistical scenarios initially involve several candidate models that describe the data-generating process. Analysis often proceeds by first selecting the best model according to some criterion, and then learning about the parameters of this selected model. Crucially, however, in this approach the parameter estimates are conditioned on the selected model, and any uncertainty about the model selection process is ignored. An alternative is to learn the parameters for all candidate models, and then combine th…
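The abstract's core idea, combining parameter estimates across all candidate models rather than conditioning on a single selected one, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; all numbers are invented, and the model-averaged posterior mean is the weighted sum of each model's conditional estimate, weighted by the posterior model probabilities.

```python
# Hypothetical sketch of Bayesian model averaging (BMA).
# Posterior model probabilities P(M_k | data) for three candidate models
# (made-up values for illustration only).
post_model_probs = [0.60, 0.30, 0.10]

# Posterior mean of the parameter under each model, E[theta | data, M_k]
conditional_means = [0.50, 0.45, 0.80]

# Model-averaged estimate:
#   E[theta | data] = sum_k P(M_k | data) * E[theta | data, M_k]
bma_mean = sum(p * m for p, m in zip(post_model_probs, conditional_means))
print(round(bma_mean, 4))  # 0.6*0.50 + 0.3*0.45 + 0.1*0.80 = 0.515
```

Note how the averaged estimate (0.515) is pulled toward the best-supported model's estimate but still reflects the uncertainty about which model is correct, which a select-then-estimate approach would discard.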

Cited by 25 publications (31 citation statements)
References 36 publications
“…We further used the Bayesian model averaging (BMA) method (Hinne, Gronau, van den Bergh, & Wagenmakers, 2020 ) to weigh evidence for the interaction between cue condition and task load. This method weighs evidence for a particular effect across models that include the effect of interest against models that are stripped of the effect of interest.…”
Section: Methodsmentioning
confidence: 99%
“…Bayesian Model Averaging is closely related to linear pooling methods because both methods provide a linear combination of the elicitation results. However, while the weights can be determined independently of the data in linear pooling, they are defined as the Bayesian posterior model probabilities in Bayesian model averaging (Hinne et al., 2019). Bayesian…”
Section: Mathematical Aggregationmentioning
confidence: 99%
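The contrast drawn in the statement above, data-independent weights in linear pooling versus posterior model probabilities in BMA, can be made concrete. The sketch below, with invented Bayes factor values, derives the BMA weights from Bayes factors relative to a reference model and a uniform prior over models.

```python
# Hypothetical sketch: posterior model probabilities as BMA weights.
# Bayes factors of each model against model 1 (invented for illustration).
bayes_factors = [1.0, 4.0, 0.5]
prior_probs = [1 / 3, 1 / 3, 1 / 3]  # uniform prior over the three models

# Unnormalized posterior probability of model k: P(M_k) * BF_k1
unnorm = [p * bf for p, bf in zip(prior_probs, bayes_factors)]
total = sum(unnorm)
post_probs = [u / total for u in unnorm]
print([round(p, 4) for p in post_probs])  # [0.1818, 0.7273, 0.0909]
```

Unlike linear-pooling weights, which an analyst could fix before seeing any data, these weights change whenever the Bayes factors (and hence the data) change.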
“…To test for specific fixed effects in linear mixed models, we obtained p-values using the Kenward-Roger method, and BFs through Bayesian model averaging by estimating the change from prior to posterior inclusion odds (inclusion BF). In other words, this model-averaged BF indicates how much more likely the data are under model variants that include a given fixed effect, compared to model variants that exclude the fixed effect (Hinne et al., 2020).…”
Section: Methodsmentioning
confidence: 99%
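The inclusion Bayes factor described in the statement above, the change from prior to posterior inclusion odds for a fixed effect, can be sketched directly. All model probabilities below are invented for illustration; the effect's inclusion probability is summed over the models that contain it, on both the prior and posterior side.

```python
# Hypothetical sketch of an inclusion Bayes factor.
# (prior prob, posterior prob, includes_effect) for four candidate models;
# all probabilities are made up for illustration.
models = [
    (0.25, 0.40, True),
    (0.25, 0.30, True),
    (0.25, 0.20, False),
    (0.25, 0.10, False),
]

prior_inc = sum(pr for pr, _, inc in models if inc)   # 0.50
post_inc = sum(po for _, po, inc in models if inc)    # 0.70

prior_odds = prior_inc / (1 - prior_inc)              # 1.0
post_odds = post_inc / (1 - post_inc)                 # 0.7 / 0.3

# Inclusion BF: how much the data shifted the odds in favor of the effect.
inclusion_bf = post_odds / prior_odds
print(round(inclusion_bf, 4))  # 2.3333
```

An inclusion BF above 1 means the data raised the odds of models containing the effect relative to those stripped of it, which is exactly the model-averaged comparison the quoted methods section describes.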