2020
DOI: 10.1177/2515245919898657

A Conceptual Introduction to Bayesian Model Averaging

Abstract: Many statistical scenarios initially involve several candidate models that describe the data-generating process. Analysis often proceeds by first selecting the best model according to some criterion and then learning about the parameters of this selected model. Crucially, however, in this approach the parameter estimates are conditioned on the selected model, and any uncertainty about the model-selection process is ignored. An alternative is to learn the parameters for all candidate models and then combine the…

Cited by 234 publications (219 citation statements)
References 65 publications
“…However, only presenting conditional posterior distributions can potentially be misleading in cases where the null hypothesis remains relatively plausible after seeing the data. A general benefit of Bayesian analysis is that one can compute an unconditional posterior distribution for the parameter using model averaging (e.g., Clyde, Ghosh, & Littman, 2011; Hinne, Gronau, van den Bergh, & Wagenmakers, 2020). An unconditional posterior distribution for a parameter accounts for both the uncertainty about the parameter within any one model and the uncertainty about the model itself, providing an estimate of the parameter that is a compromise between the candidate models (for more details see Hoeting, Madigan, Raftery, & Volinsky, 1999).…”
Section: Stage 3: Interpreting the Results (mentioning)
confidence: 99%
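The unconditional posterior described above mixes each model's conditional posterior in proportion to the posterior model probabilities. A minimal sketch in Python, assuming a hypothetical two-model comparison (a point-null model and an alternative whose conditional posterior is Normal; all numbers are illustrative, not from the paper):

```python
import numpy as np

# Illustrative posterior model probabilities, e.g. obtained from Bayes factors
p_null, p_alt = 0.3, 0.7

rng = np.random.default_rng(0)
n = 100_000

# Conditional posterior draws under each model:
# under the null, the parameter is fixed at 0 (a spike);
# under the alternative, assume a Normal(0.4, 0.1^2) conditional posterior.
draws_null = np.zeros(n)
draws_alt = rng.normal(0.4, 0.1, size=n)

# Model-averaged (unconditional) posterior: mix the conditional draws
# in proportion to the posterior model probabilities.
pick_alt = rng.random(n) < p_alt
unconditional = np.where(pick_alt, draws_alt, draws_null)

# The unconditional mean is the probability-weighted average of the
# conditional means (0.3 * 0 + 0.7 * 0.4 ≈ 0.28).
print(unconditional.mean())
```

The resulting distribution has mass both at zero and around the alternative's estimate, which is exactly the "compromise between the candidate models" the quoted passage describes.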
“…Instead, we weighted the results from both models according to their posterior probability, thus fully acknowledging the uncertainty with respect to the choice between a fixed-effect or random-effects model [28,29]. To conduct a Bayesian meta-analysis, prior distributions were assigned to the model parameters [28]. For the standardized effect size, we used a default, zero-centered Cauchy distribution with scale parameter equal to 1…”
Section: Model-averaged Bayesian Meta-analysis (mentioning)
confidence: 99%
“…A general description of Bayesian testing, model averaging, and reporting is provided by van Doorn et al. (2020), Jeffreys (1939), Ly et al. (2016a, 2016b), and Ly et al. (2020), whereas a more specialized account of Bayesian linear regression is given by Bayarri et al. (2012), Li and Clyde (2018), Liang et al. (2008), Rouder and Morey (2012), and Zellner and Siow (1980). For more on Bayesian model averaging we refer to Hinne et al. (2020), Hoeting et al. (1999), Scott and Berger (2006, 2020a, 2020b), and Wasserman (2000). Key software implementations in R are the…”
Section: Further Information (mentioning)
confidence: 99%