2010
DOI: 10.1214/10-aos792
Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem

Abstract: This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. Our first goal is to clarify when, and how, multiplicity correction happens automatically in Bayesian analysis, and to distinguish this correction from the Bayesian Ockham's-razor effect. Our second goal is to contrast empirical-Bayes and fully Bayesian approaches to variable selection through examples, theoretical results and simulations. Considerable differences between the two approaches …

Cited by 496 publications (440 citation statements)
References 29 publications
“…There are different ways to set the model priors. For example, we can use a binomial prior for the number of causal SNPs, which assumes that the probability of each SNP being causal is independent and equal (Guan and Stephens 2011; Hormozdiari et al. 2014) … where #(|M_c|) is the total number of models with model size |M_c|. We can also use a Beta-binomial distribution to introduce uncertainty on p, as discussed in Scott and Berger (2010). Different implications of the binomial prior and the Beta-binomial prior were also discussed and summarized in Table 2 of Wilson et al. (2010).…”
Section: Priors
confidence: 99%
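The contrast between the two priors in this excerpt can be sketched numerically. The snippet below is an illustrative Python sketch, not code from any of the cited papers: it computes the prior probability of one specific model containing k of p variables under a fixed inclusion probability (the binomial prior) and under a Beta-binomial prior in the style of Scott and Berger (2010), where the inclusion probability is integrated out against a Beta(a, b) distribution.

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    # Log of the Beta function B(a, b).
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def binomial_model_prior(k, p, pi=0.5):
    # Prior probability of one specific model with k of p variables
    # included, under a fixed inclusion probability pi.
    return pi**k * (1 - pi)**(p - k)

def betabinom_model_prior(k, p, a=1.0, b=1.0):
    # Same model, but with pi ~ Beta(a, b) integrated out:
    # P(model) = B(a + k, b + p - k) / B(a, b).
    return exp(log_beta(a + k, b + p - k) - log_beta(a, b))

p = 20
for k in (0, 1, 5, 10):
    fixed = binomial_model_prior(k, p)
    bb = betabinom_model_prior(k, p)
    # Multiplying by comb(p, k) gives the prior mass on model *size* k.
    print(k, comb(p, k) * fixed, comb(p, k) * bb)
```

With a = b = 1 the induced prior on model size is uniform, 1/(p + 1), so each additional candidate variable dilutes the prior mass available to any single model of a given size. This dilution is the automatic multiplicity correction the excerpt refers to, whereas the fixed-pi prior piles its mass on models of size near p/2 regardless of p.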
“…When all SNPs are included in the set, the probability r is the posterior probability that there is at least one causal SNP in the region. Because direct maximization of r by selecting k SNPs from p SNPs is computationally prohibitive, we follow the same stepwise selection as in Hormozdiari et al. … where #(|M_c|) is the total number of models with model size |M_c|. We can also use a Beta-binomial distribution to introduce uncertainty on p, as discussed in Scott and Berger (2010). Different implications of the binomial prior and the Beta-binomial prior were also discussed and summarized in Table 2 of Wilson et al. (2010).…”
confidence: 99%
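The stepwise selection mentioned here can be illustrated with a generic greedy sketch. This is not the actual algorithm of Hormozdiari et al.; the score function and names are hypothetical stand-ins for the posterior quantity being maximized, shown only to convey why greedy growth avoids scoring all comb(p, k) subsets.

```python
def greedy_select(score, candidates, k):
    # Generic greedy forward selection: instead of evaluating every
    # size-k subset, grow the chosen set one element at a time,
    # adding whichever remaining candidate most improves the score.
    chosen = []
    remaining = set(candidates)
    for _ in range(k):
        best = max(remaining, key=lambda c: score(chosen + [c]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy score: number of distinct "signals" covered, a stand-in for the
# posterior probability r that the selected set captures the causal SNPs.
signals = {0: "a", 1: "a", 2: "b", 3: "c"}
score = lambda s: len({signals[i] for i in s})
print(greedy_select(score, range(4), 2))
```

Greedy selection evaluates only O(pk) candidate sets rather than comb(p, k), at the cost of no optimality guarantee in general.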
“…This is a more general model, which subsumes a fixed π as a limiting case for α_π β_π / ((α_π + β_π)² (α_π + β_π + 1)) → 0, and has also been shown to act as a multiplicity correction in Scott and Berger (2010).…”
Section: Spike and Slab Priors
confidence: 99%
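The limiting-case condition quoted above is simply the variance of a Beta(α_π, β_π) distribution going to zero. A minimal sketch, assuming the mean is held fixed while the concentration grows (the parameter names are mine):

```python
def beta_mean_var(a, b):
    # Mean and variance of a Beta(a, b) distribution.
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Hold the mean at pi = 0.3 and increase the concentration c:
# the variance shrinks toward 0, recovering a fixed pi in the limit.
pi = 0.3
for c in (1, 100, 10000):
    m, v = beta_mean_var(c * pi, c * (1 - pi))
    print(c, m, v)
```

As c grows the prior on the inclusion probability collapses onto the point pi, so the spike-and-slab model with a Beta hyperprior degenerates to the fixed-π model.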
“…(2010) show that, in contrast to a posterior density for σ²_εj obtained when imposing a standard Inverse-Gamma prior on the variance parameter σ²_εj, the posterior density of σ_εj is not very sensitive to the hyperparameters of the Gaussian distribution and is not pushed away from zero when σ²_εj = 0. The reason that we check the robustness of our results to alternative priors for p₀ is that, as noted by Scott and Berger (2010), the prior choice p₀ = 0.5 does not provide multiplicity control for Bayesian variable (in our case, factor) selection. When the number of possible variables is large and each of the binary indicators has a prior probability p₀ = 0.5 of being equal to one, the fraction of selected variables will very likely be around 0.5.…”
Section: Priors For Model Selection
confidence: 88%
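The closing point, that with p₀ = 0.5 the selected fraction concentrates near 0.5 as the number of variables grows, follows from binomial concentration and can be checked directly. An illustrative Python sketch (function name and tolerance are mine, not from the cited papers):

```python
from math import comb

def prob_fraction_near_half(p, tol=0.05):
    # Prior probability that the fraction of included variables lies
    # within tol of 0.5, under p independent inclusion indicators
    # each equal to one with probability 0.5.
    lo = int((0.5 - tol) * p)
    hi = int((0.5 + tol) * p)
    return sum(comb(p, k) for k in range(lo, hi + 1)) / 2**p

for p in (20, 200, 2000):
    print(p, prob_fraction_near_half(p))
```

As p grows this probability approaches 1: the fixed p₀ = 0.5 prior essentially dictates that about half the variables are selected, which is why it offers no multiplicity control and why a hyperprior on the inclusion probability is checked instead.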