2014
DOI: 10.1080/01621459.2013.869223
EMVS: The EM Approach to Bayesian Variable Selection


Cited by 233 publications (310 citation statements)
References 25 publications
“…The sslasso retains the advantages of two popular methods for high-dimensional data analysis (V. Ročková and E. I. George, unpublished results): Bayesian variable selection (George and McCulloch 1993, 1997; Chipman 1996; Chipman et al. 2001; Ročková and George 2014) and the penalized lasso (Tibshirani 1996, 1997; Hastie et al. 2015), and it bridges these two methods into one unifying framework. Similar to the lasso, the proposed method can shrink many coefficients exactly to zero, thus automatically achieving variable selection and yielding easily interpretable results.…”
Section: Discussion
confidence: 99%
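The "shrink exactly to zero" behavior this statement attributes to the lasso side of sslasso comes from the soft-thresholding operator used in coordinate-wise lasso solvers. A minimal illustrative sketch (the function name and threshold value are ours, not from the cited papers):

```python
import numpy as np

def soft_threshold(z, lam):
    """Lasso-style soft-thresholding: shrink z toward zero by lam,
    setting any entry with |z| <= lam exactly to zero."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Entries at or below the threshold become exact zeros, which is
# what makes the variable selection automatic and interpretable.
out = soft_threshold(np.array([2.5, -0.3, 1.0]), lam=1.0)
```

This exact-zero property is precisely what the mixture-normal spike-and-slab priors discussed below lack.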
“…The mixture normal priors cannot shrink coefficients exactly to zero, and thus cannot automatically perform variable selection. Ročková and George (2014) developed an expectation-maximization (EM) algorithm to fit large-scale linear models with mixture normal priors.…”
confidence: 99%
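The EM iteration this statement refers to alternates a closed-form E-step (posterior inclusion probabilities under a normal spike and a normal slab) with a ridge-like M-step. The sketch below is a minimal reading of that scheme; the function name, defaults, and the fixed `sigma2` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def _normal_pdf(x, sd):
    return np.exp(-0.5 * (x / sd) ** 2) / (np.sqrt(2.0 * np.pi) * sd)

def emvs_sketch(X, y, v0=0.01, v1=10.0, sigma2=1.0, theta=0.5, n_iter=50):
    """Illustrative EM loop for a normal spike (variance sigma2*v0) and
    normal slab (variance sigma2*v1). sigma2 is held fixed here for
    simplicity, though the full algorithm also updates it."""
    n, p = X.shape
    beta = np.zeros(p)
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        # E-step: posterior probability each coefficient came from the slab
        slab = theta * _normal_pdf(beta, np.sqrt(sigma2 * v1))
        spike = (1.0 - theta) * _normal_pdf(beta, np.sqrt(sigma2 * v0))
        pstar = slab / (slab + spike)
        # M-step: generalized ridge with coefficient-specific penalties
        dstar = pstar / v1 + (1.0 - pstar) / v0
        beta = np.linalg.solve(XtX + sigma2 * np.diag(dstar), Xty)
        theta = pstar.mean()  # update the prior inclusion weight
    return beta, pstar
```

Note that, consistent with the statement above, `beta` here is never exactly zero: negligible coefficients are heavily shrunk and flagged by low `pstar`, and a final thresholding step (e.g. `pstar >= 0.5`) performs the actual selection.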
“…Inducing a variant of "selective shrinkage" (Rao, 2005, 2011; Rockova, 2015), posterior modes under the SSL prior are adaptively thresholded, with smaller values shrunk to exactly zero. This is in sharp contrast to spike-and-slab priors with a Gaussian spike (George and McCulloch, 1993; Rockova and George, 2014), whose non-sparse posterior modes must be thresholded for variable selection. The exact sparsity here is crucial for anchoring on interpretable factor orientations and thus alleviating identifiability issues.…”
Section: Infinite Factor Model With Spike-and-Slab Lasso
confidence: 99%
“…It is in this context, when there are large numbers of parameters relative to the sample size, that spike-and-slab priors have become popular. The seminal article for assessing covariates in this context is George and McCulloch (1993), and recent conceptual and computational advances, such as those from Scott and Berger (2010) and Ročková and George (2014), make the approach feasible in increasingly large big-data contexts.…”
Section: The Potential Of Spike-and-Slab Models In Psychology
confidence: 99%