2016
DOI: 10.1016/j.jmva.2015.09.004

Combining a relaxed EM algorithm with Occam’s razor for Bayesian variable selection in high-dimensional regression

Abstract: We address the problem of Bayesian variable selection for high-dimensional linear regression. We consider a generative model that uses a spike-and-slab-like prior distribution obtained by multiplying a deterministic binary vector, which encodes the sparsity…
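To fix ideas, here is a minimal formalization consistent with the abstract's description of the prior; the notation (z, w, τ, σ) is illustrative and not necessarily the paper's own, and the abstract is truncated before it fully specifies the model.

```latex
% Sketch of a spike-and-slab-like prior of the kind the abstract describes:
% a deterministic binary vector z (the sparsity pattern) multiplies a
% random Gaussian vector w componentwise (illustrative notation, assumed).
\[
  y \mid X, z, w, \sigma^2 \sim \mathcal{N}\!\big(X (z \odot w),\, \sigma^2 I_n\big),
  \qquad
  w \sim \mathcal{N}(0, \tau^2 I_p),
  \qquad
  z \in \{0,1\}^p,
\]
% so z_j = 0 pins the j-th coefficient to the "spike" at zero, while
% z_j = 1 leaves it free to vary in the Gaussian "slab".
```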

Cited by 8 publications (5 citation statements). References 50 publications.
“…While choosing such a data-dependent prior might be disconcerting, it can be seen as an approximation to a fully Bayesian approach that would use a prior distribution for η (MacKay, 1994, Section 6.3). Moreover, it leads to very good empirical and theoretical performance in several contexts, such as linear regression (Cui and George, 2008; Liang et al., 2008; Latouche et al., 2016) or principal component analysis (Bouveyron et al., 2018). In a sense, the empirical Bayes maximization problem is equivalent to performing continuous model selection by contemplating E_d as the model space.…”
Section: Parameter Prior Probabilities and the Jeffreys–Lindley Paradox
confidence: 99%
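To make the empirical-Bayes idea in this statement concrete, here is a minimal sketch, not taken from any of the cited papers: for a Gaussian linear model the marginal likelihood (evidence) of y is available in closed form, and the prior variance η is chosen by maximizing it rather than by placing a prior on it. The function and variable names are ours, purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Empirical Bayes for y = X w + eps, with w ~ N(0, eta * I) and
# eps ~ N(0, sigma2 * I). Marginally, y ~ N(0, C(eta)) with
# C(eta) = sigma2 * I + eta * X X^T, so the log evidence is a
# Gaussian log density that we maximize over eta.

def neg_log_evidence(log_eta, X, y, sigma2=1.0):
    n = len(y)
    C = sigma2 * np.eye(n) + np.exp(log_eta) * X @ X.T
    _, logdet = np.linalg.slogdet(C)
    return 0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
w_true = np.concatenate([rng.standard_normal(3), np.zeros(7)])
y = X @ w_true + rng.standard_normal(50)

res = minimize_scalar(neg_log_evidence, bounds=(-10, 10), method="bounded",
                      args=(X, y))
print("empirical-Bayes prior variance:", np.exp(res.x))
```

Optimizing over log η keeps the search unconstrained on the positive scale, which is why the sketch parameterizes the evidence that way.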
“…In some simple scenarios like linear regression, the Occam factor can be directly linked to the number of parameters (see, e.g., Latouche et al., 2016); this builds a direct bridge with ℓ₀ penalization. However, this is not always the case, and the Occam factor penalty provides a more sensible regularization than those based on the number of parameters (Rasmussen and Ghahramani, 2001).…”
Section: MacKay's Occam Razor Interpretation
confidence: 99%
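The link the statement alludes to is the standard Laplace-approximation reading of the evidence (MacKay's Occam razor); the derivation below is the textbook version, not a result specific to the cited paper.

```latex
% For a model m with k free parameters, MAP estimate \hat\theta, and
% A the Hessian of the negative log posterior at \hat\theta:
\[
  p(y \mid m) \;\approx\;
  \underbrace{p(y \mid \hat\theta, m)}_{\text{best-fit likelihood}}
  \;\times\;
  \underbrace{p(\hat\theta \mid m)\,(2\pi)^{k/2}\,|A|^{-1/2}}_{\text{Occam factor}}.
\]
% With n observations, |A| typically grows like n^k, so
\[
  \log p(y \mid m) \;\approx\;
  \log p(y \mid \hat\theta, m) - \tfrac{k}{2}\log n + O(1),
\]
% a BIC-type penalty in the number of parameters k, i.e. an l_0-type
% penalty when candidate models differ in their number of nonzero
% coefficients.
```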
“…Then, once the variables are ranked, it is possible to optimize the evidence over the path of models. This procedure, first introduced by Latouche et al. in a linear regression context, provides both the number q_k and the list of active variables for the kth class.…”
Section: Model Inference
confidence: 99%
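As a rough illustration of this rank-then-score strategy, here is a sketch under stated assumptions: the ranking rule (absolute marginal correlation), the helper names select_by_evidence_path and log_evidence, and the fixed hyperparameters are all ours, not the cited procedure's.

```python
import numpy as np

# Rank the d variables once, then score each nested model that keeps the
# top q variables by its Gaussian log evidence (same closed form as the
# earlier sketch), and keep the q that maximizes it.

def log_evidence(X, y, eta=1.0, sigma2=1.0):
    n = len(y)
    C = sigma2 * np.eye(n) + eta * X @ X.T
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

def select_by_evidence_path(X, y):
    ranking = np.argsort(-np.abs(X.T @ y))        # rank variables once
    scores = [log_evidence(X[:, ranking[:q]], y)  # score each nested model
              for q in range(1, X.shape[1] + 1)]
    q_best = int(np.argmax(scores)) + 1
    return ranking[:q_best]                       # active-variable indices

rng = np.random.default_rng(1)
X = rng.standard_normal((80, 15))
y = X[:, :4] @ np.array([2.0, -1.5, 1.0, 0.8]) + 0.5 * rng.standard_normal(80)
print("selected variables:", sorted(select_by_evidence_path(X, y)))
```

Because the variables are ranked once, only d nested models need scoring along the path, instead of the 2^d subsets an exhaustive search would require.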
“…In the high-dimensional setting, the Bayesian lasso, the Bayesian adaptive lasso, and the indicator model method, together with Markov chain Monte Carlo (MCMC) algorithms, are widely used to select important variables. For example, see [19] for the Bayesian lasso, [20] for the Bayesian adaptive lasso, and [21,22] for the EM approach in the Bayesian framework. The above-mentioned literature involves the implementation of the standard Gibbs sampler for posterior computation, which is not so scalable for large numbers of fixed-effects components [23].…”
Section: Introduction
confidence: 99%