1998
DOI: 10.1111/1467-9868.00144

Multivariate Bayesian Variable Selection and Prediction

Abstract: The multivariate regression model is considered with p regressors. A latent vector with p binary entries serves to identify one of two types of regression coefficients: those close to 0 and those not. Specializing our general distributional setting to the linear model with Gaussian errors and using natural conjugate prior distributions, we derive the marginal posterior distribution of the binary latent vector. Fast algorithms aid its direct computation, and in high dimensions these are supplemented by a Markov …
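To make the abstract's setting concrete, here is a minimal sketch of evaluating the marginal posterior of the binary latent vector for a Gaussian linear model. It assumes a standard Normal-Inverse-Gamma conjugate prior (beta | sigma^2 ~ N(0, sigma^2 tau^2 I), sigma^2 ~ IG(a, b)) and independent Bernoulli(w) inclusion priors; the hyperparameters tau2, a, b, w and the helper names are illustrative assumptions, not the paper's exact conjugate specification.

```python
# Sketch: closed-form marginal likelihood p(y | gamma) under an assumed
# Normal-Inverse-Gamma prior, combined with a Bernoulli prior on gamma.
import itertools
import numpy as np
from scipy.special import gammaln

def log_marginal_likelihood(y, X, gamma, tau2=10.0, a=1.0, b=1.0):
    """log p(y | gamma) after integrating out beta and sigma^2 analytically."""
    n = len(y)
    Xg = X[:, gamma.astype(bool)]
    V = np.eye(n) + tau2 * Xg @ Xg.T          # marginal covariance scale of y
    _, logdetV = np.linalg.slogdet(V)
    quad = y @ np.linalg.solve(V, y)
    return (-0.5 * n * np.log(2 * np.pi) - 0.5 * logdetV
            + a * np.log(b) - gammaln(a) + gammaln(a + 0.5 * n)
            - (a + 0.5 * n) * np.log(b + 0.5 * quad))

def log_posterior(y, X, gamma, w=0.2, **kw):
    """Unnormalised log posterior: log p(y | gamma) + log p(gamma)."""
    q = gamma.sum()
    log_prior = q * np.log(w) + (len(gamma) - q) * np.log(1 - w)
    return log_marginal_likelihood(y, X, gamma, **kw) + log_prior

# Exhaustive enumeration of all 2^p configurations is feasible only for small p.
rng = np.random.default_rng(0)
n, p = 50, 6
X = rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 2] + 0.5 * rng.standard_normal(n)
scores = {g: log_posterior(y, X, np.array(g)) for g in itertools.product([0, 1], repeat=p)}
print(max(scores, key=scores.get))   # expected to favour predictors 0 and 2
```

For larger p the direct enumeration becomes infeasible, which is why the posterior over the binary vector is explored by MCMC, as in the citation statements below.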

Cited by 317 publications (285 citation statements). References 23 publications.
“…The central component of the M-H algorithm is to propose a random move from the current sample, and the proposal is accepted with a certain probability such that the algorithm converges to the target distribution. Following [27,26], the random move, c*, is proposed using the "birth" and "death" steps from the current model, c:…”
Section: The Proposed MCMC Methods (mentioning, confidence: 99%)
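The birth/death proposal described in the quotation can be sketched as follows, continuing the Python example above (it reuses log_posterior, y, X, p and rng from that sketch; the equal 0.5 birth/death split and the helpers propose and log_q are illustrative choices, not necessarily the proposal used by the citing paper or by Brown et al.).

```python
import numpy as np

def propose(gamma, rng):
    """Propose a neighbouring model: 'birth' adds one variable, 'death' drops one."""
    p, q = len(gamma), gamma.sum()
    if q == 0:
        move = "birth"                         # nothing to delete
    elif q == p:
        move = "death"                         # nothing to add
    else:
        move = "birth" if rng.random() < 0.5 else "death"
    new = gamma.copy()
    if move == "birth":
        new[rng.choice(np.flatnonzero(gamma == 0))] = 1
    else:
        new[rng.choice(np.flatnonzero(gamma == 1))] = 0
    return new

def log_q(frm, to):
    """Log proposal density of moving frm -> to under the birth/death scheme."""
    p, q = len(frm), frm.sum()
    log_move = 0.0 if q in (0, p) else np.log(0.5)
    if to.sum() > q:                           # birth: chose one of p - q inactive entries
        return log_move - np.log(p - q)
    return log_move - np.log(q)                # death: chose one of q active entries

# Metropolis-Hastings over the inclusion vector, targeting p(gamma | y, X).
gamma = np.zeros(p, dtype=int)
for _ in range(5000):
    cand = propose(gamma, rng)
    log_alpha = (log_posterior(y, X, cand) - log_posterior(y, X, gamma)
                 + log_q(cand, gamma) - log_q(gamma, cand))
    if np.log(rng.random()) < log_alpha:
        gamma = cand
```

Averaging the visited gamma vectors over the iterations (after burn-in) gives Monte Carlo estimates of the marginal inclusion probabilities.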
“…, K}, each associated with a vector of unknown parameters, θ_k. The task of model selection is to identify which model is more probable given the data D. Within a Bayesian framework, this is equivalent to assessing the following probability for each of the K models [26,27,36,37]:…”
Section: Bayesian Model Selection Using MCMC (mentioning, confidence: 99%)
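The probability referred to in this quotation is the posterior model probability. In its standard Bayesian form (the citing paper's own notation and priors may differ), it is

```latex
p(M_k \mid D) = \frac{p(D \mid M_k)\, p(M_k)}{\sum_{j=1}^{K} p(D \mid M_j)\, p(M_j)},
\qquad
p(D \mid M_k) = \int p(D \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, d\theta_k ,
```

where p(D | M_k) is the marginal likelihood obtained by integrating out the model-specific parameters θ_k, and ratios of these marginal likelihoods give the Bayes factors between candidate models.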