2004
DOI: 10.1111/j.0006-341x.2004.00233.x

Bayesian Variable Selection in Multinomial Probit Models to Identify Molecular Signatures of Disease Stage

Abstract: Here we focus on discrimination problems where the number of predictors substantially exceeds the sample size and we propose a Bayesian variable selection approach to multinomial probit models. Our method makes use of mixture priors and Markov chain Monte Carlo techniques to select sets of variables that differ among the classes. We apply our methodology to a problem in functional genomics using gene expression profiling data. The aim of the analysis is to identify molecular signatures that characterize two di…
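As a rough sketch of the mixture-prior construction described in the abstract, a common spike-and-slab formulation attaches a binary inclusion indicator to each predictor; the scale factor c below, which the citation statements quoted further down discuss, controls the prior variance of the selected coefficients. This is a generic illustration of the technique, not necessarily the exact parameterization of Sha et al. (2004):

\[
\gamma_j \sim \mathrm{Bernoulli}(w), \qquad
\beta_j \mid \gamma_j, \sigma^2 \;\sim\; (1-\gamma_j)\,\delta_0 \;+\; \gamma_j\,\mathcal{N}\!\bigl(0,\, c\,\sigma^2\bigr), \qquad j = 1,\dots,p,
\]

where \(\delta_0\) is a point mass at zero and \(w\) is the prior inclusion probability. Markov chain Monte Carlo sampling of the indicator vector \(\gamma = (\gamma_1,\dots,\gamma_p)\) then identifies the subsets of variables that discriminate among the classes.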


Year Published: 2007–2023

Cited by 116 publications (134 citation statements)
References: 29 publications
“…This covers values of c inducing a lot of regularisation as well as values inducing very little and significantly extends the guideline range of Sha et al (2004) for these data. Applying their guidelines leads to a range of (0.1, 2.27) for the Arthritis dataset and (0.1, 2.26) for the Colon Tumour dataset.…”
Section: Estimation of c Using Predictive Criteria
Confidence: 71%
“…The latter is typically linked to successful variable selection, which is our main concern in the type of applications considered here. Interestingly, the guideline range for choosing c proposed in Sha et al (2004) covers our preferred value in one of the datasets we examine here, but remains very far from this optimal value in the other.…”
Section: Discussion
Confidence: 94%