1997
DOI: 10.1080/00949659708811856
Variable selection and error rate estimation in discriminant analysis

Cited by 11 publications (5 citation statements)
References 8 publications
“…where i denotes the samples from 1 to N. All PLSR models were also validated using a cross model validation routine, a two-layer cross-validation that is regarded as more conservative than full cross-validation [21]. In the current study, ten segments of randomly chosen samples were used as the default setting. The prediction error of the cross-model-validated calibration model was expressed as the root mean square error of cross model validation (RMSECMV), analogous to the RMSECV.…”
Section: Methods
Mentioning confidence: 99%
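The two-layer routine quoted above can be sketched in code. This is a minimal illustration, assuming simulated data, scikit-learn's PLSRegression, and an inner loop that picks the number of latent variables; the settings only mimic the "ten random segments" mentioned in the statement and are not the cited study's actual software.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 20))                    # spectra-like predictors (simulated)
y = X[:, :3] @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=80)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Single-layer cross-validation (RMSECV) for a fixed number of components.
cv = KFold(n_splits=10, shuffle=True, random_state=1)
y_cv = cross_val_predict(PLSRegression(n_components=3), X, y, cv=cv)
print("RMSECV :", rmse(y, np.ravel(y_cv)))

# Two-layer cross model validation (RMSECMV): the inner loop chooses the
# number of latent variables, the outer loop measures the prediction error.
outer = KFold(n_splits=10, shuffle=True, random_state=2)
y_cmv = np.empty_like(y)
for train, test in outer.split(X):
    inner = KFold(n_splits=10, shuffle=True, random_state=3)
    best_k, best_err = 1, np.inf
    for k in range(1, 11):
        pred = cross_val_predict(PLSRegression(n_components=k),
                                 X[train], y[train], cv=inner)
        err = rmse(y[train], np.ravel(pred))
        if err < best_err:
            best_k, best_err = k, err
    model = PLSRegression(n_components=best_k).fit(X[train], y[train])
    y_cmv[test] = np.ravel(model.predict(X[test]))
print("RMSECMV:", rmse(y, y_cmv))

The point of the second layer is that the model choice (here, the number of components) is made without ever seeing the outer held-out segment, which is why RMSECMV is regarded as the more conservative figure.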
“…with H = (I_q, I_q) and G = (I_q, −I_q), where I_q denotes the identity matrix of order q. Performance of the plug-in discriminant functions is usually evaluated by an actual error rate (AER) (Le Roux et al., 1997).…”
Section: Error Rates of Classification
Mentioning confidence: 99%
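As a rough illustration of what an actual error rate measures, the sketch below builds a plug-in linear discriminant rule from training estimates and approximates its AER by Monte Carlo against the (here simulated) true populations. The two-normal-population setup, sample sizes, and parameters are assumptions for illustration only; the H and G matrices of the quoted setting are not reproduced.

import numpy as np

rng = np.random.default_rng(0)
p = 4
mu1, mu2 = np.zeros(p), np.full(p, 1.0)
Sigma = 0.5 * np.eye(p) + 0.5 * np.ones((p, p))

# Training samples from which the plug-in estimates are computed.
X1 = rng.multivariate_normal(mu1, Sigma, size=30)
X2 = rng.multivariate_normal(mu2, Sigma, size=30)
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
S = ((len(X1) - 1) * np.cov(X1.T) + (len(X2) - 1) * np.cov(X2.T)) / (len(X1) + len(X2) - 2)
w = np.linalg.solve(S, m1 - m2)                  # plug-in discriminant direction
c = 0.5 * w @ (m1 + m2)                          # plug-in cut-off

def classify(x):
    """Allocate to population 1 when the plug-in linear score is non-negative."""
    return np.where(x @ w - c >= 0, 1, 2)

# Monte Carlo approximation of the actual error rate: apply the fixed
# plug-in rule to large samples drawn from the true distributions.
T1 = rng.multivariate_normal(mu1, Sigma, size=100_000)
T2 = rng.multivariate_normal(mu2, Sigma, size=100_000)
aer = 0.5 * (np.mean(classify(T1) != 1) + np.mean(classify(T2) != 2))
print(f"Monte Carlo actual error rate: {aer:.3f}")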
“…These adaptations can be employed in all-subset selection algorithms for DA, canonical correlation analysis (CCA), or for the description of any “effect” in MANOVA or MANCOVA models. Celeux [2] and Le Roux et al. [16] proposed the direct application of the original leaps and bounds algorithm in order to identify the best subsets of each size in two-group DA. Exceptions include McCabe [18], who adapted Furnival's algorithm to the comparison (according to Wilks' Λ) of variable subsets in DA, and McHenry [19], who proposed a compromise between stepwise and all-subsets procedures in multivariate linear models.…”
Section: Introduction
Mentioning confidence: 99%
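The all-subset search that Celeux [2] and Le Roux et al. [16] address can be illustrated with a brute-force stand-in: the sketch below enumerates every subset of each size in a simulated two-group problem and ranks them by Wilks' Λ. A leaps-and-bounds search returns the same best subsets while pruning most of the enumeration; the data and group sizes here are assumptions for illustration.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
p = 6
X1 = rng.normal(size=(40, p))
X2 = rng.normal(size=(40, p)) + np.array([1.0, 0.8, 0.0, 0.0, 0.0, 0.0])
X = np.vstack([X1, X2])
g = np.array([0] * 40 + [1] * 40)

def wilks_lambda(cols):
    """Wilks' Lambda = |W| / |T| for the chosen columns (smaller is better)."""
    Z = X[:, list(cols)]
    T = (Z - Z.mean(axis=0)).T @ (Z - Z.mean(axis=0))                    # total SSCP
    W = sum((Z[g == k] - Z[g == k].mean(axis=0)).T
            @ (Z[g == k] - Z[g == k].mean(axis=0)) for k in (0, 1))      # within-group SSCP
    return np.linalg.det(W) / np.linalg.det(T)

# Best subset of each size by exhaustive enumeration.
for size in range(1, p + 1):
    best = min(combinations(range(p), size), key=wilks_lambda)
    print(f"best subset of size {size}: {best}  Lambda = {wilks_lambda(best):.3f}")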
“…A possibility in that regard is to use cross-validation techniques that explicitly take the selection process into account (e.g., Snapinn and Knoke [31], Rutter et al. [29], Le Roux et al. [16]). Although any data-based variable selection procedure usually leads to violations of the assumptions underlying classical inference methods, that should be no reason for ignoring the data in the variable selection process.…”
Section: Introduction
Mentioning confidence: 99%
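A minimal sketch of cross-validation that accounts for the selection process: the selection step (here a simple t-statistic screen, chosen only for illustration and not taken from the cited papers) is repeated inside every training fold, and the resulting error-rate estimate is contrasted with the optimistic one obtained when the variables are selected once on all the data.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
n, p, k = 60, 50, 5                      # many noise variables, keep k of them
X = rng.normal(size=(n, p))
y = np.array([0] * (n // 2) + [1] * (n // 2))
X[y == 1, :2] += 1.0                     # only the first two variables matter

def top_k(Xtr, ytr, k):
    """Indices of the k variables with the largest two-sample t statistics."""
    d = Xtr[ytr == 0].mean(axis=0) - Xtr[ytr == 1].mean(axis=0)
    s = np.sqrt(Xtr[ytr == 0].var(axis=0, ddof=1) / np.sum(ytr == 0)
                + Xtr[ytr == 1].var(axis=0, ddof=1) / np.sum(ytr == 1))
    return np.argsort(-np.abs(d / s))[:k]

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)

# Naive estimate: variables selected once on the full data set, so the
# held-out folds have already influenced the choice (selection bias).
sel_all = top_k(X, y, k)
naive = np.mean([1 - LinearDiscriminantAnalysis()
                 .fit(X[tr][:, sel_all], y[tr])
                 .score(X[te][:, sel_all], y[te])
                 for tr, te in cv.split(X, y)])

# Honest estimate: the selection step is repeated inside every training fold.
errors = []
for tr, te in cv.split(X, y):
    sel = top_k(X[tr], y[tr], k)
    lda = LinearDiscriminantAnalysis().fit(X[tr][:, sel], y[tr])
    errors.append(1 - lda.score(X[te][:, sel], y[te]))

print(f"naive CV error rate : {naive:.2f}")
print(f"honest CV error rate: {np.mean(errors):.2f}")

With many noise variables the naive estimate is typically far too optimistic, which is exactly the bias the quoted passage warns against.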