1974
DOI: 10.1214/aos/1176342677
Optimal Predictive Linear Discriminants

Cited by 14 publications (18 citation statements)
References 0 publications
“…Also, any sensible Bayesian rule will not lead to this approach, except either asymptotically or under very restrictive conditions [Enis and Geisser (1974)]. …”
Section: Discriminant Analysis (citation type: mentioning)
confidence: 99%
“…Although the estimates of the covariance may be considered optimal in some sense, this does not mean that the resulting discriminant obtained by substituting these values is optimal in any sense, even if the assumption of normally distributed classes is correct (Anderson, 1984). Also, any sensible Bayesian rule will not lead to this approach, except either asymptotically or under very restrictive conditions (Enis and Geisser, 1986). Additionally, we have to estimate a priori probabilities.…”
Section: Linear Discriminant Analysis (citation type: mentioning)
confidence: 99%
“…2 The paper demonstrates that full-dimensionality classification is not only possible but of equivalent accuracy to more sophisticated subspace classifiers. (ii) While the input feature space of contemporary face-image classifiers is often something other than raw pixels (e.g.…”
Section: Introduction (citation type: mentioning)
confidence: 97%
“…But the ''curse of dimensionality" means that huge numbers of training samples are required to exploit the plug-ins' asymptotic optimality. In practice, they are inaccurate, unreliable and different from any Bayesian modification of any reasonable prior [2]. For this reason, plug-in estimates must be improved before use in a probability model, usually by Regularization.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
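The statement above contrasts raw plug-in discriminants, which are unreliable in high dimensions with few samples, with regularized versions. As a minimal illustrative sketch (not the cited papers' method; the function and parameter names such as `lda_weights` and `alpha` are hypothetical), one common regularization shrinks the pooled sample covariance toward a scaled identity before inverting it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes in d dimensions with few samples per class,
# so the pooled sample covariance is a poor, ill-conditioned estimate.
d, n = 20, 12
mu0, mu1 = np.zeros(d), np.full(d, 0.8)
X0 = rng.normal(mu0, 1.0, size=(n, d))
X1 = rng.normal(mu1, 1.0, size=(n, d))

def lda_weights(X0, X1, alpha=0.0):
    """Plug-in linear discriminant with optional shrinkage.

    alpha = 0 gives the raw plug-in rule; alpha > 0 shrinks the pooled
    covariance toward a scaled identity (a simple regularization).
    """
    # Plug-in estimate: pooled within-class sample covariance.
    S = 0.5 * (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False))
    S_reg = (1 - alpha) * S + alpha * (np.trace(S) / d) * np.eye(d)
    # Linear discriminant: w'x + b > 0 assigns class 1.
    w = np.linalg.solve(S_reg, X1.mean(axis=0) - X0.mean(axis=0))
    b = -0.5 * w @ (X0.mean(axis=0) + X1.mean(axis=0))
    return w, b

w, b = lda_weights(X0, X1, alpha=0.5)

# Evaluate on fresh test data from the same two populations.
Xte0 = rng.normal(mu0, 1.0, size=(200, d))
Xte1 = rng.normal(mu1, 1.0, size=(200, d))
scores = np.concatenate([Xte0, Xte1]) @ w + b
labels = np.concatenate([np.zeros(200), np.ones(200)])
acc = np.mean((scores > 0) == labels)
```

With `alpha = 0` and `n` near or below `d`, the solve step is numerically fragile or fails outright, which is the practical point behind the quoted remark that plug-in estimates "must be improved before use".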