Linear dimension reduction and Bayes classification with unknown population parameters (1982)
DOI: 10.1016/0031-3203(82)90068-1

Cited by 28 publications (18 citation statements); references 7 publications. All 18 citation statements are classified as mentioning (0 supporting, 0 contrasting), and the citing publications span 1983 to 2015.
“…The performance of the AIDA criterion was tested experimentally on real datasets and compared with the performances of LDA, ACC, the method proposed by Tubbs et al. [14] (referred to as Tubbs), the Mahalanobis distance-based linear transformation method [15] (referred to as MLT), and a recently proposed manifold-based linear dimensionality reduction technique [24] (referred to as LPP). All of the above methods are based on the first two statistical moments and thus belong to the class of second-order techniques.…”
Section: Results
Citation type: mentioning, confidence: 99%
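
To make the "second-order" label concrete: these methods use only each class's sample mean and covariance matrix, i.e. the first two moments of the data. The following minimal sketch (our own Python illustration, not any of the cited implementations; all names are ours) computes the classic two-class Fisher/LDA direction from exactly those moments.

import numpy as np

def lda_direction(X0, X1):
    # Fisher/LDA direction from first and second moments only:
    # class means and the pooled within-class covariance.
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)      # first moments
    S0 = np.cov(X0, rowvar=False)                    # second moments
    S1 = np.cov(X1, rowvar=False)
    n0, n1 = len(X0), len(X1)
    Sw = ((n0 - 1) * S0 + (n1 - 1) * S1) / (n0 + n1 - 2)
    w = np.linalg.solve(Sw, mu1 - mu0)               # S_w^{-1} (mu1 - mu0)
    return w / np.linalg.norm(w)

# Toy usage: two Gaussian classes in five dimensions.
rng = np.random.default_rng(0)
X0 = rng.multivariate_normal(np.zeros(5), np.eye(5), size=200)
X1 = rng.multivariate_normal(np.full(5, 1.5), np.eye(5), size=200)
print(lda_direction(X0, X1))   # roughly proportional to (1,1,1,1,1)/sqrt(5)

Nothing beyond the two moments enters the computation, which is what places LDA (and the Tubbs, MLT, and ACC methods compared above) in the second-order class.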
“…In many cases the performance of the ACC criterion was comparable to that of LDA [10]. For datasets with prominent heteroscedasticity, ACC outperformed LDA and other similar eigenvalue-based techniques, such as the method of Tubbs et al. [14] and the Mahalanobis distance-based linear transformation method of Ref. [15].…”
Section: Introduction
Citation type: mentioning, confidence: 85%
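
The heteroscedastic failure mode alluded to here is easy to reproduce: when two classes share a mean but differ in covariance, the LDA direction S_w^{-1}(mu1 - mu0) carries no signal, because LDA's between-class term depends only on the mean difference. A toy demonstration (our own construction, not taken from [10], [14], or [15]):

import numpy as np

rng = np.random.default_rng(1)
# Identical means, very different covariances: the classes are
# separable by spread (second-order structure), not by location.
X0 = rng.multivariate_normal([0, 0], np.diag([1.0, 1.0]), size=500)
X1 = rng.multivariate_normal([0, 0], np.diag([9.0, 0.1]), size=500)

mean_gap = X1.mean(axis=0) - X0.mean(axis=0)
print(np.linalg.norm(mean_gap))   # near zero: LDA's numerator vanishes
# Any linear projection w gives E[w @ x] identical for both classes, so
# mean-based criteria like LDA find nothing here; the covariance
# differences are what criteria built for heteroscedastic data exploit.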
“…We now derive a new LDR method that is motivated by results on linear sufficient statistics derived in [13] and by a linear feature selection theorem given in [17]. The theorem provides necessary and sufficient conditions under which a low-dimensional linear transformation of the original data preserves the Bayes probability of misclassification (BPMC) attained in the original feature space.…”
Section: Linear Dimension Reduction for Multiple Heteroscedastic Nons…
Citation type: mentioning, confidence: 99%
“…When f_x(x | H_i) is known, it is established that P_ey ≥ P_ex [7]. In [8,9], it is proven that P_ey = P_ex if and only if the transformed vector y is a sufficient statistic for x with respect to the classification hypotheses. For Gaussian classification problems, this happens when A can be obtained through a full-rank decomposition of the following matrix [8].…”
Section: Mathematical Formulation
Citation type: mentioning, confidence: 99%
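
The inequality P_ey ≥ P_ex, with equality for a sufficient y, can be checked by simulation. The sketch below is our own construction under an assumed two-class, equal-covariance Gaussian setup (where the single direction Sigma^{-1}(mu1 - mu0) is known to be sufficient for classification); all names are ours. The sufficient 1-D projection attains the full-space Bayes error, while an arbitrary projection does worse.

import numpy as np

rng = np.random.default_rng(2)
d, n = 5, 200_000
mu0, mu1 = np.zeros(d), np.full(d, 0.8)
Sigma = np.eye(d)                    # equal covariances: Bayes rule is linear

X = np.vstack([rng.multivariate_normal(mu0, Sigma, size=n),
               rng.multivariate_normal(mu1, Sigma, size=n)])
labels = np.repeat([0, 1], n)

def error_after_projection(w):
    # Empirical error of the 1-D Bayes rule after projecting onto w.
    # With equal covariances the projected classes are equal-variance
    # Gaussians, so the optimal threshold is the midpoint of the means.
    z, m0, m1 = X @ w, w @ mu0, w @ mu1
    pred = (z > (m0 + m1) / 2) if m1 > m0 else (z < (m0 + m1) / 2)
    return np.mean(pred.astype(int) != labels)

w_suff = np.linalg.solve(Sigma, mu1 - mu0)  # sufficient direction: P_ey = P_ex
w_rand = rng.standard_normal(d)             # arbitrary direction: P_ey >= P_ex
print(f"sufficient projection error: {error_after_projection(w_suff):.4f}")
print(f"random projection error:     {error_after_projection(w_rand):.4f}")

The gap between the two printed error rates is the classification information lost by a projection that is not a sufficient statistic, which is exactly what the full-rank decomposition condition for A is designed to avoid.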