2000
DOI: 10.1109/72.839014

Bayes-optimality motivated linear and multilayered perceptron-based dimensionality reduction

Abstract: Dimensionality reduction is the process of mapping high-dimensional patterns to a lower-dimensional subspace. When done prior to classification, estimates obtained in the lower-dimensional subspace are more reliable. For some classifiers, there is also an improvement in performance due to the removal of the diluting effect of redundant information. A majority of the present approaches to dimensionality reduction are based on scatter matrices or other statistics of the data which do not directly correlate to classifi…
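As a rough illustration of the reduce-then-classify idea the abstract describes (not the paper's own Bayes-optimality-motivated projection), a minimal sketch might fit a generic linear projection before a classifier; the dataset, target dimension, and choice of PCA below are arbitrary assumptions.

```python
# Illustrative sketch only: map high-dimensional patterns to a lower-
# dimensional subspace before classifying. Uses off-the-shelf PCA, NOT
# the paper's method; dataset and target dimension are assumptions.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)          # 64-dimensional patterns
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Project 64-D patterns onto a 10-D subspace, then classify there.
model = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print("accuracy in reduced space:", model.score(X_te, y_te))
```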

Cited by 20 publications (10 citation statements)
References 13 publications
“…Therefore, a need arises to determine a relatively small number of variables distinctive for each class [28,29]. FS addresses the dimensionality reduction problem, where high-dimension patterns are mapped to a lower-dimension subspace [18,19]. FS determines a subset of optimal features to build a good classification model.…”
Section: Feature Selection (mentioning)
confidence: 99%
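A minimal sketch of feature selection as the quoted statement describes it: keep a small subset of class-discriminative variables and build a classifier on that subset. The scoring function (ANOVA F), the value of k, and the dataset are illustrative assumptions, not taken from the citing paper.

```python
# Feature-selection sketch: retain a small, class-discriminative subset
# of the original variables, then classify using only that subset.
# Scoring function, k, and dataset are assumptions for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # 30 original variables

# Rank variables by a per-class discrimination score and keep the top 8.
fs_model = make_pipeline(SelectKBest(f_classif, k=8),
                         StandardScaler(),
                         LogisticRegression(max_iter=1000))
fs_model.fit(X, y)
print("selected feature mask:", fs_model[0].get_support())
```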
“…Hereby, there are a total of nine graphical features in three groups for each class [7]. Classification accuracy was obtained using a k-NN technique with the optimal value of k selected using cross-validation.…”
Section: Experimental Analysis (mentioning)
confidence: 99%
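The evaluation protocol the quote describes, k-NN classification with k chosen by cross-validation, can be sketched as follows; the candidate grid of k values and the dataset are assumptions, not the cited experiment.

```python
# Sketch of the quoted protocol: k-NN with the value of k selected by
# cross-validation. Candidate k values and dataset are illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
search = GridSearchCV(KNeighborsClassifier(),
                      param_grid={"n_neighbors": [1, 3, 5, 7, 9]},
                      cv=5)                   # 5-fold cross-validation
search.fit(X, y)
print("optimal k:", search.best_params_["n_neighbors"])
print("cross-validated accuracy:", search.best_score_)
```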
“…For instance, the optimality criterion of PCA is not directly related to the training criteria of its counterparts for pattern classification. Since training of the classification part always aims to realize low error probabilities, it may not always be possible for PCA to extract features in a reduced form containing high discriminant information [9], [10]. On the other hand, it should be noticed that, in the existing methods, the training processes for PCA and the classification part are carried out separately.…”
Section: Introduction (mentioning)
confidence: 99%
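To make the quoted contrast concrete, the two criteria can be written side by side; the notation below is a standard textbook formulation assumed for illustration, not reproduced from the cited papers.

```latex
% PCA fits the projection $W$ by maximizing retained variance:
\[
  W^{\star} = \arg\max_{W^{\top}W = I} \operatorname{tr}\!\left(W^{\top} S\, W\right),
\]
% where $S$ is the data covariance matrix. A classifier is instead
% trained toward low error probability in the projected space,
\[
  P_e(W) = 1 - \mathbb{E}_{x}\!\left[\max_{k}\, P\!\left(\omega_k \mid W^{\top}x\right)\right],
\]
% and nothing guarantees that the variance-maximizing $W^{\star}$
% also minimizes $P_e(W)$.
```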
“…RD-LLGMN uses an orthogonal transformation to project the original feature space into a lower-dimensional space, and then calculates posterior probabilities with a Gaussian mixture model (GMM) in the projected lower-dimensional space for classification. Also, since the parameters in the network are trained with a single criterion, i.e., minimizing an error probability, such a training algorithm is expected to yield better classification performance [1], [9].…”
Section: Introduction (mentioning)
confidence: 99%
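A rough sketch of the two-stage structure the quote describes (orthogonal projection, then GMM-based class posteriors) follows. Note the key difference: RD-LLGMN trains both stages jointly under a single error-probability criterion, whereas this sketch fits them separately with stand-in components (PCA as the orthogonal projection, per-class GMMs); the dataset and mixture sizes are assumptions.

```python
# Structure sketch only: orthogonal projection followed by GMM-based
# posteriors in the projected space. RD-LLGMN trains both stages jointly
# under one criterion; here they are fitted separately for illustration.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

X, y = load_digits(return_X_y=True)
proj = PCA(n_components=10).fit(X)       # PCA yields an orthogonal projection
Z = proj.transform(X)

classes = np.unique(y)
gmms = {c: GaussianMixture(n_components=2, random_state=0).fit(Z[y == c])
        for c in classes}
priors = {c: np.mean(y == c) for c in classes}

# Posterior P(class | z) via Bayes' rule over per-class mixture likelihoods.
log_joint = np.column_stack(
    [gmms[c].score_samples(Z) + np.log(priors[c]) for c in classes])
pred = classes[np.argmax(log_joint, axis=1)]
print("training accuracy:", np.mean(pred == y))
```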