Feature Selection for Automatic Classification of Non-Gaussian Data (1987)
DOI: 10.1109/tsmc.1987.4309029

Cited by 68 publications (23 citation statements); References 23 publications.
“…The FS-KMeans_BIC uses SFS [14] to search for feature subsets. The criterion used in this algorithm is the trace(S_W^{-1} S_B) normalized using a cross-projection scheme [12].…”
Section: Results
confidence: 99%
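For concreteness, a minimal NumPy sketch of the trace(S_W^{-1} S_B) separability criterion named in that statement, assuming class-labeled data; the cross-projection normalization of [12] is omitted, and the function name is illustrative:

```python
import numpy as np

def trace_criterion(X, y):
    """Class-separability score trace(S_W^{-1} S_B) for the feature
    subset given by the columns of X (n_samples x n_features)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    d = X.shape[1]
    overall_mean = X.mean(axis=0)
    S_W = np.zeros((d, d))  # within-class scatter
    S_B = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_W += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean).reshape(-1, 1)
        S_B += len(Xc) * (diff @ diff.T)
    # solve(S_W, S_B) computes S_W^{-1} S_B without an explicit inverse
    return np.trace(np.linalg.solve(S_W, S_B))
```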
“…Sequential forward selection (SFS) and sequential backward selection (SBS) [14] are two classical heuristic feature selection algorithms developed for supervised learning. SFS starts with an empty set of features, and at each iteration, the algorithm tentatively adds each available feature and selects the feature that results in the highest estimated performance.…”
Section: Local Searches
confidence: 99%
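A minimal Python sketch of SFS as described in that excerpt; `estimate_performance` is a hypothetical user-supplied scorer, not an API from the cited paper:

```python
def sfs(all_features, estimate_performance, k):
    """Sequential forward selection: start from an empty subset and
    greedily add the feature that most improves the estimated
    performance, until k features are selected."""
    selected = []
    remaining = list(all_features)
    while remaining and len(selected) < k:
        # Tentatively add each available feature; keep the best one.
        best = max(remaining,
                   key=lambda f: estimate_performance(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

SBS is the mirror image: it starts from the full feature set and greedily removes the feature whose deletion hurts the estimated performance least.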
“…Moreover, even if 99.9% of the computational cost is saved by BB, the computational complexity remains exponential. Thus, BB and its improved version, relaxed BB (which introduces the concept of approximate monotonicity) [3], are still infeasible for large n.…”
Section: T S-f Error(s)(t
confidence: 99%
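A quick arithmetic check of that claim, using assumed values n = 100 and k = 10: a constant-factor saving, however large, does not change the combinatorial growth in n.

```python
from math import comb

n, k = 100, 10
full = comb(n, k)          # exhaustive subset evaluations: ~1.7e13
after_bb = full * 0.001    # even with a 99.9% saving: ~1.7e10
print(f"{full:.3e} -> {after_bb:.3e}")
# The remaining count still grows combinatorially with n, so branch
# and bound stays exponential in the worst case.
```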
“…If u ≥ 1/2 then choose a model in NB_IG(+1) with probability |NB_IG(+1)|^{-1}, otherwise choose a model in NB_IG(−1) with probability |NB_IG(−1)|^{-1}. The normalization factor that ensures detailed balance, becomes…”
Section: Naive Bayes Classifiers
confidence: 99%
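A sketch of the proposal step quoted above, assuming NB_IG(+1) and NB_IG(−1) denote the neighbor models reached by adding or removing one feature; the detailed-balance normalization factor is left out, as it is elided in the excerpt:

```python
import random

def propose(current_features, all_features, u=None):
    """Propose a neighboring naive Bayes model by adding (+1) or
    removing (-1) one feature, the neighbor drawn uniformly from the
    chosen neighborhood (assumed reading of NB_IG(+1)/NB_IG(-1))."""
    if u is None:
        u = random.random()
    add_moves = [f for f in all_features if f not in current_features]
    remove_moves = list(current_features)
    if u >= 0.5 and add_moves:      # uniform draw from NB_IG(+1)
        return current_features | {random.choice(add_moves)}
    if remove_moves:                # uniform draw from NB_IG(-1)
        return current_features - {random.choice(remove_moves)}
    return set(current_features)    # no legal move in either direction
```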
“…Important key references include [1][2][3][4]. Whether one wants to learn a statistical classifier from data, learn a graphical model, or perform clustering in high-dimensional space, confining the number of feature variables included in the model, by either feature selection or feature transformation, is often necessary.…”
Section: Introduction
confidence: 99%