1992
DOI: 10.1109/72.125874
Fast generic selection of features for neural network classifiers

Abstract: The authors describe experiments using a genetic algorithm for feature selection in the context of neural network classifiers, specifically, counterpropagation networks. They present the novel techniques used in the application of genetic algorithms. First, the genetic algorithm is configured to use an approximate evaluation in order to reduce significantly the computation required. In particular, though the desired classifiers are counterpropagation networks, they use a nearest-neighbor classifier to evaluate…
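The scheme outlined in the abstract — a genetic algorithm searching over feature subsets, with a cheap nearest-neighbor classifier standing in for the counterpropagation network during fitness evaluation — might be sketched as follows. This is a minimal illustration only; the bit-mask encoding, truncation selection, and function names are assumptions, not details taken from the paper:

```python
import random

def knn_error(X, y, mask):
    """Leave-one-out 1-NN error over the features selected by the bit mask
    (the cheap surrogate fitness standing in for training a full network)."""
    feats = [j for j, b in enumerate(mask) if b]
    if not feats:
        return 1.0  # selecting no features is maximally bad
    errors = 0
    for i in range(len(X)):
        best_k, best_d = None, float("inf")
        for k in range(len(X)):
            if k == i:
                continue
            d = sum((X[i][j] - X[k][j]) ** 2 for j in feats)
            if d < best_d:
                best_d, best_k = d, k
        if y[best_k] != y[i]:
            errors += 1
    return errors / len(X)

def ga_select(X, y, pop=20, gens=30, p_mut=0.05):
    """Tiny GA over feature bit masks, minimizing the 1-NN surrogate error."""
    m = len(X[0])
    popn = [[random.randint(0, 1) for _ in range(m)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=lambda ind: knn_error(X, y, ind))
        popn = scored[: pop // 2]  # truncation selection: keep the better half
        while len(popn) < pop:
            a, b = random.sample(popn[: pop // 2], 2)
            cut = random.randrange(1, m)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            popn.append(child)
    return min(popn, key=lambda ind: knn_error(X, y, ind))
```

The point of the surrogate is that each fitness call avoids training a network; only the final, best mask would be handed to the real counterpropagation classifier.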


Cited by 193 publications (95 citation statements). References 6 publications.
“…Reference [13] used Genetic Algorithm for feature selection in the context of a neural network classifier. GA was configured to use an approximate evaluation in order to reduce significantly the computation required.…”
Section: Related Literature
confidence: 99%
“…Reference [8] used Genetic Algorithm for feature selection in the context of a neural network classifier. GA was configured to use an approximate evaluation in order to reduce significantly the computation required.…”
Section: Related Literature
confidence: 99%
“…Brill et al. [17] used sampling to speed up feature selection. Each individual was evaluated using its nearest-neighbor error, an operation which is of O(N²M) time complexity, for N instances and M features.…”
Section: Sampling and Data Partitioning
confidence: 99%
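The sampling idea attributed to Brill et al. can be sketched as follows: estimating the nearest-neighbor error on a random subsample of S instances cuts the per-individual cost from O(N²M) to roughly O(S²M). The helper below is a hypothetical illustration of that trade-off, not code from the cited work:

```python
import random

def nn_error_sampled(X, y, feats, sample_size):
    """Estimate the 1-NN error on a random subsample of instances.

    With S sampled instances and M selected features the cost is O(S^2 M)
    per evaluation, versus O(N^2 M) for the full leave-one-out error.
    """
    idx = random.sample(range(len(X)), min(sample_size, len(X)))
    errors = 0
    for i in idx:
        best_k, best_d = None, float("inf")
        for k in idx:
            if k == i:
                continue
            d = sum((X[i][j] - X[k][j]) ** 2 for j in feats)
            if d < best_d:
                best_d, best_k = d, k
        if best_k is None or y[best_k] != y[i]:
            errors += 1
    return errors / len(idx)
```

Because a genetic algorithm only needs fitness values good enough to rank individuals, a noisy sampled estimate is often an acceptable substitute for the exact error.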