Information theoretic subset selection for neural network models (1998)
DOI: 10.1016/s0098-1354(97)00227-5

Cited by 53 publications (34 citation statements); references 16 publications.
“…Extensive tests including novel combinations of ranking and selection methods will be published elsewhere. A very simple dataset used by Shridhar et al. [28] contains 12 patterns with 4 features (Tab. 2).…”
Section: Illustrative Results on Synthetic Data
confidence: 99%
“…For the Shridhar dataset almost all ranking and redundancy-shifting methods worked correctly. If the relevancy threshold is set to 0.05, features marked in bold are irrelevant. Table 3 gives the ordering of features after feature selection for three synthetic datasets: Shridhar [28], Corral [29], and Gauss8 [21,24].…”
Section: Illustrative Results on Synthetic Data
confidence: 99%
“…We selected this software for our pattern recognition system because of its extensive, proven capability at solving difficult data mining problems [3], [26]. The set of weight values and number of nodes for an ANN is called the ANN architecture.…”
Section: Pattern Recognition System
confidence: 99%
“…Another ratio, IGn(X_j) = IG(X_j)/I(C), also called "an asymmetric dependency coefficient", is advocated in [6]. Mutual information between feature f and classes:…”
Section: Information Theory and Other Filters
confidence: 99%
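The normalized ratio quoted above, IGn(X_j) = IG(X_j)/I(C), divides a feature's information gain by the class entropy. A minimal sketch of how it can be computed from counts, assuming discrete feature values and class labels (function names are illustrative, not from the cited paper):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(X) in bits of a discrete sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, classes):
    """IG(X_j) = H(C) - H(C | X_j) for one discrete feature."""
    n = len(classes)
    h_cond = 0.0
    for v in set(feature):
        subset = [c for f, c in zip(feature, classes) if f == v]
        h_cond += (len(subset) / n) * entropy(subset)
    return entropy(classes) - h_cond

def asymmetric_dependency_coefficient(feature, classes):
    """IGn(X_j) = IG(X_j) / I(C): gain normalized by class entropy."""
    return information_gain(feature, classes) / entropy(classes)

# Toy data: the feature determines the class exactly, so IGn = 1.
f = [0, 0, 1, 1]
c = ['a', 'a', 'b', 'b']
print(asymmetric_dependency_coefficient(f, c))  # 1.0
```

The normalization bounds the score in [0, 1], which makes rankings comparable across problems with different numbers of classes.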
“…Due to space restrictions we report here only results obtained with information gain (IGn) ranking [6] and the Battiti selection method (BA) [7] on two datasets [10]: the Monk-1 artificial data and the hypothyroid problem. The Monk-1 data has only 6 features, of which features 5, 1, and 2 are used to form the rule determining the class.…”
Section: Numerical Experiments
confidence: 99%
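Battiti's selection method (MIFS) greedily picks the feature that maximizes mutual information with the class minus a penalty, scaled by a coefficient β, for redundancy with already-selected features. A hedged sketch under that standard formulation (the toy data and names below are invented for illustration):

```python
import math
from collections import Counter

def mutual_information(x, y):
    """I(X;Y) in bits for two discrete sequences of equal length."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum(
        (c / n) * math.log2((c * n) / (px[a] * py[b]))
        for (a, b), c in pxy.items()
    )

def mifs_select(features, classes, k, beta=0.5):
    """Greedy MIFS: repeatedly add the feature maximizing
    I(f;C) - beta * sum of I(f;s) over already-selected s."""
    remaining = dict(features)  # name -> list of discrete values
    selected = []
    while remaining and len(selected) < k:
        best = max(
            remaining,
            key=lambda f: mutual_information(remaining[f], classes)
            - beta * sum(mutual_information(remaining[f], features[s])
                         for s in selected),
        )
        selected.append(best)
        del remaining[best]
    return selected

# Hypothetical toy data: "f1" duplicates the class, "noise" does not.
features = {"f1": [0, 0, 1, 1, 0, 1, 0, 1],
            "noise": [0, 1, 0, 1, 1, 0, 1, 0]}
classes = [0, 0, 1, 1, 0, 1, 0, 1]
print(mifs_select(features, classes, 1))  # ['f1']
```

On a problem like Monk-1, such a criterion should rank the rule-forming features ahead of the irrelevant ones, which is the behavior the quoted experiment reports.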