2007
DOI: 10.1007/978-3-540-72523-7_28

Stopping Criteria for Ensemble-Based Feature Selection

Cited by 8 publications (8 citation statements)
References 16 publications
Citation types: 0 supporting, 8 mentioning, 0 contrasting
“…Random perturbation of MLP base classifiers is caused by different starting weights combined with bootstrapping, as described in Section 2.1. For non-linear MLP, the number of nodes and epochs is selected as an optimal choice on average over two-class and multi-class datasets using ECOOB [6] (8 nodes with 7 epochs for 2-class and 20 epochs for multi-class). Experiments are repeated twenty times and averaged, and we denote Ensemble and Base classifier test error by ECTE and BCTE respectively.…”
Section: Experimental Evidence (mentioning)
confidence: 99%
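To make the ECTE/BCTE distinction in the excerpt above concrete, here is a minimal sketch (Python/scikit-learn, not the cited authors' code): each MLP is trained on a bootstrap sample with a different random weight initialisation, BCTE is the mean test error of the individual base classifiers, and ECTE is the majority-vote ensemble test error. The dataset, ensemble size, and training settings are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

n_estimators = 25
base_preds = []
for i in range(n_estimators):
    # Bootstrap sample plus a different random weight initialisation per MLP.
    idx = rng.randint(0, len(X_tr), len(X_tr))
    # 8 hidden nodes as in the excerpt; max_iter is a placeholder, not the quoted epoch counts.
    mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=300, random_state=i)
    mlp.fit(X_tr[idx], y_tr[idx])
    base_preds.append(mlp.predict(X_te))

base_preds = np.array(base_preds)                           # shape: (n_estimators, n_test)
bcte = np.mean([np.mean(p != y_te) for p in base_preds])    # mean base-classifier test error
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, base_preds)
ecte = np.mean(majority != y_te)                            # majority-vote ensemble test error
print(f"BCTE = {bcte:.3f}, ECTE = {ecte:.3f}")
```

Averaging such runs over repeated trials, as described in the excerpt, would then give the reported mean ECTE and BCTE values.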
“…An important issue for RFE is to determine when to stop eliminating features. In Section 2.1, the Ensemble Out-of-Bootstrap (OOB) estimate is proposed for the stopping criterion [6].…”
Section: Introduction (mentioning)
confidence: 99%
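A hedged sketch of the stopping idea described in this excerpt: run recursive feature elimination and monitor the ensemble's Out-of-Bootstrap error, stopping once it no longer improves. The bagged decision trees, the importance-based ranking, and the tolerance value below are stand-ins, not the procedure of [6].

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=15, n_informative=5, random_state=0)
features = list(range(X.shape[1]))
best_err, best_features = np.inf, list(features)

while len(features) > 1:
    ens = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            oob_score=True, random_state=0).fit(X[:, features], y)
    oob_err = 1.0 - ens.oob_score_          # ensemble OOB error on the current feature subset
    if oob_err < best_err:
        best_err, best_features = oob_err, list(features)
    elif oob_err > best_err + 0.01:         # stop once the OOB error clearly degrades (tolerance is arbitrary)
        break
    # Eliminate the least important feature according to the bagged trees
    # (a stand-in ranking; the cited work ranks features differently).
    importances = np.mean([t.feature_importances_ for t in ens.estimators_], axis=0)
    features.pop(int(np.argmin(importances)))

print("Selected features:", best_features, "OOB error:", round(best_err, 3))
```

In practice the subset with the lowest recorded OOB error is kept, which mirrors using the OOB estimate in place of a validation set or cross-validation.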
“…A good strategy for improving generalisation performance in MCS is to inject randomness, the most popular strategy being Bootstrapping. An advantage of Bootstrapping is that the Out-of-Bootstrap (OOB) error estimate may be used to tune base classifier parameters, and furthermore, the OOB is a good estimator of when to stop eliminating features [4]. Normally, deciding when to stop eliminating irrelevant features is difficult and requires a validation set or cross-validation techniques.…”
Section: Ensembles, Bootstrapping and ECOC (mentioning)
confidence: 99%
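The OOB estimate referred to above needs no separate validation set: each training pattern is evaluated only by the ensemble members whose bootstrap samples omitted it. The sketch below shows that bookkeeping explicitly; the base classifier and all parameter values are illustrative, not taken from [4].

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
n, n_estimators, n_classes = len(X), 30, len(np.unique(y))
votes = np.zeros((n, n_classes))

for i in range(n_estimators):
    idx = rng.randint(0, n, n)                 # bootstrap indices (sampling with replacement)
    oob_mask = np.ones(n, dtype=bool)
    oob_mask[idx] = False                      # patterns left out of this bootstrap sample
    clf = DecisionTreeClassifier(random_state=i).fit(X[idx], y[idx])
    if oob_mask.any():
        pred = clf.predict(X[oob_mask])
        votes[np.flatnonzero(oob_mask), pred] += 1   # vote only where the pattern was out-of-bootstrap

covered = votes.sum(axis=1) > 0                # patterns that received at least one OOB vote
oob_error = np.mean(votes[covered].argmax(axis=1) != y[covered])
print("OOB error estimate:", round(oob_error, 3))
```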
“…The measures proposed in this chapter have been used to select the optimal number of features of an ensemble of MLP classifiers [27]. In [28], a multi-dimensional feature-ranking criterion based on modulus of MLP weights identifies the least relevant features.…”
Section: Discussion (mentioning)
confidence: 99%
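As one possible reading of the weight-based ranking mentioned in the last excerpt (an assumption, not the exact criterion of [28]), a feature's relevance can be scored by the summed modulus of the first-layer MLP weights connected to it, with the smallest score marking the least relevant feature:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=12, n_informative=4, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0).fit(X, y)

# mlp.coefs_[0] has shape (n_features, n_hidden); sum |w| over hidden nodes for each feature.
relevance = np.abs(mlp.coefs_[0]).sum(axis=1)
ranking = np.argsort(relevance)                 # least relevant feature first
print("Least relevant feature index:", ranking[0])
print("Relevance scores:", np.round(relevance, 2))
```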