Evaluating Feature Selection Algorithms (2002)
DOI: 10.1007/3-540-36079-4_19

Cited by 16 publications (3 citation statements)
References 5 publications

“…To learn a performance-influence model, we use a domain-independent combination of multivariate regression and forward feature selection: we use multivariate regression to determine the coefficients of the different terms of a given model (e.g., 3.01 for term π₁) and forward feature selection to select the terms that are iteratively added to the model. The general idea of the learning procedure is to start with a simple model and to iteratively expand it into a more complex but also more accurate model.…”
Section: Performance-Influence Models (mentioning, confidence: 99%)
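
The procedure quoted above combines two generic ingredients, so it can be sketched compactly. Below is a minimal, hypothetical Python sketch, not the cited authors' implementation: candidate terms are assumed to be columns of a matrix X, ordinary least squares stands in for the multivariate regression step, and the function name forward_select and the stop-on-no-improvement rule are my own assumptions.

```python
# A minimal sketch, assuming the candidate terms of the model are columns
# of a matrix X. Ordinary least squares stands in for the multivariate
# regression step; the names and the stopping rule are illustrative.
import numpy as np

def fit_error(X_sub, y):
    """Fit coefficients by least squares; return (training MSE, coefficients)."""
    coef, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
    residual = y - X_sub @ coef
    return float(np.mean(residual ** 2)), coef

def forward_select(X, y, max_terms=5):
    """Start from an empty model; greedily add the term that most reduces
    the error, growing a simple model into a more complex, more accurate one."""
    selected, best_err = [], np.inf
    while len(selected) < max_terms:
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        err, j = min((fit_error(X[:, selected + [j]], y)[0], j) for j in candidates)
        if err >= best_err:        # no candidate improves the model: stop
            break
        selected.append(j)
        best_err = err
    return selected, fit_error(X[:, selected], y)[1]

# Toy example: y depends only on terms 0 and 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.01 * rng.normal(size=100)
print(forward_select(X, y))        # expected: terms [0, 2], coefficients ≈ [3, -2]
```
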
“…They efficiently traverse the space of subsets by adding and deleting basis functions, using an evaluation measure that directs the search into areas of increased performance. Considering [7,8,9], in order to characterize a heuristic search problem one must define the following: 1) the initial state of the search; 2) the available state-transition operators; 3) the search strategy; 4) the evaluation measure; 5) the termination condition. In the subset-selection approach for polynomial regression, the initial states are typically models that correspond to the empty subset, the subset containing only the intercept term, the full subset of all defined basis functions, or a randomly chosen subset; the typical state-transition operators are the addition and deletion of a basis function; the typical search strategy is hill climbing [2,3,7], which in combination with the empty-subset initial state and the addition operator becomes SFS, but in combination with the full-subset initial state and the deletion operator becomes SBS; the classical evaluation measures are statistical significance tests [1], however, currently two other strategies predominate: employment of complexity penalization criteria (e.g.…”
Section: Subset Selection (mentioning, confidence: 99%)
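
The five components listed in this excerpt (initial state, operators, strategy, evaluation measure, termination) map directly onto a small hill-climbing routine. The sketch below is an illustrative assumption, not taken from the cited references: the evaluation measure is a caller-supplied function score (higher is better), and termination happens when no move improves it.

```python
# A sketch of subset selection as heuristic search, assuming a caller-supplied
# evaluation measure `score` (higher is better). Names, the greedy move choice,
# and the no-improvement termination rule are illustrative assumptions.
def hill_climb(n_features, score, initial, operator):
    state = set(initial)                   # 1) initial state
    best = score(state)
    while True:
        if operator == "add":              # 2) state-transition operator
            moves = [state | {j} for j in range(n_features) if j not in state]
        else:
            moves = [state - {j} for j in state]
        if not moves:                      # 5) termination: no moves left
            return state, best
        cand = max(moves, key=score)       # 3) strategy: greedy hill climbing
        if score(cand) <= best:            # 5) termination: no improvement
            return state, best
        state, best = cand, score(cand)    # 4) evaluation measure guides search

def sfs(n_features, score):
    """Empty initial subset + addition operator = sequential forward selection."""
    return hill_climb(n_features, score, initial=(), operator="add")

def sbs(n_features, score):
    """Full initial subset + deletion operator = sequential backward selection."""
    return hill_climb(n_features, score, initial=range(n_features), operator="delete")

# Toy evaluation measure: prefer subsets close to the "true" subset {0, 2}.
target = {0, 2}
score = lambda s: -len(s ^ target)
print(sfs(4, score))                       # -> ({0, 2}, 0)
print(sbs(4, score))                       # -> ({0, 2}, 0)
```

The two wrapper functions make the excerpt's point concrete: SFS and SBS are the same search procedure differing only in the initial state and the operator.
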
“…The sequential floating forward selection (SFFS) developed by Pudil et al. (1994) was chosen as the second method because it needs only one parameter to be specified and has been shown in several evaluation studies to be as good as or better than other selection techniques (Reunanen 2003; Molina et al. 2002).…”
(mentioning, confidence: 99%)
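
SFFS augments plain forward selection with conditional backward ("floating") steps. The sketch below follows the general scheme attributed to Pudil et al. (1994) but is a simplified, hypothetical rendering rather than a faithful reproduction: score stands in for the evaluation measure, d is the single parameter the excerpt mentions (the target subset size), and the best_of_size bookkeeping is my own formulation.

```python
# A simplified, hypothetical rendering of SFFS in the spirit of Pudil et al.
# (1994): `score` is the evaluation measure (higher is better), `d` is the
# target subset size, and best_of_size records the best subset seen per size.
def sffs(n_features, score, d):
    best_of_size = {}                      # subset size -> (score, subset)
    state = set()
    while len(state) < d:
        # Forward step: add the single most promising feature.
        j = max((j for j in range(n_features) if j not in state),
                key=lambda j: score(state | {j}))
        state = state | {j}
        if score(state) > best_of_size.get(len(state), (float("-inf"),))[0]:
            best_of_size[len(state)] = (score(state), set(state))
        # Floating (conditional backward) steps: keep removing a feature while
        # doing so beats the best subset previously recorded at that size.
        while len(state) > 2:
            k = max(state, key=lambda k: score(state - {k}))
            if score(state - {k}) > best_of_size.get(len(state) - 1, (float("-inf"),))[0]:
                state = state - {k}
                best_of_size[len(state)] = (score(state), set(state))
            else:
                break
    return best_of_size[d][1]

# Toy run: recover the "true" subset {1, 3, 4} of size d=3.
target = {1, 3, 4}
print(sffs(6, lambda s: -len(s ^ target), d=3))   # -> {1, 3, 4}
```

The floating steps are what distinguish SFFS from plain SFS: a feature added early can be dropped later if a better subset of the same size turns up, which is why the method often matches or beats the non-floating variants in the evaluation studies cited above.
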