2011
DOI: 10.1007/978-3-642-25832-9_14
Sequential Feature Selection for Classification

Abstract: In most real-world information processing problems, data is not a free resource; its acquisition is rather time-consuming and/or expensive. We investigate how these two factors can be included in supervised classification tasks by deriving classification as a sequential decision process and making it accessible to Reinforcement Learning. Our method performs a sequential feature selection that learns which features are most informative at each timestep, choosing the next feature depending on the alrea…
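The abstract describes classification as a sequential decision process: features are acquired one at a time, and the choice of the next feature depends on those already seen. As a rough illustration only, the sketch below implements a greedy forward variant of that idea with scikit-learn; the paper's actual method learns the selection policy with Reinforcement Learning, and the dataset, classifier, and stopping rule here are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: greedy sequential (forward) feature selection.
# At each step, the next feature is chosen conditioned on the
# features already selected -- a simple stand-in for the learned
# policy described in the abstract.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
selected = []                      # features acquired so far
remaining = list(range(X.shape[1]))

while remaining:
    # Score each candidate feature given the features chosen so far.
    scores = {
        f: cross_val_score(LogisticRegression(max_iter=1000),
                           X[:, selected + [f]], y, cv=5).mean()
        for f in remaining
    }
    best = max(scores, key=scores.get)
    # Stop when adding the best candidate no longer improves accuracy.
    if selected and scores[best] <= cross_val_score(
            LogisticRegression(max_iter=1000),
            X[:, selected], y, cv=5).mean():
        break
    selected.append(best)
    remaining.remove(best)

print("selected features:", selected)
```

Unlike the greedy loop above, the RL formulation in the paper can trade off expected accuracy against per-feature acquisition cost when picking the next action.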

Cited by 62 publications (45 citation statements)
References 8 publications
“…A sequential backward selection [11] strategy was applied to carefully select a small group of significant features from X . Then, an inner SVM was wrapped into the feature selection framework to evaluate the predictive accuracy for candidate subset of features using a leave-one-out cross validation.…”
Section: Methods (mentioning)
confidence: 99%
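The citing work above combines sequential backward selection with an inner SVM wrapper scored by leave-one-out cross-validation. A minimal sketch of that setup, assuming an illustrative dataset, a linear-kernel SVM, and a target subset size of five (none of which come from the cited paper):

```python
# Hedged sketch: sequential backward selection wrapping an SVM,
# with candidate subsets scored by leave-one-out cross-validation.
from sklearn.datasets import load_wine
from sklearn.model_selection import LeaveOneOut, cross_val_score, train_test_split
from sklearn.svm import SVC

# Subsample (stratified) to keep leave-one-out evaluation cheap.
X, y = load_wine(return_X_y=True)
X, _, y, _ = train_test_split(X, None if False else X, train_size=60,
                              stratify=y, random_state=0)[0], None, None, None
# -- the line above is awkward; the straightforward form is used below.

X, y = load_wine(return_X_y=True)
X, _, y, _ = train_test_split(X, y, train_size=60, stratify=y, random_state=0)

features = list(range(X.shape[1]))  # start from the full feature set

def loo_score(cols):
    """Leave-one-out accuracy of a linear SVM on the given columns."""
    return cross_val_score(SVC(kernel="linear"), X[:, cols], y,
                           cv=LeaveOneOut()).mean()

target_size = 5  # desired size of the "small group" of features
while len(features) > target_size:
    # Remove the feature whose absence hurts LOO accuracy the least.
    worst = max(features,
                key=lambda f: loo_score([c for c in features if c != f]))
    features.remove(worst)

print("selected subset:", sorted(features))
```

Backward selection starts from the full model and prunes, which is why it pairs naturally with a wrapper evaluation: every candidate subset is judged by the same inner classifier that will eventually be deployed.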
“…The most similar Reinforcement Learning works are the paper by Ji and Carin (2007) and the (still unpublished) paper by Rückstieß et al (2011) which proposes MDP models for cost-sensitive classification. Both of these papers have formalizations that are similar to ours, yet concentrate on cost-sensitive problems.…”
Section: Cost-Sensitive Classification (mentioning)
confidence: 99%
“…After analyzing the correlations of the variables in the dataset and adding the dichotomic variables for the next analysis, we proceeded to select the variables that best influence the diagnosis of the Metabolic Syndrome, to reduce the dimensions of the dataset to a subset. For this purpose, we used Sequential Feature Selection, which searches for a subset of the features in the full model [47] with comparable predictive power; the resulting variables are described in Table 6, in conjunction with the dichotomous variables described in Figure 7.…”
Section: Discussion (mentioning)
confidence: 99%
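The last citing work uses sequential feature selection purely for dimensionality reduction: search for a small subset of the full model's features with comparable predictive power. One common off-the-shelf implementation of that search is scikit-learn's `SequentialFeatureSelector`; the dataset, estimator, and subset size below are illustrative choices, not details of the cited study.

```python
# Hedged sketch: forward sequential feature selection with
# scikit-learn, keeping a fixed-size subset of the full feature set.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

sfs = SequentialFeatureSelector(
    # Scaling keeps the logistic regression well-conditioned.
    make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    n_features_to_select=5,   # keep a 5-feature subset
    direction="forward",
    cv=3,
)
sfs.fit(X, y)
print("kept feature indices:", sfs.get_support(indices=True))
```

`direction="backward"` gives the pruning variant discussed in the Methods citations above; both directions evaluate each candidate subset by cross-validated score of the wrapped estimator.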