2007
DOI: 10.1007/s10844-007-0037-0

Consistency measures for feature selection

Abstract: The use of feature selection can improve the accuracy, efficiency, applicability and understandability of a learning process. For this reason, many methods of automatic feature selection have been developed. Some of these methods are based on the search for a subset of features that allows the data set to be considered consistent. In a search problem we usually evaluate the search states; in the case of feature selection, we evaluate the candidate feature subsets. This paper reviews the state of the art of consistency b…

Cited by 84 publications (38 citation statements) | References 28 publications
“…there should be no conflicts between the objects described by similar features [27]. A data set, described by a subset of features, is considered inconsistent if there exist at least two objects in it that are identical on those features but differ in their class labels.…”
Section: Feature Selection Algorithms
confidence: 99%
“…A typical approach to solving this problem consists in using a decision tree algorithm [4], applied either to the original data set or to a data set reduced by a feature (or attribute) selection method [5]. Feature selection has become increasingly common in classification and regression applications in genomics, health sciences, economics, and finance, among other fields (see, e.g., [6], [7]). Feature selection is an independent process whose main objective is to reduce the dimension of the data set (the "number of columns") so as to obtain a classification that is more efficient and more easily interpretable, and also more accurate because the noise introduced by irrelevant features has been removed.…”
Section: Introduction
confidence: 99%
“…The best subset found is selected when the search stops. According to [8], consistency and correlation [9,10] are the evaluation measures that most effectively reduce irrelevance and redundancy. A consistency measure evaluates how far a feature subset is from yielding consistent class labels.…”
Section: Filter Feature Selection Methods
confidence: 99%
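One common way to quantify the "distance from consistency" mentioned above is the inconsistency rate: group the rows by their values on the selected features, count in each group the rows that do not belong to the group's majority class, and divide the total by the number of rows. The sketch below is an illustration under that formulation, not code from the reviewed paper; the helper name is my own.

```python
from collections import Counter, defaultdict

def inconsistency_rate(rows, labels, feature_idx):
    """Fraction of rows that fall outside the majority class of their
    projected-feature group. 0.0 means the subset keeps the data set
    fully consistent; larger values mean more label conflicts."""
    groups = defaultdict(Counter)  # feature pattern -> label counts
    for row, label in zip(rows, labels):
        groups[tuple(row[i] for i in feature_idx)][label] += 1
    inconsistent = sum(sum(c.values()) - max(c.values())
                       for c in groups.values())
    return inconsistent / len(rows)

rows = [(0, 1), (0, 1), (0, 1), (1, 0)]
labels = ['+', '+', '-', '-']
# The pattern (0, 1) covers three rows with majority class '+',
# so one row is counted as inconsistent: rate = 1/4.
print(inconsistency_rate(rows, labels, [0, 1]))  # 0.25
```

A filter method would compute this rate for each candidate subset and prefer subsets whose rate is zero (or below a tolerance), breaking ties toward smaller subsets.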