2017
DOI: 10.1007/978-3-319-67588-6_1
Advances in Feature Selection for Data and Pattern Recognition: An Introduction

Cited by 18 publications (13 citation statements)
References 25 publications
“…Feature selection is one of the stages of data preprocessing: it identifies and selects a subset of F features from the original D features (F < D) without any transformation [57]. In supervised learning, feature selection attempts to maximize the accuracy of the classifier while minimizing the related measurement costs by removing irrelevant and possibly redundant features [5,40,45,26,46,68,35,37,50,1]. Feature selection reduces the complexity and the associated computational cost and improves the probability that a solution will be comprehensible and realistic.…”
Section: Feature Selection
confidence: 99%
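The subset-selection idea quoted above (keeping F of the original D features without transforming them) can be sketched as a simple filter-style ranking. The relevance score used here (absolute Pearson correlation of each feature with the class label) is an illustrative assumption, not the chapter's own method:

```python
# Filter-style feature selection sketch: rank the D original features by a
# relevance score and keep the top F, with no transformation of the retained
# features. The score (|Pearson correlation| with the label) is illustrative.

def select_features(X, y, F):
    """X: list of instances (each a list of D feature values),
    y: binary class labels, F: number of features to keep (F < D).
    Returns the indices of the F highest-scoring features, sorted."""
    n, D = len(X), len(X[0])
    mean_y = sum(y) / n

    def score(j):
        col = [row[j] for row in X]
        mean_x = sum(col) / n
        cov = sum((col[i] - mean_x) * (y[i] - mean_y) for i in range(n))
        var_x = sum((v - mean_x) ** 2 for v in col)
        var_y = sum((v - mean_y) ** 2 for v in y)
        if var_x == 0 or var_y == 0:
            return 0.0  # constant feature (or constant label): irrelevant
        return abs(cov) / (var_x * var_y) ** 0.5

    ranked = sorted(range(D), key=score, reverse=True)
    return sorted(ranked[:F])

# Toy data: feature 0 tracks the label, feature 2 is constant noise.
X = [[1, 5, 3], [0, 2, 3], [1, 9, 3], [0, 4, 3], [1, 1, 3], [0, 8, 3]]
y = [1, 0, 1, 0, 1, 0]
print(select_features(X, y, 1))  # → [0]
```

Wrapper and embedded approaches discussed in the chapter would instead score whole feature subsets via the learner itself; the filter above is only the cheapest variant of the idea.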
“…Feature selection methods reduce the dimensionality of datasets by removing features that are considered irrelevant or noisy for the learning task. This topic has received a lot of attention in the machine learning and pattern recognition communities [4,46,68,35,37,50,1]. In any dataset, data can be seen as a collection of data points called instances.…”
Section: Introduction
confidence: 99%
“…Various strategies can be applied to creating a committee, such as bootstrap resampling of the training data [40,41]. We decided to set up a committee of SVM classifiers trained using different hyperparameter values, since the behaviour of the SVM strongly varies with its cost and bandwidth parameters, C and γ [42].…”
Section: Query By Committee
confidence: 99%
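The query-by-committee mechanism quoted here can be sketched in a few lines: each committee member votes on every unlabelled instance, and the instance with the highest vote entropy (greatest disagreement) is queried. To keep the sketch dependency-free, the committee members below are simple threshold classifiers standing in for SVMs trained with different (C, γ) settings; the pool, thresholds, and disagreement measure are illustrative assumptions, not the cited paper's setup:

```python
# Query-by-committee sketch: pick the pool instance on which a committee
# of classifiers disagrees most, measured by the vote entropy of their
# predicted labels. Committee members differ only in one hyperparameter
# (the threshold), analogous to SVMs trained with different C and gamma.
from collections import Counter
from math import log2

def vote_entropy(votes):
    """Shannon entropy (bits) of the committee's label votes for one instance."""
    counts = Counter(votes)
    total = len(votes)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def query_by_committee(committee, pool):
    """Return the unlabelled instance with maximal committee disagreement."""
    return max(pool, key=lambda x: vote_entropy([clf(x) for clf in committee]))

# Three threshold classifiers; the t=t default binds each threshold.
committee = [lambda x, t=t: int(x > t) for t in (0.3, 0.5, 0.7)]
pool = [0.1, 0.4, 0.6, 0.9]
print(query_by_committee(committee, pool))  # → 0.4 (votes split 1 vs 2)
```

Instances far from every decision boundary get unanimous votes (entropy 0) and are never queried, which is exactly the labelling-cost saving the active-learning setting aims for.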