2018
DOI: 10.1051/matecconf/201815006006
A Comparative Study of Feature Selection Techniques for Bat Algorithm in Various Applications

Abstract: Feature selection is the process of choosing the best features from among the huge number of features in a dataset. The challenge in feature selection is to select a subset that performs well under a given classifier. To produce better classification results, feature selection has been applied in many classification tasks as part of the preprocessing step, where only a subset of features is used rather than all the features of a particular dataset. This procedure not only reduces the irrelevant features…
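The preprocessing idea in the abstract — keep only a subset of features before handing the data to a classifier — can be sketched with a simple filter-style selector. This is an illustrative sketch only: the paper itself compares metaheuristic selectors such as the Bat Algorithm, not this correlation ranking, and the dataset below is invented for the example.

```python
# Minimal filter-style feature selection: rank features by absolute
# Pearson correlation with the class label and keep the top k.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences (0.0 if degenerate)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_top_k(rows, labels, k):
    """Return indices of the k features most correlated with the labels."""
    n_features = len(rows[0])
    scores = [abs(pearson([r[j] for r in rows], labels))
              for j in range(n_features)]
    return sorted(range(n_features), key=lambda j: scores[j], reverse=True)[:k]

# Toy dataset: feature 0 tracks the label, feature 1 is mostly noise.
X = [[1.0, 0.3], [2.0, 0.1], [3.0, 0.4], [4.0, 0.2]]
y = [0, 0, 1, 1]
print(select_top_k(X, y, 1))  # → [0]
```

A filter like this scores features independently of any classifier; the wrapper approaches discussed in the citing papers instead evaluate each candidate subset with the classifier itself.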

Cited by 10 publications (6 citation statements) · References 10 publications
“…Feature selection approaches can be divided into two categories. The first decomposes the search space into four classes: "exhaustive", "random", "heuristic", and "meta-heuristic". The other is the strategy-based techniques, which are decomposed into filter and wrapper feature selection approaches [19][20][21][22][23].…”
Section: Feature Selection Methods Via Metaheuristics
confidence: 99%
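The quote above lists "exhaustive" as one of the four search classes and wrappers as one of the two strategy-based families. A hedged sketch combining the two — brute-force search over every feature subset, scored by a classifier — is shown below. The 1-nearest-neighbour classifier, the leave-one-out scoring, and the toy dataset are all assumptions made for illustration; metaheuristics such as the Bat Algorithm exist precisely to replace this exponential loop on realistic feature counts.

```python
# Wrapper-style selection via exhaustive search: score every non-empty
# feature subset with leave-one-out 1-NN accuracy and keep the best.
from itertools import combinations

def loo_1nn_accuracy(X, y, subset):
    """Leave-one-out accuracy of 1-NN restricted to the given feature subset."""
    correct = 0
    for i in range(len(X)):
        best_j, best_d = None, float("inf")
        for j in range(len(X)):
            if j == i:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in subset)
            if d < best_d:
                best_j, best_d = j, d
        correct += (y[best_j] == y[i])
    return correct / len(X)

def wrapper_select(X, y):
    """Exhaustively evaluate all non-empty subsets; return the best one."""
    n = len(X[0])
    subsets = [s for r in range(1, n + 1) for s in combinations(range(n), r)]
    return max(subsets, key=lambda s: loo_1nn_accuracy(X, y, s))

# Toy dataset: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 5.0], [0.1, 1.0], [1.0, 4.9], [1.1, 0.8]]
y = [0, 0, 1, 1]
print(wrapper_select(X, y))  # → (0,)
```

With n features there are 2^n - 1 subsets, so exhaustive wrappers are only feasible for very small n; this is the motivation for the heuristic and meta-heuristic search classes the quote names.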
“…Feature selection is one of the stages of preprocessing the data, through the identification and selection of a subset of F features from the original data of D features (F < D) without any transformation [57]. In the domain of supervised learning, feature selection attempts to maximize the accuracy of the classifier while minimizing the related measurement costs by reducing irrelevant and possibly redundant features [5,40,45,26,46,68,35,37,50,1]. Feature selection reduces the complexity and the associated computational cost and improves the probability that a solution will be comprehensible and realistic.…”
Section: Feature Selection
confidence: 99%
“…Feature selection methods reduce the dimensionality of datasets by removing features that are considered irrelevant or noisy for the learning task. This topic has received a lot of attention in the machine learning and pattern recognition communities [4,46,68,35,37,50,1]. In any dataset, data can be seen as a collection of data points called instances.…”
Section: Introduction
confidence: 99%