Proceedings of the Fourth International Workshop on Knowledge Discovery, Knowledge Management and Decision Support 2013
DOI: 10.2991/.2013.42

Analytical and Experimental Study of Filter Feature Selection Algorithms for High-dimensional Datasets

Abstract: In this paper a new taxonomy for feature selection algorithms created for high-dimensional datasets is proposed. Also, several selectors are described, analyzed and evaluated. It was observed that the Cfs-SFS algorithm reached the best solutions in most of the cases. Nevertheless, its application in very high-dimensional datasets is not recommended due to its computational cost. Cfs-BARS, Cfs-IRU and MRMR algorithms have similar results to those of Cfs-SFS, but in a relatively lesser time. The INTERACT algorit…

Cited by 3 publications (2 citation statements) | References 29 publications
“…The optimal solution is to use the dimension reduction method as a data preprocessing step to reduce complexity and to eliminate redundant and irrelevant features in high-dimensional data. According to Pino and Morell [7], feature selection has been an ever-evolving problem due to the rise of big data in recent years. Feature selection aims to find the smallest number of essential features in high-dimensional data, i.e., the best feature subset with the fewest dimensions that improves classification accuracy [8].…”
Section: Literature Review, 2.1 Dimension Reduction
confidence: 99%
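Filter-style selectors of the kind surveyed in the paper score each feature independently of any classifier and keep the top-ranked subset. As a minimal illustrative sketch (not code from the paper — `filter_select` and the correlation-based score are assumptions chosen for brevity), ranking features by absolute correlation with the class looks like this:

```python
def pearson(x, y):
    """Pearson correlation between two numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def filter_select(X, y, k):
    """Score every feature column by |corr(feature, class)| and keep the top k.

    X: list of samples, each a list of numeric feature values; y: numeric labels.
    Returns the indices of the k best-scoring features.
    """
    n_features = len(X[0])
    scores = [abs(pearson([row[j] for row in X], y)) for j in range(n_features)]
    return sorted(range(n_features), key=lambda j: scores[j], reverse=True)[:k]

# Toy data: feature 0 tracks the label, feature 1 is constant noise
X = [[1.0, 5.0], [2.0, 5.0], [3.0, 5.0], [4.0, 5.0]]
y = [0, 0, 1, 1]
print(filter_select(X, y, 1))  # [0]
```

Because the score is computed per feature, the cost grows linearly in the number of features — the property that makes filter methods attractive for high-dimensional data, at the price of ignoring feature interactions (which subset-based selectors such as Cfs-SFS try to capture).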
“…It is one of the standard methods for feature selection. The purpose of these techniques is to discard irrelevant or redundant features. In information gain feature selection, the entropy value is calculated for the whole data [12]. It is a supervised, univariate, simple, powerful, symmetrical and entropy-based feature selection algorithm.…”
Section: Information Gain Feature Selection
confidence: 99%
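The entropy-based score described above is the standard information gain: the class entropy minus the class entropy conditioned on a feature's values. As a self-contained sketch (the variable names and toy data are illustrative, not from the cited work):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(class; feature) = H(class) - H(class | feature)."""
    n = len(labels)
    # Partition the labels by each distinct feature value
    partitions = {}
    for v, y in zip(feature_values, labels):
        partitions.setdefault(v, []).append(y)
    conditional = sum(len(part) / n * entropy(part)
                      for part in partitions.values())
    return entropy(labels) - conditional

# Toy data: 'outlook' perfectly splits the class, 'noise' carries no signal
outlook = ["sun", "sun", "rain", "rain"]
noise   = ["a", "b", "a", "b"]
play    = ["yes", "yes", "no", "no"]
print(information_gain(outlook, play))  # 1.0  (fully informative)
print(information_gain(noise, play))    # 0.0  (uninformative)
```

This matches the "univariate" and "symmetrical" description: each feature is scored on its own, and the gain depends only on the joint distribution of feature and class.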