2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla52953.2021.00087
Feature Subset Selection based on Redundancy Maximized Clusters

Cited by 1 publication (2 citation statements). References 26 publications.
“…There are numerous kinds of feature selection/extraction techniques available to identify those features. Feature selection is the process of selecting relevant, important features F S and removing irrelevant features from a feature set F. Selecting relevant features and eliminating irrelevant ones reduces time and improves classification performance [2], [3], [34], [35].…”
Section: Introduction
confidence: 99%
“…distance, consistency, dependency, correlation and mutual information (MI) [6], [9]. Among these measures, MI is more popular than the others because of its ability to capture both non-linear and linear relations between features in the dataset, and it can be used with categorical as well as numerical values [6], [9], [10], [34], [35]. Several MI-based methods have been proposed over time.…”
Section: Introduction
confidence: 99%
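As a rough illustration of the MI criterion the citing papers refer to, the sketch below estimates mutual information between a discrete feature and the class label from sample counts. This is a generic plug-in estimator, not the paper's own method; the function name and toy data are illustrative assumptions.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples.

    A feature with high I(X;Y) is informative about the label Y;
    a feature with I(X;Y) near zero is a removal candidate.
    """
    n = len(xs)
    px = Counter(xs)            # marginal counts of the feature
    py = Counter(ys)            # marginal counts of the label
    pxy = Counter(zip(xs, ys))  # joint counts
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
        mi += p_xy * log2(p_xy * n * n / (px[x] * py[y]))
    return mi

# Toy dataset: f1 duplicates the label (fully relevant),
# f2 is independent of it (irrelevant).
labels = [0, 0, 1, 1, 0, 0, 1, 1]
f1     = [0, 0, 1, 1, 0, 0, 1, 1]
f2     = [0, 1, 0, 1, 0, 1, 0, 1]

print(mutual_information(f1, labels))  # 1.0 bit: keep this feature
print(mutual_information(f2, labels))  # 0.0 bits: candidate for removal
```

A filter-style selector would rank features by this score and keep the top ones, possibly penalizing redundancy between already-selected features, which is the direction the cited MI-based methods take.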