2012
DOI: 10.1142/s0218488512500195

Feature Selection and Granularity Learning in Genetic Fuzzy Rule-Based Classification Systems for Highly Imbalanced Data-Sets

Abstract: This paper proposes a Genetic Algorithm for jointly performing feature selection and granularity learning for Fuzzy Rule-Based Classification Systems in the scenario of highly imbalanced data-sets. We refer to data-sets as imbalanced when the class distribution is not uniform, a situation that is present in many real application areas. The aim of this work is to obtain more compact models by selecting the adequate variables and adapting the number of fuzzy labels for each problem, improving the interpre…
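The joint search described in the abstract can be pictured as a chromosome with two parts per input variable: a selection bit (feature selection) and an integer giving that variable's number of fuzzy labels (granularity learning). The sketch below is an assumption-level illustration of such an encoding; the variable count, label bounds, and function names are made up and are not the authors' exact scheme.

```python
import random

# Illustrative joint chromosome: per variable, a boolean "selected" gene
# plus an integer "granularity" gene (number of fuzzy labels).
# N_VARIABLES and the label bounds are assumptions for this sketch.
N_VARIABLES = 6
MIN_LABELS, MAX_LABELS = 2, 7

def random_chromosome(rng):
    selected = [rng.random() < 0.5 for _ in range(N_VARIABLES)]
    granularity = [rng.randint(MIN_LABELS, MAX_LABELS) for _ in range(N_VARIABLES)]
    return selected, granularity

def decode(chromosome):
    """Return {variable index: number of fuzzy labels} for selected variables only."""
    selected, granularity = chromosome
    return {i: g for i, (s, g) in enumerate(zip(selected, granularity)) if s}

rng = random.Random(42)
chrom = random_chromosome(rng)
print(decode(chrom))
```

A fitness function would then build a fuzzy partition with the decoded granularities over the selected variables and score the resulting rule base, so discarded variables and coarse partitions directly yield the more compact models the abstract aims for.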

Cited by 32 publications (32 citation statements)
References 52 publications
“…In [90], a genetic procedure for learning the KB in imbalanced datasets, GA-FS+GL, is proposed. In this case, the SMOTE algorithm is again used to balance the training set.…”
Section: EFS and Data-level Approaches
confidence: 99%
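The balancing step this statement refers to can be illustrated with a minimal SMOTE-style sketch: each synthetic minority example is an interpolation between a minority sample and one of its k nearest minority neighbours. This is a hand-rolled illustration, not the paper's code; real work would use a tested implementation such as the one in the imbalanced-learn library.

```python
import random

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def smote(minority, n_synthetic, k=3, rng=None):
    """Generate n_synthetic points by interpolating minority samples
    toward one of their k nearest minority neighbours (SMOTE-style)."""
    rng = rng or random.Random()
    synthetic = []
    for _ in range(n_synthetic):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: euclidean(base, p))[:k]
        neigh = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, neigh)))
    return synthetic

# Toy minority class in 2-D; the data values are made up for illustration.
minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_points = smote(minority, n_synthetic=4, rng=random.Random(0))
print(len(new_points))  # 4 synthetic minority examples
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled training set stays inside the minority region rather than duplicating examples verbatim.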
See 1 more Smart Citation
“…In [90], a genetic procedure for learning the KB in imbalanced datasets, GA-FS+GL, is proposed. In this case, the SMOTE algorithm is again used to balance the training set.…”
Section: Efs and Data-level Approachesmentioning
confidence: 99%
“…Finally, in [110], three EFS systems for imbalanced classification are compared: the GA-FS+GL method described in [90], the GP-COACH-H algorithm presented in [19], and the MOEFS developed in [107]. The authors use 22 datasets to perform this comparison, with the MOEFS approach achieving the best performance, as supported by a Holm test.…”
Section: EFS and Ensemble Learning
confidence: 99%
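The Holm test mentioned in this comparison is a step-down multiple-comparison correction: p-values are sorted ascending and the i-th smallest is tested against alpha/(m − i + 1), stopping at the first failure. A minimal sketch, with made-up p-values purely for illustration:

```python
def holm(p_values, alpha=0.05):
    """Holm step-down correction: return a reject/accept flag per hypothesis."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    rejected = [False] * len(p_values)
    for rank, i in enumerate(order):
        # rank 0 is tested at alpha/m, rank 1 at alpha/(m-1), and so on.
        if p_values[i] <= alpha / (len(p_values) - rank):
            rejected[i] = True
        else:
            break  # step-down: once one test fails, all larger p-values fail
    return rejected

print(holm([0.01, 0.04, 0.03]))  # [True, False, False]
```

With three hypotheses, 0.01 passes the strictest threshold (0.05/3), but 0.03 fails 0.05/2, so it and the larger 0.04 are both retained; this stepwise tightening is what lets the comparison in [110] declare one method significantly best.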
“…Several papers, mainly related to MOEFSs, adopt multiobjective evolutionary algorithms for rule selection (Gacto et al. 2010) (e.g., from an initial RB heuristically generated) or rule learning (Cococcioni et al. 2007), DB tuning (Botta et al. 2009), and rule learning/selection together with DB learning (in particular, partition granularity and membership function parameters) (Villar et al. 2012; Antonelli et al. 2009a, b). As regards other formalisms for representing information granules, in recent years a number of evolutionary-based approaches, mainly based on single-objective optimization, have been proposed (Castillo and Melin 2012a, b; Sanz et al. 2010, 2011, 2013, 2015).…”
Section: Introduction
confidence: 99%
“…It is well known that a large number of features can degrade the discovery of the borderline areas of the problem [39], either because some of these variables may be redundant or because they do not show good synergy among them. For this reason, some works on the topic have proposed the use of feature selection for imbalanced datasets in order to overcome this problem [54] and to diminish the effect of overlapping [13]. However, the application of feature selection might be too aggressive, and therefore some potentially useful features could be discarded.…”
Section: Introduction
confidence: 99%
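The redundancy problem this passage describes can be shown with a toy filter-style selector: drop a feature when it is almost perfectly correlated with one already kept. The threshold and the data values below are illustrative assumptions, not anything from the cited works.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def drop_redundant(features, threshold=0.95):
    """features: dict name -> list of values; keep a feature only if it is
    not near-perfectly correlated with any feature already kept."""
    kept = []
    for name, values in features.items():
        if all(abs(pearson(values, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

data = {
    "f1": [1, 2, 3, 4, 5],
    "f2": [2, 4, 6, 8, 10],   # exact multiple of f1 -> redundant
    "f3": [5, 3, 6, 1, 4],
}
print(drop_redundant(data))  # ['f1', 'f3']
```

This also makes the passage's caveat concrete: a hard threshold discards f2 entirely, even though a correlated feature can still carry some complementary information near the class borderline.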