2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) 2018
DOI: 10.1109/fuzz-ieee.2018.8491440
A first approach towards the usage of classifiers’ performance to create fuzzy measures for ensembles of classifiers: a case study on highly imbalanced datasets

Cited by 6 publications (6 citation statements)
References 25 publications
“…Some of the literature was on HIB data and the rest was on HIMC data. The related HIB data came from the data mining domain, and was grouped according to these algorithms: SR 0-1 LOSS [15], F-BFR [156], T-BFS [157], K-SUB [64], PSU [102], C4.5N [158], DWCE [159], ECSM [72], GP-COACH [160], Fuzzy [161], [162], OSPREY [163], Chi Method, CNN [164], WL-Norm SVM [165], EAIS + Fuzzy [166], GA + Fuzzy [167], US + Ensemble (Data Hardness) [15], Clustering + WS [66], DBE-DCR [73], EUBoost [168], GA-FS-GL [169], GSVM-RU [170], GA-GL+FRBC [171], K-Means + HFS [172], REPMAC k-Means + SVM + DT [173], SwitchingNED [70], SVM-US [174], B-BFS [67].…”
Section: A Related Study on HIMC Data Framework
confidence: 99%
“…Nonetheless, the only research using a DL strategy focused on the data mining domain.

Data Mining: C4.5N [158], DWCE [159], ECSM [72], GP-COACH [160], Fuzzy [161], [162], OSPREY [163], Chi Method, CNN [164], WL-Norm SVM [165]
Data Mining: EAIS + Fuzzy [166], GA + Fuzzy [167], US + Ensemble (Data Hardness) [15], Clustering + WS [66], DBE-DCR [73], EUBoost [168], GA-FS-GL [169], GSVM-RU [170], GA-GL+FRBC [171], K-Means + HFS [172], REPMAC k-Means + SVM + DT [173], SwitchingNED [70], SVM-US [174], B-BFS [67]
Data Mining: NRA [175]
Fraud Detection: BERT [19]
Malware Detection: Ensemble + RF [68]
Disease Prediction: SSFS [176]
Phishing Detection: CNN [177], ECDL [178]
Medical Imaging: DS + ST [179]
Hospital Admission: …”
Section: A Related Study on HIMC Data Framework
confidence: 99%
“…1) Coalition-based performance measure: The coalition-based performance (CPM) fuzzy measure learning algorithm was first given in [28] and later in [29]. Basically, the accuracy of every classifier and of any combination of classifiers is collected (estimated on the training set).…”
Section: B Unsupervised Learning of the Fuzzy Measure
confidence: 99%
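The coalition-based idea quoted above — scoring every subset (coalition) of classifiers by its training accuracy and using those scores as a fuzzy measure — can be sketched as follows. This is a minimal illustration, not the exact algorithm of [28] or [29]: the function and variable names are our own, and majority voting is assumed as each coalition's decision rule.

```python
from itertools import combinations

def coalition_fuzzy_measure(predictions, y_true):
    """Assign each coalition of classifiers a measure equal to the training
    accuracy of its majority vote, normalised so the grand coalition gets 1.

    predictions: dict mapping classifier name -> list of predicted labels
    y_true: list of true labels (same length as each prediction list)
    """
    names = sorted(predictions)
    mu = {frozenset(): 0.0}  # a fuzzy measure requires mu(empty set) = 0
    for r in range(1, len(names) + 1):
        for coalition in combinations(names, r):
            correct = 0
            for i, y in enumerate(y_true):
                votes = [predictions[c][i] for c in coalition]
                majority = max(set(votes), key=votes.count)
                correct += (majority == y)
            mu[frozenset(coalition)] = correct / len(y_true)
    # Normalise so the grand coalition has measure 1.
    top = mu[frozenset(names)]
    if top > 0:
        mu = {s: v / top for s, v in mu.items()}
    # Enforce monotonicity: a coalition may never score below any sub-coalition.
    for s in sorted(mu, key=len):
        for c in s:
            mu[s] = max(mu[s], mu[s - {c}])
    return mu
```

The monotonicity pass is needed because raw accuracies are not guaranteed to grow with coalition size (adding a weak classifier can lower majority-vote accuracy), while a fuzzy measure must be monotone with respect to set inclusion.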
“…2) Uriz FM: Uriz et al [7] proposed a method which learns the FM from the classification accuracies of the individual classifiers. This method shares key aspects of its motivation — using the performance of individual classifiers within an ensemble classification framework — with the approach put forward in this paper, but follows a different approach to generating the actual FM, as detailed in Section III.…”
Section: A Fuzzy Measures
confidence: 99%
“…In this paper, we develop one specific instance of an a priori FM: one which captures the quality of individual classifiers (and their combinations) in order to enable a fusion-based ensemble classifier. While working on this paper, the authors became aware of recent work by Uriz et al [7] which introduces an FM based on the same principle (i.e. an FM based on sub-classifier performance) in the context of imbalanced classification problems and traditional FIs.…”
Section: Introduction
confidence: 99%