Sample and feature selecting based ensemble learning for imbalanced problems
2021 · DOI: 10.1016/j.asoc.2021.107884

Cited by 18 publications (7 citation statements) · References 40 publications

“…In our experiment, the number of model pools (T) needs to be set manually. Experiments in Reference 75 show that stable performance is obtained when the number of trees in bagging-series algorithms is 30. Therefore, in our experiment, we set T = 30.…”
Section: Experiments, Results and Discussion
confidence: 99%
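The T = 30 setting quoted above maps directly onto a standard bagging ensemble. Below is a minimal sketch of a pool of 30 bagged trees, assuming scikit-learn and a synthetic 9:1 imbalanced dataset; both the data and the base learner are illustration choices, not the cited authors' setup.

```python
# Minimal sketch: a bagging pool of T = 30 members, the size at which
# bagging-style ensembles were reported to stabilise. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

# Synthetic 9:1 imbalanced binary dataset, purely for illustration.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

T = 30  # pool size following the stability finding quoted above
pool = BaggingClassifier(n_estimators=T, random_state=0)  # default base: decision tree
pool.fit(X_tr, y_tr)
print("test accuracy:", pool.score(X_te, y_te))
```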
“…As single learning algorithms are usually limited and make errors, ensemble learning can help improve detection by exploiting the strengths of different learning algorithms [102]. Further, ensemble learning can easily be coupled with sampling methods, improve detection with a new aggregating strategy, or be integrated with another ensemble learner in a hybrid learning strategy [103].…”
Section: Discussion
confidence: 99%
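The coupling of sampling methods with an ensemble that this statement describes can be shown in a short sketch. The following is one common instantiation, not the specific method of [103]: each member is trained on the minority class plus a balanced random undersample of the majority class, and predictions are aggregated by majority vote. It assumes NumPy, scikit-learn, and binary labels with 0 = majority, 1 = minority.

```python
# A minimal sketch of balanced bagging via undersampling (not the exact
# method of [103]). Assumes binary labels: 0 = majority, 1 = minority.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def balanced_bagging_fit(X, y, n_members=10, seed=0):
    """Train each member on the minority class plus an equally sized
    random undersample of the majority class."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    members = []
    for _ in range(n_members):
        sampled = rng.choice(majority, size=minority.size, replace=False)
        idx = np.concatenate([minority, sampled])
        members.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return members

def balanced_bagging_predict(members, X):
    """Aggregate member predictions by majority vote."""
    votes = np.mean([m.predict(X) for m in members], axis=0)
    return (votes >= 0.5).astype(int)
```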
“…The literature suggests that choosing the proper evaluation metric, selecting appropriate re‐sampling strategies, using SMOTE and its variants properly, and designing balanced bagging classifiers have proven effective in this problem domain (Daud et al., 2023; Elreedy & Atiya, 2019; Kim et al., 2019; Wei et al., 2022). Widely used approaches in this area of research also include k‐fold cross‐validation, ensemble strategies, re‐sampling with different ratios, clustering the abundant class into n groups (n being the number of cases in the rare class) using medoids as cluster centres and then training the classifier with the rare class and the medoids only, and designing models that balance the classes using machine learning (ML), soft computing (SC) and deep learning (DL) techniques (Gao et al., 2020; Hongle et al., 2021; Kaisar & Chowdhury, 2022; Saini & Susan, 2022; Sidumo et al., 2022; Wang et al., 2021). The use of SC‐ and ML‐based strategies for generating synthetic samples to improve classification accuracy has motivated us to develop a hybridized approach to generating synthetic samples.…”
Section: Overview
confidence: 99%
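The medoid-based undersampling idea in this passage can be sketched as follows, under stated assumptions: scikit-learn's KMeans stands in for a true k-medoids solver (each centroid is snapped to its nearest real majority-class sample), and the abundant class is clustered into n groups with n equal to the rare-class size, so the classifier can then be trained on the rare class plus these representatives only.

```python
# A minimal sketch of medoid-based undersampling. KMeans approximates
# k-medoids: each centroid is snapped to the nearest real majority-class
# sample. An exact k-medoids solver (e.g. sklearn_extra.cluster.KMedoids)
# would be a drop-in replacement.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

def medoid_undersample(X, y, majority_label=0):
    maj = X[y == majority_label]
    rare_mask = y != majority_label
    n = int(rare_mask.sum())  # one majority representative per rare-class case
    km = KMeans(n_clusters=n, n_init=10, random_state=0).fit(maj)
    # Approximate medoid: the majority sample closest to each centroid.
    medoid_idx = pairwise_distances_argmin(km.cluster_centers_, maj)
    X_bal = np.vstack([maj[medoid_idx], X[rare_mask]])
    y_bal = np.concatenate([np.full(n, majority_label), y[rare_mask]])
    return X_bal, y_bal
```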