2015
DOI: 10.1016/j.neucom.2014.10.007
Improving classification rate constrained to imbalanced data between overlapped and non-overlapped regions by hybrid algorithms

Cited by 48 publications (22 citation statements)
References 27 publications
“…Authors in [60] presented a Soft-Hybrid algorithm to improve classification performance. The hybrid algorithm is formed by different modified machine learning techniques whose results were combined at the end of an experimentation phase.…”
Section: Hybrid Algorithms Applied To Imbalanced Classification (mentioning, confidence: 99%)
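The excerpt above describes combining the outputs of several modified machine learning techniques. The quoted passage does not specify the Soft-Hybrid combination rule, so the sketch below assumes plain majority voting as an illustrative stand-in:

```python
from collections import Counter

def majority_vote(prediction_lists):
    """Combine per-classifier predictions by majority vote. Illustrative
    only: the paper's actual Soft-Hybrid combination rule is not given in
    the excerpt, so simple voting is assumed here."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*prediction_lists)]

# Usage: three techniques' predictions over four test instances.
preds = [
    [0, 1, 1, 0],  # technique A
    [0, 1, 0, 0],  # technique B
    [1, 1, 1, 0],  # technique C
]
combined = majority_vote(preds)  # [0, 1, 1, 0]
```

With an odd number of binary classifiers there are no ties, so the vote is always decisive.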
“…STEP 5: Conducting selection and crossover operations for the corresponding ψ mutation individuals according to Formulas (6) and (7), generating ϕ progeny individuals, calculating their fitness function values and recording the iterative points that meet the accuracy requirement of the problem. STEP 6: Combining the ψ mutation individuals and the ϕ progeny individuals generated by the selection and crossover operations into a new population of ψ + ϕ individuals.…”
Section: Implementation Of Differential Evolution Integrated Algorithm (mentioning, confidence: 99%)
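The steps quoted above follow the standard differential evolution loop of mutation, crossover, and selection. The following is a minimal sketch of one generation of a basic DE/rand/1/bin scheme; the cited paper's Formulas (6) and (7) and its ψ + ϕ population merge are not reproduced here:

```python
import random

def de_generation(population, fitness, f=0.5, cr=0.9):
    """One generation of basic DE/rand/1/bin: mutation, binomial
    crossover, then greedy selection. A sketch, not the cited algorithm."""
    dim = len(population[0])
    next_pop = []
    for i, target in enumerate(population):
        # Mutation: combine three distinct individuals other than the target.
        a, b, c = random.sample([p for j, p in enumerate(population) if j != i], 3)
        mutant = [a[d] + f * (b[d] - c[d]) for d in range(dim)]
        # Binomial crossover: each gene comes from the mutant with prob. cr;
        # j_rand guarantees at least one mutant gene survives.
        j_rand = random.randrange(dim)
        trial = [mutant[d] if (d == j_rand or random.random() < cr) else target[d]
                 for d in range(dim)]
        # Selection: keep whichever of target/trial has the better fitness.
        next_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return next_pop

# Usage: minimize the 3-dimensional sphere function.
random.seed(0)
sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
for _ in range(100):
    pop = de_generation(pop, sphere)
best = min(pop, key=sphere)
```

Because selection is greedy, the best fitness in the population is non-increasing across generations.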
“…For the processing of imbalanced data there are some mature algorithms, such as sampling methods [6,7], cost-sensitive algorithms [8,9] and one-class classification [8,9]. However, the processing of imbalanced data is mainly confined to the level of computer algorithm analysis and big data analysis and lacks efficient integration, which can result in privacy or data breaches and significant losses to data owners.…”
Section: Introduction (mentioning, confidence: 99%)
“…The main contributions of this paper with respect to previous studies are as follows: • We propose to enhance the OVO scheme for multi-class imbalanced data by using ensemble techniques for each sub-problem. • We show how to extend the area of applicability of binary imbalanced ensemble classifiers to handling far more challenging multi-class imbalanced scenarios. The rest of this paper is organized as follows.…”
(mentioning, confidence: 98%)
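The OVO scheme mentioned in this excerpt decomposes a multi-class problem into one binary sub-problem per pair of classes. A minimal sketch of that decomposition follows; the cited work additionally trains an imbalance-aware ensemble on each sub-problem, which is omitted here:

```python
from itertools import combinations

def ovo_decompose(X, y):
    """One-vs-One (OVO) decomposition: build one binary sub-problem per
    pair of classes. Sketch only; per-pair ensemble training is omitted."""
    classes = sorted(set(y))
    subproblems = {}
    for a, b in combinations(classes, 2):
        # Keep only the instances belonging to this pair of classes.
        idx = [i for i, label in enumerate(y) if label in (a, b)]
        subproblems[(a, b)] = ([X[i] for i in idx], [y[i] for i in idx])
    return subproblems

# Usage: three classes yield 3 pairwise sub-problems.
X = [[0.0], [0.1], [1.0], [1.1], [2.0], [2.1]]
y = [0, 0, 1, 1, 2, 2]
subs = ovo_decompose(X, y)
```

A k-class problem produces k(k-1)/2 sub-problems, each typically less imbalanced than the original one-vs-all view.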
“…The learning process turns to minimizing the cost errors instead of maximizing the accuracy rate [63]. • Ensemble level: these solutions combine efficient ensemble learning solutions [62] with one of the three previously mentioned strategies in order to create balanced training sets for the base classifiers and at the same time introduce diversity into the pool of base learners. Special attention should be paid to the recent combination of intelligent and directed data-level approaches with the Bagging solution [5] or randomized oversampling [15], the hybrid combination of algorithm-level methods [55], and cost-sensitive pruning for decision tree ensembles [36]. Due to the advantage of data-level solutions (as pointed out by a recent tutorial on data preprocessing [29]), we focus on such methods in this paper. RUS [51] is the basic under-sampling method, which randomly removes majority class instances to balance the class distribution. This approach is efficient for dealing with class imbalance problems, since most of the majority class instances are redundant.…”
(mentioning, confidence: 99%)
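The RUS method described above can be sketched in a few lines. This is a minimal binary-class illustration of the idea in the excerpt, not the cited implementation:

```python
import random

def random_under_sample(X, y, seed=0):
    """Random under-sampling (RUS): randomly discard majority-class
    instances until the two classes are balanced. Binary-class sketch."""
    rng = random.Random(seed)
    labels = sorted(set(y), key=y.count)          # least frequent first
    minority, majority = labels[0], labels[-1]
    min_idx = [i for i, lbl in enumerate(y) if lbl == minority]
    maj_idx = [i for i, lbl in enumerate(y) if lbl == majority]
    # Keep all minority instances and an equal-sized random majority subset.
    kept = sorted(min_idx + rng.sample(maj_idx, len(min_idx)))
    return [X[i] for i in kept], [y[i] for i in kept]

# Usage: 8 majority (label 0) vs 2 minority (label 1) -> balanced 2 vs 2.
X = [[i] for i in range(10)]
y = [0] * 8 + [1] * 2
Xb, yb = random_under_sample(X, y)
```

The trade-off the excerpt alludes to: discarding majority instances is cheap and often harmless when they are redundant, but it can lose informative examples near the class boundary.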