2017
DOI: 10.1007/s11042-017-5480-5

Multi-classifier ensemble based on dynamic weights

Abstract: In this study, a novel multi-classifier ensemble method based on dynamic weights is proposed to reduce the interference of unreliable decision information and to improve the accuracy of fusion decisions. The algorithm defines a decision credibility that describes the real-time importance of each classifier for the current target, combines this credibility with the reliability computed for the classifier on the training data set, and dynamically assigns a fusion weight to the classifier. Compared with other methods, the …
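A minimal sketch of this style of dynamic-weight fusion. The abstract does not fully specify the credibility measure, so here a classifier's maximum predicted probability on the current sample is used as an illustrative stand-in for credibility, and validation accuracy as the static reliability; the weight names and formula are assumptions, not the paper's exact method:

```python
import numpy as np

def fuse(probas, reliabilities):
    """Fuse per-classifier probability vectors with dynamic weights.

    probas: list of (n_classes,) probability vectors, one per classifier,
            for a single test sample.
    reliabilities: static per-classifier reliability, e.g. accuracy
            measured on the training or validation set.
    The fusion weight of each classifier is its static reliability times
    its per-sample credibility (here: its confidence, the max probability
    it assigns on this sample).
    """
    probas = np.asarray(probas, dtype=float)
    credibility = probas.max(axis=1)             # per-sample confidence
    weights = np.asarray(reliabilities, dtype=float) * credibility
    weights = weights / weights.sum()            # normalise fusion weights
    fused = weights @ probas                     # weighted soft vote
    return fused

# Example: three classifiers vote on a 3-class sample.
p = [[0.7, 0.2, 0.1],   # confident and reliable
     [0.4, 0.4, 0.2],   # uncertain on this sample -> down-weighted
     [0.1, 0.8, 0.1]]   # confident but less reliable overall
fused = fuse(p, reliabilities=[0.9, 0.8, 0.6])  # fused.argmax() == 1
```

The uncertain classifier's influence shrinks on this sample even though its static reliability is high, which is the point of making the weights dynamic rather than fixed.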

Cited by 11 publications (7 citation statements)
References 48 publications
“…Lin et al [10] propose focal loss, reshaping the cross-entropy loss to reduce the impact of easily classified samples and majority classes on the loss during training. Rather than training a single model, model-level methods normally prepare a pool of models and fuse them to make the final decisions [14,15,16,17]. The contributions of different models to a final decision are normally weighted by each model's reliability, by the model's confidence on each testing sample, or by both.…”
Section: Class Imbalance
confidence: 99%
“…Finally, they fuse the decisions of the filtered classifiers into the final decision. Ren et al [14] determine classifier reliability using fuzzy set theory and combine it with the decision credibility of each testing sample to make the decisions.…”
Section: Class Imbalance
confidence: 99%