2019
DOI: 10.3390/e21090866

Does Classifier Fusion Improve the Overall Performance? Numerical Analysis of Data and Fusion Method Characteristics Influencing Classifier Fusion Performance

Abstract: The reliability of complex or safety-critical systems is of increasing importance in several application fields. In many cases, decisions evaluating situations or conditions are made. To ensure the high accuracy of these decisions, the assignments from different classifiers can be fused into one final decision to improve decision performance in terms of given measures such as accuracy or false alarm rate. Recent research results show that fusion methods do not always outperform individual classifiers trained and o…
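As a concrete illustration of the decision-level fusion the abstract describes, the sketch below fuses three base classifiers by majority vote and compares the fused accuracy against each individual classifier. The synthetic dataset and the choice of base learners are illustrative assumptions, not the experimental setup of the paper.

```python
# Minimal sketch of decision-level classifier fusion by majority vote.
# Dataset and base classifiers are illustrative, not the paper's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
        ("tree", DecisionTreeClassifier(random_state=0))]

# Fuse the individual class assignments into one final decision (hard voting).
fusion = VotingClassifier(estimators=base, voting="hard").fit(X_tr, y_tr)

for name, clf in base:
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))
print("fused", fusion.score(X_te, y_te))  # fusion does not always win
```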

Cited by 16 publications (29 citation statements). References 29 publications.
“…In addition, differences in BPNN structure can also affect the BPNN output, making it difficult and time-consuming to determine a single BPNN architecture suited to a given problem (Ruta and Gabrys, 2000). To overcome this problem, neural network ensemble (NNE) algorithms were developed, combining several BPNNs with different architectures but the same input (Rothe et al., 2019). Several studies have also shown that combining several neural networks demonstrably improves classification or estimation accuracy compared with using a single neural network (Zhou et al., 2002; Liang et al., 2014; Sulistyo et al., 2018).…”
Section: Abstract (unclassified)
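The statement above describes combining several backpropagation networks with different architectures but a shared input. Below is a minimal sketch of that idea, assuming scikit-learn's MLPClassifier as the BPNN stand-in and averaging the networks' posterior outputs; the specific architectures are hypothetical.

```python
# Minimal sketch of a neural network ensemble (NNE): several backpropagation
# networks with different hidden-layer architectures share the same input,
# and their class-probability outputs are averaged. Architectures are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

architectures = [(8,), (16,), (8, 8)]  # differing BPNN structures
nets = [MLPClassifier(hidden_layer_sizes=h, max_iter=2000,
                      random_state=1).fit(X_tr, y_tr)
        for h in architectures]

# Average the posterior estimates and take the argmax as the ensemble decision.
proba = np.mean([n.predict_proba(X_te) for n in nets], axis=0)
y_hat = proba.argmax(axis=1)
print("ensemble accuracy:", (y_hat == y_te).mean())
```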
“…But not necessarily for all applications and datasets, due to the presence of different levels of noise, outliers, nonlinearities, and data redundancy.10 To deal with this problem, researchers suggested employing an ensemble of classification models to compensate for the weaknesses and boost the strengths of each classifier.11,12 This, however, raises two important questions: (1) how to select classifiers from the pool of classifiers so as to preserve information and enforce diversity, and (2) how to combine their outputs to make a final decision; a sketch of common combination rules follows this statement.…”
Section: Introduction (mentioning)
confidence: 99%
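Question (2) above, combining classifier outputs into a final decision, is often answered with simple algebraic rules over the classifiers' posterior probability matrices. Here is a minimal sketch of three standard rules (mean, maximum, product); the toy posteriors are made up for illustration.

```python
# Minimal sketch of simple rules for combining classifier outputs
# (posterior matrices of shape [n_samples, n_classes]) into a final decision.
import numpy as np

def combine(posteriors, rule="mean"):
    """posteriors: list of (n_samples, n_classes) probability arrays."""
    stack = np.stack(posteriors)   # (n_classifiers, n_samples, n_classes)
    if rule == "mean":             # average rule
        fused = stack.mean(axis=0)
    elif rule == "max":            # maximum rule
        fused = stack.max(axis=0)
    elif rule == "product":        # product rule
        fused = stack.prod(axis=0)
    else:
        raise ValueError(rule)
    return fused.argmax(axis=1)    # final class decision

# Toy example: two classifiers disagree on the second sample.
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.8, 0.2], [0.7, 0.3]])
print(combine([p1, p2], "mean"))     # -> [0 0]
print(combine([p1, p2], "product"))  # -> [0 0]
```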
“…19–21 The last one is of interest in this article due to its proven advantages over other combination methods.10,13,22 For instance, Rothe et al.10 conducted an extensive investigation to analyze the performance of all the fusion methods except for the Dempster–Shafer method. It was concluded that in most cases, although the fusion methods cannot outperform the best individual classifier, they decrease the sensitivity to outliers.…”
Section: Introduction (mentioning)
confidence: 99%
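Since the Dempster–Shafer method is the combination rule this citing article singles out, a minimal sketch of Dempster's rule for two sources over singleton hypotheses may help; the class labels and mass values are hypothetical.

```python
# Minimal sketch of Dempster's rule of combination for two classifiers'
# basic probability assignments over singleton hypotheses (class labels).
# m1, m2 map each class to its mass; masses over the same frame sum to 1.
def dempster_combine(m1, m2):
    classes = m1.keys() & m2.keys()
    # Conflict K: total mass assigned to incompatible (different) singletons.
    K = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    if K >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Agreeing mass, renormalized by (1 - K).
    return {c: m1[c] * m2[c] / (1.0 - K) for c in classes}

# Two classifiers' soft outputs interpreted as masses on {fault, ok}.
m1 = {"fault": 0.7, "ok": 0.3}
m2 = {"fault": 0.6, "ok": 0.4}
print(dempster_combine(m1, m2))  # fault mass rises above either source
```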
“…However, consistent with the no free lunch (NFL) theorem [26], no single classifier seems to be capable of ensuring optimal results for all datasets. Data characteristics are known to have an intrinsic relationship with classifier performance [2,8,16,18]. This study focuses on two kinds of data characteristics: class imbalance and training data size.…”
Section: Introduction (mentioning)
confidence: 99%
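For the two data characteristics named in that statement, a minimal sketch of how they are typically quantified; the label vector is hypothetical.

```python
# Minimal sketch: quantifying class imbalance (majority/minority ratio)
# and training data size for a hypothetical label vector.
import numpy as np

y_train = np.array([0] * 90 + [1] * 10)       # hypothetical labels
_, counts = np.unique(y_train, return_counts=True)
imbalance_ratio = counts.max() / counts.min()
print("training size:", y_train.size)         # 100
print("imbalance ratio:", imbalance_ratio)    # 9.0
```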