A cross-entropy based stacking method in ensemble learning
2020 | DOI: 10.3233/jifs-200600

Abstract: Stacking is one of the major types of ensemble learning techniques, in which a set of base classifiers contributes its outputs to a meta-level classifier, and the meta-level classifier combines them so as to produce more accurate classifications. In this paper, we propose a new stacking algorithm that defines the cross-entropy as the loss function for the classification problem. The training process is conducted by using a neural network with the stochastic gradient descent technique. One major characteristic…
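
To make the recipe concrete, here is a minimal sketch of cross-entropy based stacking: heterogeneous base classifiers produce out-of-fold class probabilities, and a small neural network trained by stochastic gradient descent on the cross-entropy (log) loss serves as the meta-level classifier. This is an illustration assuming scikit-learn (whose MLPClassifier minimizes cross-entropy and offers solver="sgd"), not the authors' implementation; the synthetic dataset, base models, and hyperparameters are placeholders.

```python
# Sketch of cross-entropy based stacking (assumptions noted in the text):
# base classifiers emit out-of-fold class probabilities, and a neural-network
# meta-classifier is trained on them with SGD under the cross-entropy loss.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Placeholder data; any multi-class dataset works here.
X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base_models = [RandomForestClassifier(n_estimators=100, random_state=0),
               LogisticRegression(max_iter=1000),
               GaussianNB()]

# Meta-features: out-of-fold probabilities, so the meta-classifier never
# sees predictions made on a base model's own training folds.
Z_tr = np.hstack([cross_val_predict(m, X_tr, y_tr, cv=5,
                                    method="predict_proba")
                  for m in base_models])

# MLPClassifier minimizes the cross-entropy (log) loss for classification;
# solver="sgd" selects stochastic gradient descent.
meta = MLPClassifier(hidden_layer_sizes=(32,), solver="sgd",
                     learning_rate_init=0.01, max_iter=500, random_state=0)
meta.fit(Z_tr, y_tr)

# Refit each base model on the full training split before scoring the test set.
Z_te = np.hstack([m.fit(X_tr, y_tr).predict_proba(X_te) for m in base_models])
print("stacked test accuracy:", accuracy_score(y_te, meta.predict(Z_te)))
```

Using out-of-fold probabilities (cross_val_predict) rather than in-sample predictions is the usual guard against the meta-classifier overfitting to leakage from the base models.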

Cited by 12 publications (8 citation statements). References 31 publications.
“…It is observable that in almost all the cases an ensemble is more effective than an individual classifier-based approach. 27 In the following we review several ensemble approaches in the domain of HAR. Peng et al 28 proposed a classifier-level fusion method to recognize complex activities by using acceleration, vital sign, and location data.…”
Section: Related Work (mentioning)
confidence: 99%
“…Ensemble methods, such as Stacking, are strategies that combine multiple models to improve the prediction generalization in a given task. Such methods have shown promise in ATC [Džeroski and Ženko 2004, Ding and Wu 2020], enjoying high effectiveness and computational costs that depend on the selected learning methods of the ensemble. Among the possible ensemble strategies, Stacking has the characteristic of using a meta-layer capable of combining the prediction outputs of different heterogeneous individual models.…”
Section: Introduction (mentioning)
confidence: 99%
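
The excerpt above pinpoints stacking's defining feature: a meta-layer that combines the prediction outputs of heterogeneous individual models. As a hedged illustration (not code from any of the cited works), scikit-learn's built-in StackingClassifier expresses that wiring directly; the estimator choices below are placeholders.

```python
# Hedged illustration (not code from the cited works): scikit-learn's
# StackingClassifier wires heterogeneous base models into a meta-layer.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, random_state=0)  # toy placeholder data

stack = StackingClassifier(
    # Heterogeneous base models; LinearSVC has no predict_proba, so the
    # stacker falls back to its decision_function outputs automatically.
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svm", LinearSVC(max_iter=5000, random_state=0))],
    final_estimator=LogisticRegression(),  # the meta-layer
    cv=5,  # meta-layer trains on out-of-fold base predictions
)
stack.fit(X, y)
print("training accuracy (API demo only):", stack.score(X, y))
```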
“…However, the benefits of ensemble techniques against a robust classifier are not always clear [Yan-Shi Dong and Ke-Song Han 2004], in part, due to the excellent generalization power of the best classifiers. In fact, previous ensemble works primarily focus on improving the overall classification effectiveness using the results of traditional classification algorithms [Campos et al 2017, Ding and Wu 2020], paying little or no attention to practical issues such as the execution time or which combination of efficient base algorithms can bring effective results at a lower cost.…”
Section: Introduction (mentioning)
confidence: 99%
“…However, the benefits of ensemble techniques against a strong classifier are not always clear (Yan-Shi Dong and Ke-Song Han, 2004), in part, due to the excellent generalization power of the best classifiers. In fact, previous ensemble works mostly focus on improving the overall classification effectiveness using the results of traditional classification algorithms (Campos et al, 2017; Ding and Wu, 2020), paying little or no attention to practical issues such as the execution time or which combination of efficient base algorithms can bring effective results at a lower cost. Accordingly, our first contribution in this paper is a thorough study of the cost-effectiveness tradeoff of stacking techniques for text classification tasks.…”
Section: Introduction (mentioning)
confidence: 99%
“…Ensemble approaches, such as stacking, which combine the outputs of several base classification models to form an integrated output, have also been shown to excel in ATC (Džeroski and Ženko, 2004; Ding and Wu, 2020), enjoying high effectiveness and computational costs that depend on the selected learning methods of the ensemble. They are motivated by the fact that distinct learning models or text representations may complement each other, uncovering specific structures that underlie the input/output relationship of the data.…”
Section: Introduction (mentioning)
confidence: 99%