2020 Chinese Automation Congress (CAC)
DOI: 10.1109/cac51589.2020.9326882
Application Research of Ensemble Learning Frameworks

Cited by 15 publications (4 citation statements) · References 34 publications
“…Ensemble learning [61]: the aim is to yield an optimal model by learning from a combination of numerous kinds of models.…”
Section: Methods
confidence: 99%
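The statement above describes the core idea of ensemble learning: combining several different models into one stronger model. A minimal sketch of that idea, using hypothetical hand-written rules as stand-ins for trained learners (none of this reflects the cited paper's actual models):

```python
# Majority-vote ensemble: each base "model" votes, the ensemble
# returns the most common prediction. The rules below are purely
# illustrative stand-ins for trained classifiers.
from collections import Counter

def model_a(x):  # hypothetical rule on the first feature
    return 1 if x[0] > 0.5 else 0

def model_b(x):  # hypothetical rule on the second feature
    return 1 if x[1] > 0.5 else 0

def model_c(x):  # hypothetical rule on the feature sum
    return 1 if x[0] + x[1] > 1.0 else 0

def ensemble_predict(models, x):
    """Return the majority vote across all base models."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

models = [model_a, model_b, model_c]
print(ensemble_predict(models, (0.9, 0.2)))  # models disagree; majority decides
```

Even when individual rules disagree, the combined vote can be more reliable than any single rule, which is the intuition behind "learning from a combination of numerous kinds of models."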
“…Ensemble learning, especially the stacking algorithm framework, is better suited to multi-classification problems and achieves higher accuracy [30]. Ensemble learning can combine multiple weak learning models to build a strong learning model. To determine which algorithm is most suitable for our data classification, each must be verified one by one, and the traditional method is very time-consuming and laborious.…”
Section: Experiments and Algorithms
confidence: 99%
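The stacking framework referenced above can be sketched in miniature: weak base learners make predictions, and a meta-learner is then trained on those predictions to produce the final multi-class output. The models below are hypothetical toy rules, not the cited paper's setup; the meta-learner here is a simple learned lookup table standing in for a trained combiner:

```python
# Toy stacking sketch: two weak base rules, plus a meta-learner that
# maps each tuple of base predictions to the majority training label.
from collections import Counter, defaultdict

def base_small(x):   # weak rule: is the value small?
    return 0 if x < 3 else 1

def base_large(x):   # weak rule: is the value large?
    return 2 if x > 6 else 1

def meta_fit(base_models, xs, ys):
    """Learn a lookup table: tuple of base predictions -> majority label."""
    table = defaultdict(Counter)
    for x, y in zip(xs, ys):
        key = tuple(m(x) for m in base_models)
        table[key][y] += 1
    return {k: c.most_common(1)[0][0] for k, c in table.items()}

def stack_predict(base_models, table, x):
    """Final prediction: feed base outputs into the learned combiner."""
    return table[tuple(m(x) for m in base_models)]

# Three classes: 0 (small), 1 (medium), 2 (large) -- hypothetical data.
xs = [1, 2, 4, 5, 7, 8]
ys = [0, 0, 1, 1, 2, 2]
bases = [base_small, base_large]
table = meta_fit(bases, xs, ys)
print(stack_predict(bases, table, 2))  # → 0
```

Neither weak rule alone can separate three classes, but the meta-learner over their combined outputs can, which is why stacking suits multi-classification problems.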
“…VOCs classification is a multi-classification problem. Ensemble learning, especially the stacking algorithm framework, is better suited to multi-classification problems and achieves higher accuracy. Ensemble learning can combine multiple weak learning models to build a strong learning model.…”
Section: Experiments and Algorithms
confidence: 99%
“…RF constructs a large number of decision trees and returns the average of their predictions; GBDT is an iterative decision tree algorithm comprising multiple decision trees, with the conclusions of all trees accumulated into the final result; finally, XGBoost is an improvement on GBDT with highly optimised speed, supporting regularization to reduce the risk of over-fitting. Ensemble learning algorithms have become increasingly popular in recent years because they often outperform single ML algorithms [45,46]. However, results vary from application to application and even from dataset to dataset, so it is necessary to test and compare the prediction effectiveness of different ML methods across different subjects and datasets.…”
Section: The Optimal ML Algorithm
confidence: 99%
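The contrast drawn above between RF and GBDT can be shown with hypothetical toy numbers: bagging-style averaging of independent tree predictions (as in RF) versus boosting-style accumulation of residual corrections (as in GBDT). This is only an arithmetic sketch of the two combination rules, not an implementation of either library:

```python
# RF-style: each tree predicts independently; the forest averages them.
tree_predictions = [2.0, 3.0, 4.0]   # hypothetical per-tree outputs
rf_prediction = sum(tree_predictions) / len(tree_predictions)
print(rf_prediction)  # → 3.0

# GBDT-style: each stage fits the residual left by the previous stages,
# and the conclusions of all stages are accumulated into the final result.
target = 3.0                          # hypothetical true value
prediction, learning_rate = 0.0, 0.5
for _ in range(5):
    residual = target - prediction          # what is still unexplained
    prediction += learning_rate * residual  # next "tree" corrects part of it
print(prediction)  # converges toward the target as stages accumulate
```

The averaging rule reduces variance across independent trees, while the additive rule reduces bias step by step; XGBoost keeps the additive rule but adds regularization on each stage to limit over-fitting.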