2020
DOI: 10.1016/j.eswa.2019.113160
A-Stacking and A-Bagging: Adaptive versions of ensemble learning algorithms for spoof fingerprint detection

Cited by 88 publications
(34 citation statements)
References 40 publications
“…A single classifier may be limited in what it can learn, whereas ensemble learning improves on an individual classifier by combining several of them [ 28 ]. Ensemble learning is one of the most effective strategies for improving the generalization performance of a prediction model; its core is the training strategy for the base classifiers, such as bagging, boosting, and stacking [ 29 ]. Bagging and boosting build the base learners from resamples of a single dataset, which affects diversity, whereas stacking uses multiple classifiers and feeds the predictions of one level as input variables to the next [ 30 ].…”
Section: Methods
confidence: 99%
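The bagging half of the distinction drawn above can be sketched in a few lines: base learners are built from bootstrap resamples of a single dataset, and their outputs are aggregated by majority voting. This is a minimal illustration, not the paper's method; the toy "majority-label" base learner and the live/spoof labels are assumptions for demonstration only.

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    """Draw a bootstrap sample (with replacement) of the same size as data."""
    return [rng.choice(data) for _ in data]

def majority_vote(predictions):
    """Aggregate base-learner outputs by plain majority voting."""
    return Counter(predictions).most_common(1)[0][0]

def train_majority_learner(sample):
    """Toy base learner: always predicts the majority label of its training sample."""
    label = Counter(y for _, y in sample).most_common(1)[0][0]
    return lambda x: label

rng = random.Random(0)
data = [(0, "live"), (1, "live"), (2, "spoof"), (3, "live"), (4, "spoof")]
# Each learner sees a different bootstrap resample of the same dataset,
# which is the source of diversity the quoted passage refers to.
learners = [train_majority_learner(bootstrap_sample(data, rng)) for _ in range(5)]
print(majority_vote([f(0) for f in learners]))
```

Boosting differs in that the resampling (or reweighting) is adaptive across rounds rather than independent per learner, but the aggregate-many-base-learners structure is the same.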
“…Individual classifier may not be able to learn more information, while ensemble learning can improve the performance of a single classifier by combining them [ 28 ]. Ensemble learning is one of the most useful strategies to improve generalization performance of prediction model, with a core of training strategy for base classifiers, such as bagging, boosting, and stacking [ 29 ]. Bagging and boosting build the base learners from a single dataset, having an impact on diversity, while stacking learning method uses the multiple classifiers by taking the prediction of the previous level as input variables for the next level [ 30 ].…”
Section: Methodsmentioning
confidence: 99%
“…Stacking is a general ensemble algorithm that combines multiple lower-level learners into a higher-level learner to achieve better performance (Agarwal and Chowdary, 2020). In general, K-fold cross-validation is used to train and test these models, and the prediction results are then output.…”
Section: Stacking Model
confidence: 99%
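The K-fold procedure mentioned in this statement is what keeps the level-2 (meta) learner's inputs honest: each base learner predicts only on the fold it was not trained on, so the meta-features are out-of-fold predictions. A minimal sketch follows; the `mean_learner` toy model and the example data are illustrative assumptions, not part of the cited work.

```python
def kfold_indices(n, k):
    """Split range(n) into k contiguous folds of near-equal size."""
    folds = []
    size, rem = divmod(n, k)
    start = 0
    for i in range(k):
        end = start + size + (1 if i < rem else 0)
        folds.append(list(range(start, end)))
        start = end
    return folds

def mean_learner(train_X, train_y):
    """Toy base learner: predicts 1 if x exceeds the training-set mean of X."""
    mu = sum(train_X) / len(train_X)
    return lambda x: 1 if x > mu else 0

def stack(X, y, base_learners, k=3):
    """Build level-1 meta-features from out-of-fold base-learner predictions."""
    meta_X = [[0] * len(base_learners) for _ in X]
    for fold in kfold_indices(len(X), k):
        train_idx = [i for i in range(len(X)) if i not in fold]
        tX = [X[i] for i in train_idx]
        ty = [y[i] for i in train_idx]
        for j, make in enumerate(base_learners):
            model = make(tX, ty)          # trained without the held-out fold
            for i in fold:
                meta_X[i][j] = model(X[i])  # out-of-fold prediction
    return meta_X  # in a full stacking pipeline these feed a level-2 learner

X = [1, 2, 3, 10, 11, 12]
y = [0, 0, 0, 1, 1, 1]
print(stack(X, y, [mean_learner], k=3))  # → [[0], [0], [0], [1], [1], [1]]
```

A level-2 learner would then be fit on `meta_X` against `y`; that final step is omitted here to keep the sketch focused on the cross-validated feature construction.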
“…In [9], A-Stacking and A-Bagging, adaptive versions of the ensemble learning approaches, are proposed. The A-Bagging method applies the same base learner to numerous subsets of the data, and the predictions are aggregated by weighted majority voting.…”
Section: Related Work
confidence: 99%
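The weighted majority voting step described here is straightforward to sketch: each base learner's vote is scaled by a weight before the labels are tallied. How A-Bagging actually derives its weights is specific to the cited paper; the weights below (e.g. validation accuracies) are a hypothetical stand-in.

```python
from collections import defaultdict

def weighted_majority_vote(predictions, weights):
    """Aggregate base-learner predictions, scaling each vote by its learner's weight.

    predictions: list of labels, one per base learner.
    weights: matching list of per-learner weights (assumed here to be
             something like validation accuracies; illustrative only).
    """
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w
    return max(scores, key=scores.get)

# Two weaker learners vote "live", but the stronger learner's "spoof" vote wins.
print(weighted_majority_vote(["spoof", "live", "live"], [0.9, 0.4, 0.4]))  # prints "spoof"
```

With uniform weights this reduces to the plain majority voting used in standard bagging, which is why weighting is the natural place to make the aggregation adaptive.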