2022
DOI: 10.1038/s41598-022-21389-9
A stacking ensemble machine learning model to predict alpha-1 antitrypsin deficiency-associated liver disease clinical outcomes based on UK Biobank data

Abstract: Alpha-1 antitrypsin deficiency-associated liver disease (AATD-LD) is a rare genetic disorder that is not well recognized. Predicting the clinical outcomes of AATD-LD and defining patients more likely to progress to advanced liver disease are crucial for better understanding AATD-LD progression and promoting timely medical intervention. We aimed to develop a tailored machine learning (ML) model to predict the disease progression of AATD-LD. This analysis was conducted through a stacking ensemble learning model by c…
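For orientation, below is a minimal sketch of what a stacking ensemble for a binary clinical-outcome prediction task could look like. It assumes scikit-learn and uses synthetic tabular data as a stand-in for UK Biobank predictors; the base learners, features, and outcome definitions are illustrative and are not taken from the paper.

```python
# Minimal sketch of a stacking ensemble for a binary clinical outcome.
# Assumes scikit-learn; synthetic data stands in for real clinical/lab predictors.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical tabular data: rows = patients, columns = predictors; the outcome
# is imbalanced to mimic a rare progression event.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=42)

# Heterogeneous base learners; their out-of-fold predictions feed a meta-learner.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
    ("gb", GradientBoostingClassifier(random_state=42)),
]
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
    stack_method="predict_proba",
    cv=5,
)

# Cross-validated AUROC as a quick check of discriminative performance.
auc = cross_val_score(stack, X, y, cv=5, scoring="roc_auc")
print(f"Stacking AUROC: {auc.mean():.3f} ± {auc.std():.3f}")
```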

Cited by 6 publications (2 citation statements)
References 38 publications

Citation statements (ordered by relevance):
“…Techniques such as bagging, boosting, and stacking are commonly employed in ensemble learning to enhance model performance by leveraging the strengths of individual models and mitigating their weaknesses [ 62 ]. Meta-ensemble methods, which involve combining multiple ensemble techniques, offer a comprehensive approach to adaptively adjust decision fusion strategies based on the characteristics of each base model, leading to superior predictive capabilities in tasks such as sentiment analysis and disease detection [ 36 , 63 ]. Additionally, the use of feature importance permutation methods and hyperparameter optimization in meta-learning on ensemble models allows for the evaluation of predictor contributions and the fine-tuning of model parameters to achieve optimal performance [ 43 , 63 ].…”
Section: Related Work
confidence: 99%
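The statement above mentions permutation-based feature importance and hyperparameter optimization applied to ensemble models. The following is a hedged sketch of both steps, assuming scikit-learn; the grid, model, and features are illustrative rather than those used in the cited studies.

```python
# Sketch: hyperparameter tuning of a boosted model, then permutation importance
# on held-out data to estimate each predictor's contribution.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, n_features=12, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Small, illustrative grid; real studies would search a wider space.
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3]},
    scoring="roc_auc",
    cv=5,
)
search.fit(X_train, y_train)

# Permutation importance: the drop in AUROC when a feature is shuffled,
# evaluated on held-out data to avoid optimistic training-set estimates.
result = permutation_importance(search.best_estimator_, X_test, y_test,
                                scoring="roc_auc", n_repeats=20, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature_{idx}: {result.importances_mean[idx]:.4f}")
```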
“…Meta-ensemble methods, which involve combining multiple ensemble techniques, offer a comprehensive approach to adaptively adjust decision fusion strategies based on the characteristics of each base model, leading to superior predictive capabilities in tasks such as sentiment analysis and disease detection [ 36 , 63 ]. Additionally, the use of feature importance permutation methods and hyperparameter optimization in meta-learning on ensemble models allows for the evaluation of predictor contributions and the fine-tuning of model parameters to achieve optimal performance [ 43 , 63 ]. Furthermore, meta-learning techniques in ensemble models facilitate the development of sophisticated approaches like stacking, where predictions from multiple base models are combined into a meta-learner model to improve overall accuracy and generalizability [ 57 ].…”
Section: Related Work
confidence: 99%
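The second statement describes the core stacking mechanic: predictions from multiple base models are combined as inputs to a meta-learner. The sketch below makes that explicit with out-of-fold predicted probabilities, again assuming scikit-learn and synthetic data; it is simplified for illustration, and a production pipeline would nest this inside an outer validation loop.

```python
# Sketch of manual stacking: out-of-fold probabilities from each base model
# become the feature matrix of a meta-learner.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=1)

base_models = [RandomForestClassifier(n_estimators=200, random_state=1),
               GradientBoostingClassifier(random_state=1)]

# Out-of-fold probabilities keep the meta-learner from seeing predictions made
# on rows the base models were trained on (a basic leakage control).
meta_features = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
    for m in base_models
])

meta_learner = LogisticRegression(max_iter=1000)
auc = cross_val_score(meta_learner, meta_features, y, cv=5, scoring="roc_auc")
print(f"Meta-learner AUROC on stacked predictions: {auc.mean():.3f}")
```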