2023
DOI: 10.1007/978-981-19-7663-6_61
Interpretable Stroke Risk Prediction Using Machine Learning Algorithms

Cited by 8 publications (4 citation statements)
References 17 publications
“…This is due to the limitation that LR does not perform well when the relationship between the input features and the target feature is highly non-linear, or when there are complex interactions between the features (Mordensky et al, 2023). As part of future work, the study will consider handling such combinations of complex features by applying further ML analysis, such as Support Vector Machines (Mayes et al, 2023), Naïve Bayes (Sawhney et al, 2023), k-Nearest Neighbor (Singh et al, 2022), and Random Forest (Zafeiropoulos et al, 2023), to the problem of the e-ticketing system.…”
Section: Discussion
confidence: 99%
“…The final performance is reported on the held-out test set in terms of Area Under the Receiver Operating Characteristic Curve (AUROC) [ 28 ], Area Under the Precision-Recall Curve (AUPRC) [ 29 ], and balanced accuracy with confidence intervals computed using bootstrapping with 1000 iterations. While there are a variety of metrics that can be reported for predictive models (e.g., precision, recall, specificity) [ 30 , 31 ], our choice of AUROC and AUPRC was driven by their ability to summarise the trade-off between commonly reported metrics at various thresholds. For instance, the AUROC metric quantifies the trade-off between specificity and sensitivity at various thresholds [ 32 ], while AUPRC summarizes the trade-off between precision and recall at various thresholds [ 29 ].…”
Section: Materials and Methods
confidence: 99%
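The evaluation procedure this excerpt describes (AUROC on a held-out test set, with confidence intervals from 1000 bootstrap resamples) can be illustrated with a small self-contained sketch. The rank formulation of AUROC and the percentile bootstrap below are standard techniques; the function names are illustrative, not taken from the paper:

```python
import random

def auroc(y_true, y_score):
    """Rank formulation of AUROC: the probability that a randomly chosen
    positive example is scored higher than a randomly chosen negative one
    (ties count as half a win)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(y_true, y_score, metric, n_boot=1000, seed=0):
    """95% percentile interval: resample the test set with replacement
    n_boot times and take the 2.5th/97.5th percentiles of the metric."""
    rng = random.Random(seed)
    n = len(y_true)
    vals = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        yt = [y_true[i] for i in idx]
        ys = [y_score[i] for i in idx]
        if len(set(yt)) < 2:  # skip degenerate resamples with one class
            continue
        vals.append(metric(yt, ys))
    vals.sort()
    return vals[int(0.025 * len(vals))], vals[int(0.975 * len(vals))]
```

AUPRC would slot into `bootstrap_ci` the same way as `auroc`; only the `metric` callable changes.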
“…After XGBoost had been tested on the complete dataset, the notion of gain in XGBoost was employed to determine each attribute's importance. XGBoost [55], which stands for eXtreme Gradient Boosting, is a popular machine learning algorithm known for its efficiency and effectiveness in handling large feature sets. It is commonly used for feature selection tasks because of its ability to assess the importance of features in a dataset.…”
Section: Classification of Relevant Features Using XGBoost
confidence: 99%
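For readers unfamiliar with XGBoost's notion of gain: the quantity attributed to each split (and summed per feature to rank importance) is the regularized loss reduction from splitting a node. A minimal sketch of that formula, with illustrative names (`G`/`H` are the per-side sums of first- and second-order gradients, `lam` and `gamma` the usual regularization terms):

```python
def split_gain(g_left, h_left, g_right, h_right, lam=1.0, gamma=0.0):
    """XGBoost split gain: 1/2 [G_L^2/(H_L+lam) + G_R^2/(H_R+lam)
    - (G_L+G_R)^2/(H_L+H_R+lam)] - gamma. Positive gain means the
    split reduces the regularized training loss."""
    def score(g, h):
        return g * g / (h + lam)
    return 0.5 * (score(g_left, h_left) + score(g_right, h_right)
                  - score(g_left + g_right, h_left + h_right)) - gamma
```

A feature's "gain" importance is then (roughly) the average of this quantity over every tree split that uses the feature.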
“…The recall is determined by dividing the number of TPs by the total number of actual positive examples in the dataset (TP + FN). A high recall score indicates that the model correctly identifies the majority of the positive examples in the dataset, i.e., all of the highly engaged students [55].…”
Section: Performance Metrics
confidence: 99%
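Assuming the standard definitions (recall divides TP by the actual positives, TP + FN, while precision divides by the predicted positives, TP + FP), both metrics are one-liners; the names below are illustrative:

```python
def recall(tp, fn):
    """Recall: fraction of actual positives the model recovered, TP / (TP + FN)."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def precision(tp, fp):
    """Precision: fraction of positive predictions that were correct, TP / (TP + FP)."""
    return tp / (tp + fp) if (tp + fp) else 0.0
```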