2022
DOI: 10.1001/jamanetworkopen.2022.12930

Development and Validation of an Explainable Machine Learning Model for Major Complications After Cytoreductive Surgery

Abstract: Key Points. Question: Can machine learning provide superior risk prediction compared with current statistical methods for patients undergoing cytoreductive surgery? Findings: In this prognostic study, an optimized machine learning model demonstrated superior capability to predict individual-level risk of major complications after cytoreductive surgery compared with traditional methods. Cohort-level risk prediction allowed unbiased categorization of patients into…

Cited by 18 publications (14 citation statements). References: 23 publications.
“…None of the articles described equity analyses in which model performance was stratified by sex or race. Twenty-three articles9,19,21,23–25,27,29,30,32–34,36,37,39,41,42,44–46,48–50 (63.9%) reported precision metrics (area under the precision-recall curve, positive predictive value, or F1 score). Twenty-five articles9,20,21,23–28,31,33,34,36,38,40,42–50 (69.4%) included explainability mechanisms to convey the relative importance of input features in determining outputs.…”
Section: Results (mentioning)
confidence: 99%
“…Thirteen articles9,16,17,20,25,27,29,30,35,38,45,46,50 (36.1%) presented a framework that could be used for clinical implementation; none of the articles assessed the efficacy of clinical implementation.…”
Section: Results (mentioning)
confidence: 99%
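The precision metrics named in the statements above are standard classification measures. As a minimal sketch (not from the cited articles), the snippet below computes all three with scikit-learn; the labels, predicted risks, and 0.5 threshold are invented for illustration.

from sklearn.metrics import average_precision_score, precision_score, f1_score

y_true = [0, 1, 1, 0, 1, 0, 1, 0]                   # hypothetical true outcomes
y_prob = [0.2, 0.8, 0.6, 0.3, 0.9, 0.4, 0.7, 0.1]   # hypothetical predicted risks
y_pred = [int(p >= 0.5) for p in y_prob]            # hypothetical decision threshold

print(average_precision_score(y_true, y_prob))  # area under the precision-recall curve
print(precision_score(y_true, y_pred))          # positive predictive value
print(f1_score(y_true, y_pred))                 # F1 score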
“…Shapley additive explanations (SHAP) is an artificial intelligence strategy based on game theory that provides a unified method for interpreting machine learning models (20)(21)(22). It can be used to reveal the intrinsic importance of individual features for a prediction, for example in treatment decision-making.…”
Section: Introduction (mentioning)
confidence: 99%
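As a hedged illustration of the SHAP approach described in that statement, the sketch below uses the open-source Python shap package with a tree-based classifier; the data, features, and outcome are all invented, and this is not the cited study's model.

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # hypothetical patient features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hypothetical binary outcome

model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)   # exact Shapley values for tree ensembles
shap_values = explainer.shap_values(X)  # one additive attribution per feature per patient

# Mean absolute SHAP value per feature yields a global importance ranking,
# the kind of explainability mechanism the surveyed articles report.
print(np.abs(shap_values).mean(axis=0))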
“…Thus, risk assessment from objective evidence is particularly important for clinical decision-making. Artificial intelligence-annotated models are widely accepted as aids to clinical decision-making ( 20 , 21 ), and there remains an opportunity to augment decision-making through artificial intelligence-annotated tools for patients with advanced GC.…”
Section: Introduction (mentioning)
confidence: 99%