2018
DOI: 10.1371/journal.pmed.1002709
Characterising risk of in-hospital mortality following cardiac arrest using machine learning: A retrospective international registry study

Abstract: Background: Resuscitated cardiac arrest is associated with high mortality; however, the ability to estimate risk of adverse outcomes using existing illness severity scores is limited. Using in-hospital data available within the first 24 hours of admission, we aimed to develop more accurate models of risk prediction using both logistic regression (LR) and machine learning (ML) techniques, with a combination of demographic, physiologic, and biochemical information. Methods and findings: Patient-level data were extra…

Cited by 107 publications (74 citation statements)
References 26 publications
“…While the complex logic underlying the whole model may be too much for a surrogate model to learn, the logic for one instance or a group of similar instances (e.g., coexpressed genes), hence local, may be simple enough. For example, LIME was used to better understand why some patients were misclassified by a black box model predicting survival after cardiac arrest [59]. A LIME model for a patient that was mispredicted to survive showed that the black box model was too heavily influenced by certain features (e.g., healthy neurologic status, lack of chronic respiratory illness) and did not place sufficient weight on other features that are also important (e.g., elevated creatinine, advanced age).…”
Section: Surrogate Strategies For Interpreting ML Models (citation type: mentioning; confidence: 99%)
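The excerpt above describes LIME's instance-level use. As a concrete illustration, here is a minimal sketch using the Python lime package on a hypothetical tabular classifier; the feature names, model, and data are invented stand-ins, not taken from the cited study.

# A minimal LIME sketch for one patient-level prediction. Everything
# here (features, data, model) is a hypothetical stand-in.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["age", "creatinine", "gcs_motor", "chronic_resp_illness"]
X_train = rng.normal(size=(500, 4))        # stand-in training data
y_train = rng.integers(0, 2, size=500)     # stand-in outcomes

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["died", "survived"],
    mode="classification",
)

# Explain one (possibly misclassified) patient: LIME perturbs the
# instance, queries the black-box model, and fits a local linear model.
exp = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")

exp.as_list() returns signed local feature weights for this one patient, mirroring how the cited analyses inspect which features a black-box model over- or under-weighted in a mispredicted case.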
“…The local interpretable model-agnostic explanation (LIME) [120] generates a local explanation of the model behaviour using a shallow model. It has been even used to explain ML models for the prediction of in-hospital mortality [121]. However, it has also been argued that linear models, rule-based models, and decision trees are not intrinsically interpretable [117].…”
Section: B. Interpretability (citation type: mentioning; confidence: 99%)
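The mechanism this excerpt describes (a shallow model fit locally around one instance) can be sketched directly, without the lime package: perturb the instance, weight the perturbed samples by proximity, and fit a weighted linear surrogate to the black box's outputs. The toy model and all parameter choices below are hypothetical; real LIME adds discretisation and feature selection on top of this.

# A hand-rolled local surrogate in the spirit of LIME. Illustrative
# only; the kernel width and sampling scheme are arbitrary choices.
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(black_box_proba, x, n_samples=1000, width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Perturb around the instance of interest.
    Z = x + rng.normal(scale=width, size=(n_samples, x.shape[0]))
    # Query the black box for its predicted probability of class 1.
    y = black_box_proba(Z)[:, 1]
    # Proximity kernel: nearby perturbations count more.
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * width**2))
    # Shallow (linear) surrogate fit to the black box's local behaviour.
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
    return surrogate.coef_  # local feature attributions

# Toy black box whose true local gradient direction is known, so the
# surrogate's coefficients can be sanity-checked against [2.0, -1.0, 0.5].
def toy_proba(Z):
    p = 1 / (1 + np.exp(-Z @ np.array([2.0, -1.0, 0.5])))
    return np.column_stack([1 - p, p])

print(local_surrogate(toy_proba, np.zeros(3)))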
“…For example, LIME was used to better understand why some patients (i.e. instances) were misclassified by a black box model predicting survival after cardiac arrest [51]. A LIME model for a patient that was mis-predicted to survive showed that the black box model was too heavily influenced by certain features (e.g.…”
Section: Surrogate Strategies For Interpreting Machine Learning Models (citation type: mentioning; confidence: 99%)