This study aimed to evaluate the performance of machine learning (ML) models and to compare them with the logistic regression (LR) technique in predicting cognitive impairment related to post-intensive care syndrome (PICS-CI). We conducted a prospective observational study of ICU patients at two tertiary hospitals. A cohort of 2079 patients was screened, and 481 patients were ultimately included. Six ML models were considered: decision tree (DT), random forest (RF), XGBoost, neural network (NN), naïve Bayes (NB), and support vector machine (SVM); these were compared with logistic regression (LR). Discriminative ability was evaluated by the area under the receiver operating characteristic curve (AUC), and calibration was assessed with calibration belt plots and the Hosmer–Lemeshow test. Decision curve analysis was performed to quantify clinical utility. Duration of delirium, poor Richards–Campbell Sleep Questionnaire (RCSQ) score, advanced age, and sepsis were the most frequent and important candidate risk factors for PICS-CI. All ML models showed good performance (AUC range: 0.822–0.906). The NN model had the highest AUC (0.906 [95% CI 0.857–0.955]), which was slightly higher than, but not significantly different from, that of LR (0.898 [95% CI 0.847–0.949]) (P > 0.05, DeLong test). Given the overfitting and complexity of some ML models, the LR model was used to develop a web-based risk calculator to aid decision-making (https://model871010.shinyapps.io/dynnomapp/). With low-dimensional data, LR may perform as well as other, more complex ML models in predicting cognitive impairment after ICU hospitalization.
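As a conceptual illustration of what such a web-based calculator computes, the minimal Python sketch below evaluates a logistic regression risk equation over the four predictors highlighted above. The intercept and coefficients are placeholders for illustration only; they are not the values estimated in this study, and the published calculator itself runs as a Shiny app.

```python
import math

# Placeholder intercept and coefficients, for illustration only --
# NOT the values estimated in the study.
INTERCEPT = -4.0
COEFS = {
    "delirium_days": 0.30,  # duration of delirium (days)
    "rcsq_score":   -0.03,  # Richards-Campbell Sleep Questionnaire score (0-100)
    "age_years":     0.04,  # age in years
    "sepsis":        0.80,  # 1 if sepsis during ICU stay, else 0
}

def predicted_risk(patient):
    """Logistic-regression predicted probability of PICS-CI."""
    lp = INTERCEPT + sum(COEFS[k] * patient[k] for k in COEFS)
    return 1.0 / (1.0 + math.exp(-lp))

# Hypothetical 70-year-old with 3 days of delirium, poor sleep, and sepsis.
print(predicted_risk(
    {"delirium_days": 3, "rcsq_score": 40, "age_years": 70, "sepsis": 1}
))
```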
Objective: The aim of this study was to explore whether machine learning (ML) algorithms are more accurate than traditional statistical models in predicting cognitive impairment related to post-intensive care syndrome (PICS-CI).
Research Methodology: We conducted a prospective observational study of ICU patients at two tertiary hospitals. A cohort of 2079 patients was screened, and 481 patients were ultimately included. Six ML models were considered: decision tree (DT), random forest (RF), XGBoost, neural network (NN), naïve Bayes (NB), and support vector machine (SVM); these were compared with logistic regression (LR). Discriminative ability was evaluated by the area under the receiver operating characteristic curve (AUC), and calibration was assessed with calibration belt plots and the Hosmer–Lemeshow test. Decision curve analysis was performed to quantify clinical utility.
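As a sketch of how such a head-to-head comparison can be set up (using scikit-learn and XGBoost on synthetic stand-in data; the hyperparameters, preprocessing, and predictors are assumptions, not the study's actual pipeline), each of the six ML models and LR is scored by cross-validated AUC:

```python
# Illustrative comparison of the six ML models and LR by cross-validated AUC.
# Data, hyperparameters, and preprocessing are placeholders, not the study's.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

# Synthetic stand-in for the 481-patient predictor matrix and PICS-CI labels.
X, y = make_classification(n_samples=481, n_features=10, weights=[0.7, 0.3],
                           random_state=0)

models = {
    "DT": DecisionTreeClassifier(max_depth=4, random_state=0),
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
    "XGBoost": XGBClassifier(n_estimators=300, learning_rate=0.05,
                             eval_metric="logloss", random_state=0),
    "NN": make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32,),
                                      max_iter=2000, random_state=0)),
    "NB": GaussianNB(),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True, random_state=0)),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```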
Results: All ML models showed good performance (AUC range: 0.822–0.906). The NN model had the highest AUC (0.906 [95% CI 0.857–0.955]), which was slightly higher than, but not significantly different from, that of LR (0.898 [95% CI 0.847–0.949]) (P > 0.05, DeLong test). Except for the DT, XGBoost, and NB models, the ML models demonstrated good agreement between the predicted and observed probability of PICS-CI (Hosmer–Lemeshow test, P > 0.05). Decision curve analysis showed a higher net benefit for most of the ML models. Given the overfitting and complexity of some ML models, the LR model was then used to develop a web-based risk calculator to aid decision-making (https://model871010.shinyapps.io/dynnomapp/).
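The DeLong test used for the NN-versus-LR comparison accounts for the fact that both AUCs are computed on the same patients (correlated ROC curves). A minimal implementation is sketched below on synthetic scores; it is an illustration of the test itself, not the study's data or software.

```python
# Minimal sketch of DeLong's test for two correlated AUCs, run on synthetic
# scores -- an illustration of the test, not the study's data or software.
import numpy as np
from scipy.stats import norm

def delong_test(y_true, scores_a, scores_b):
    """Two-sided p-value for H0: AUC_a == AUC_b on the same subjects."""
    y_true = np.asarray(y_true).astype(bool)
    aucs, v10, v01 = [], [], []
    for s in (np.asarray(scores_a, float), np.asarray(scores_b, float)):
        pos, neg = s[y_true], s[~y_true]
        # psi(x, y) = 1 if x > y, 0.5 if x == y, 0 otherwise
        psi = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
        v10.append(psi.mean(axis=1))  # placement per positive case
        v01.append(psi.mean(axis=0))  # placement per negative case
        aucs.append(psi.mean())
    s10, s01 = np.cov(v10), np.cov(v01)  # 2x2 covariance matrices
    m, n = v10[0].size, v01[0].size
    var = (s10[0, 0] + s10[1, 1] - 2 * s10[0, 1]) / m \
        + (s01[0, 0] + s01[1, 1] - 2 * s01[0, 1]) / n
    z = (aucs[0] - aucs[1]) / np.sqrt(var)
    return aucs[0], aucs[1], 2 * norm.sf(abs(z))

# Synthetic example: two risk-score vectors for the same 481 patients.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 481)
score_nn = 0.60 * y + rng.normal(0, 0.5, 481)
score_lr = 0.55 * y + rng.normal(0, 0.5, 481)
auc_nn, auc_lr, p = delong_test(y, score_nn, score_lr)
print(f"AUC_NN = {auc_nn:.3f}, AUC_LR = {auc_lr:.3f}, p = {p:.3f}")
```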
Conclusion: With low-dimensional data, logistic regression may perform as well as ML models in predicting cognitive impairment after ICU hospitalization.