Purpose: Due to the increasing demand for intensive care unit (ICU) treatment, and to improve the quality and efficiency of care, there is a need for adequate and efficient clinical decision-making. The advancement of artificial intelligence (AI) technologies has resulted in the development of prediction models, which might aid clinical decision-making. This systematic review provides a contemporary overview of the current maturity of AI in the ICU, the research methods behind these studies, and their risk of bias.

Methods: A systematic search was conducted in the Embase, Medline, Web of Science Core Collection, and Cochrane Central Register of Controlled Trials databases to identify eligible studies. Studies using AI to analyze ICU data were considered eligible. Specifically, the study design, study aim, dataset size, level of validation, level of readiness, and the outcomes of clinical trials were extracted. Risk of bias in individual studies was evaluated with the Prediction model Risk Of Bias ASsessment Tool (PROBAST).
Results: Out of 6455 studies identified through the literature search, 494 were included. The most common study design was retrospective [476 studies (96.4% of all studies)], followed by prospective observational [8 (1.6%)] and clinical [10 (2%)] trials. 378 retrospective studies (80.9%) were classified as at high risk of bias. No studies were identified that reported on the outcome evaluation of an AI model integrated into routine clinical practice.
Conclusion: The vast majority of developed ICU-AI models remain within the testing and prototyping environment; only a handful were actually evaluated in clinical practice. A uniform and structured approach can support the development, safe delivery, and implementation of AI to determine clinical benefit in the ICU.
Objective: Although the role of artificial intelligence (AI) in medicine is increasingly studied, most patients do not benefit because the majority of AI models remain in the testing and prototyping environment. The development and implementation trajectory of clinical AI models is complex, and a structured overview is missing. We therefore propose a step-by-step overview to enhance clinicians' understanding and to promote the quality of medical AI research.

Methods: We summarised key elements (such as current guidelines, challenges, regulatory documents, and good practices) that are needed to develop and safely implement AI in medicine.

Conclusion: This overview complements other frameworks in that it is accessible to stakeholders without prior AI knowledge, provides a step-by-step approach incorporating all the key elements and current guidelines essential for implementation, and can thereby help to move AI from bytes to bedside.
Background: A significant proportion of surgical inpatients are admitted longer than necessary. Early identification of patients who do not need care that is strictly provided within hospitals would allow timely discharge of patients to a postoperative nursing home for further recovery. We aimed to develop a model to predict whether a patient needs hospital-specific interventional care beyond the second postoperative day.

Methods: This study included all adult patients discharged from surgical care in the surgical oncology department from June 2017 to February 2020. The primary outcome was to predict whether a patient still needs hospital-specific interventional care beyond the second postoperative day. Hospital-specific care was defined as unplanned reoperations, radiological interventions, and intravenous antibiotics administration. Different analytical methods were compared with respect to the area under the receiver operating characteristic curve, sensitivity, specificity, positive predictive value, and negative predictive value.

Results: Each model was trained on 1,174 episodes. In total, 847 (50.5%) patients required an intervention during postoperative admission. A random forest model performed best, with an area under the receiver operating characteristic curve of 0.88 (95% confidence interval 0.83–0.93), sensitivity of 79.1% (95% confidence interval 0.67–0.92), specificity of 80.0% (0.73–0.87), positive predictive value of 57.6% (0.45–0.70), and negative predictive value of 91.7% (0.87–0.97).

Conclusion: This proof-of-concept study found that a random forest model could successfully predict whether a patient could be safely discharged to a nursing home and no longer needs hospital care. Such a model could aid hospitals in addressing capacity challenges and improve patient flow, allowing for timely surgical care.
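The evaluation metrics reported in the abstract above (sensitivity, specificity, positive and negative predictive value) are all derived from a 2x2 confusion matrix of predicted versus actual outcomes. As a minimal sketch of how these quantities relate, the function below computes them from hypothetical counts; the numbers used are illustrative assumptions, not data from the study.

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix.

    tp/fp/tn/fn are the counts of true positives, false positives,
    true negatives and false negatives, respectively.
    """
    sensitivity = tp / (tp + fn)  # true positive rate: intervention needed, correctly flagged
    specificity = tn / (tn + fp)  # true negative rate: no intervention, correctly cleared
    ppv = tp / (tp + fp)          # positive predictive value: flagged patients who truly needed care
    npv = tn / (tn + fn)          # negative predictive value: cleared patients who truly did not
    return sensitivity, specificity, ppv, npv


# Illustrative (hypothetical) counts for a held-out test set
sens, spec, ppv, npv = binary_metrics(tp=80, fp=20, tn=90, fn=10)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} ppv={ppv:.3f} npv={npv:.3f}")
```

In a discharge-planning context such as this study's, the negative predictive value is the clinically critical number: it bounds how often a patient cleared for transfer to a nursing home would in fact still have needed a hospital-specific intervention.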