Since ChatGPT's debut, generative AI technologies have surged in popularity within the AI community. Recognized for their cutting-edge language processing capabilities, these technologies excel in generating human-like conversations, enabling open-ended dialogues with end-users. We consider that the future adoption of generative AI for critical public domain applications transforms the accountability relationship. Previously characterized by the relationship between an actor and a forum, accountability dynamics become more complicated with the introduction of generative systems, as the initial interaction shifts from the actor to an advanced generative system. We conceptualise a dual-phase accountability relationship involving the actor, the forum, and the generative AI as a foundational approach to understanding public sector accountability in the context of these technologies. Focusing on the integration of generative AI to assist healthcare triaging, we identify potential challenges for maintaining effective accountability relationships, highlighting concerns that these technologies relegate actors to a secondary phase of accountability and create a disconnect between government actors and citizens. We suggest recommendations aimed at disentangling the complexities generative systems bring to the accountability relationship. As we speculate on the technologies' disruptive impact on accountability, we urge public servants, policymakers, and system designers to deliberate on the potential accountability impact generative systems produce prior to their deployment.
Background: In-hospital mortality, prolonged length of stay (LOS), and 30-day readmission are common outcomes in the intensive care unit (ICU). Traditional scoring systems and machine learning models for predicting these outcomes usually ignore the characteristics of ICU data, which take a time-series form. We aimed to use time-series deep learning models with a selective combination of three widely used scoring systems to predict these outcomes.
Materials and methods: A retrospective cohort study was conducted on 40,083 patients in the ICU from the Medical Information Mart for Intensive Care-IV (MIMIC-IV) database. Three deep learning models, namely, recurrent neural network (RNN), gated recurrent unit (GRU), and long short-term memory (LSTM) with attention mechanisms, were trained to predict in-hospital mortality, prolonged LOS, and 30-day readmission using variables collected during the initial 24 h after ICU admission or the last 24 h before discharge. The inclusion of variables was based on three widely used scoring systems, namely, APACHE II, SOFA, and SAPS II, and the predictors consisted of time-series vital signs, laboratory tests, medication, and procedures. The patients were randomly divided into a training set (80%) and a test set (20%), which were used for model development and model evaluation, respectively. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and Brier scores were used to evaluate model performance. Variable significance was identified through attention mechanisms.
Results: A total of 33 variables for 40,083 patients were enrolled for mortality and prolonged LOS prediction and 36,180 for readmission prediction. The rates of occurrence of the three outcomes were 9.74%, 27.54%, and 11.79%, respectively. For each of the three outcomes, the performance of RNN, GRU, and LSTM did not differ greatly. Mortality prediction models, prolonged LOS prediction models, and readmission prediction models achieved AUCs of 0.870 ± 0.001, 0.765 ± 0.003, and 0.635 ± 0.018, respectively. The top significant variables co-selected by the three deep learning models were Glasgow Coma Scale (GCS), age, blood urea nitrogen, and norepinephrine for mortality; GCS, invasive ventilation, and blood urea nitrogen for prolonged LOS; and blood urea nitrogen, GCS, and ethnicity for readmission.
Conclusion: The prognostic prediction models established in our study achieved good performance in predicting common outcomes of patients in the ICU, especially in mortality prediction. In addition, GCS and blood urea nitrogen were identified as the most important factors strongly associated with adverse ICU events.
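To make the modeling approach concrete, the sketch below shows a minimal PyTorch LSTM classifier with a simple additive attention layer over a 24-hour window of ICU variables, trained as a binary predictor (e.g. in-hospital mortality). It is not the authors' code: the attention here is computed over time steps rather than over individual variables, and the feature count, hidden size, and shapes are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of an LSTM-with-attention binary classifier for ICU
# time-series outcomes. Illustrative only; hyperparameters and the
# time-step attention variant are assumptions, not the paper's setup.
import torch
import torch.nn as nn


class LSTMAttentionClassifier(nn.Module):
    def __init__(self, n_features: int = 33, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        # Additive attention: score each time step, then softmax over time.
        self.attn = nn.Linear(hidden_size, 1)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor):
        # x: (batch, time_steps, n_features), e.g. hourly values over 24 h
        h, _ = self.lstm(x)                                # (batch, T, hidden)
        scores = self.attn(h).squeeze(-1)                  # (batch, T)
        weights = torch.softmax(scores, dim=1)             # attention over time
        context = (weights.unsqueeze(-1) * h).sum(dim=1)   # (batch, hidden)
        logit = self.head(context).squeeze(-1)             # (batch,)
        return logit, weights                              # weights ~ step importance


if __name__ == "__main__":
    # Random data standing in for MIMIC-IV features: 8 patients,
    # 24 hourly steps, 33 variables.
    model = LSTMAttentionClassifier()
    x = torch.randn(8, 24, 33)
    y = torch.randint(0, 2, (8,)).float()
    logit, attn_weights = model(x)
    loss = nn.functional.binary_cross_entropy_with_logits(logit, y)
    loss.backward()  # a training loop would follow with an optimizer step
```

In this kind of setup, the learned attention weights provide a rough importance signal that can be aggregated across the test set, which is the general idea behind how the study reports variable significance (e.g. GCS and blood urea nitrogen ranking highly).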