ChatGPT is receiving increasing attention and has a variety of application scenarios in clinical practice. In clinical decision support, ChatGPT has been used to generate differential diagnosis lists, support and optimize clinical decision-making, and provide insights for cancer screening decisions. In addition, ChatGPT has been used for intelligent question answering to provide reliable information about diseases and medical queries. In medical documentation, ChatGPT has proven effective in generating patient clinical letters, radiology reports, medical notes, and discharge summaries, improving efficiency and accuracy for health care providers. Future research directions include real-time monitoring and predictive analytics, precision medicine and personalized treatment, the role of ChatGPT in telemedicine and remote health care, and integration with existing health care systems. Overall, ChatGPT is a valuable tool that complements the expertise of health care providers and improves clinical decision-making and patient care. However, ChatGPT is a double-edged sword, and its benefits and potential dangers must be carefully considered and studied. In this viewpoint, we discuss recent advances in ChatGPT research in clinical practice and outline possible risks and challenges of its clinical use, with the aim of guiding and supporting future research on ChatGPT-like artificial intelligence in health care.
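As a rough illustration of how such question answering might be wired into a clinical workflow, the following minimal Python sketch queries a ChatGPT-style model for a draft differential diagnosis via the openai client library. The model name, prompt wording, and case summary are hypothetical placeholders, not the configuration used in any study cited here, and any output would require review by a qualified clinician.

# Minimal sketch: querying a ChatGPT-style model for a draft differential
# diagnosis list. Assumes the openai Python client (v1+) and an API key in
# the OPENAI_API_KEY environment variable; model name, system prompt, and
# case summary are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

case_summary = (
    "58-year-old male with 2 weeks of exertional dyspnea, "
    "orthopnea, and bilateral ankle edema."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a clinical decision-support assistant. "
                    "List differential diagnoses with brief rationales."},
        {"role": "user", "content": case_summary},
    ],
)

print(response.choices[0].message.content)  # draft output for clinician review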
ChatGPT has promising applications in health care, but potential ethical issues need to be addressed proactively to prevent harm. ChatGPT presents potential ethical challenges from legal, humanistic, algorithmic, and informational perspectives. Legal ethics concerns arise from the unclear allocation of responsibility when patient harm occurs and from potential breaches of patient privacy due to data collection; clear rules and legal boundaries are needed to properly allocate liability and protect users. Humanistic ethics concerns arise from the potential disruption of the physician-patient relationship, humanistic care, and issues of integrity: overreliance on artificial intelligence (AI) can undermine compassion and erode trust, and transparency and disclosure of AI-generated content are critical to maintaining integrity. Algorithmic ethics concerns include algorithmic bias, responsibility, transparency and explainability, and validation and evaluation. Information ethics concerns include data bias, validity, and effectiveness: biased training data can lead to biased output, and overreliance on ChatGPT can reduce patient adherence and encourage self-diagnosis. Ensuring the accuracy, reliability, and validity of ChatGPT-generated content requires rigorous validation and ongoing updates grounded in clinical practice. To navigate this evolving ethical landscape, AI in health care must adhere to the strictest ethical standards. With comprehensive ethical guidelines, health care professionals can ensure the responsible use of ChatGPT, promote accurate and reliable information exchange, protect patient privacy, and empower patients to make informed decisions about their health care.
Background Heart failure (HF) is a common disease and a major public health problem. HF mortality prediction is critical for developing individualized prevention and treatment plans. However, due to their lack of interpretability, most HF mortality prediction models have not yet reached clinical practice. Objective We aimed to develop an interpretable model to predict the mortality risk of patients with HF in intensive care units (ICUs) and to use the SHapley Additive exPlanations (SHAP) method to explain the extreme gradient boosting (XGBoost) model and explore prognostic factors for HF. Methods In this retrospective cohort study, model development and performance comparison were carried out on the eICU Collaborative Research Database (eICU-CRD). We extracted data from the first 24 hours of each ICU admission, and the data set was randomly divided, with 70% used for model training and 30% for model validation. The prediction performance of the XGBoost model was compared with that of three other machine learning models by the area under the receiver operating characteristic curve (AUC). We used the SHAP method to explain the XGBoost model. Results A total of 2798 eligible patients with HF were included in the final cohort. The observed in-hospital mortality of patients with HF was 9.97%. The XGBoost model had the highest predictive performance of the four models, with an AUC of 0.824 (95% CI 0.777-0.871), whereas the support vector machine had the poorest generalization ability (AUC 0.701, 95% CI 0.643-0.758). The decision curve showed that the net benefit of the XGBoost model surpassed that of the other machine learning models at threshold probabilities of 10% to 28%. The SHAP method revealed the top 20 predictors of HF mortality by importance ranking, and mean blood urea nitrogen was recognized as the most important predictor variable. Conclusions The interpretable predictive model helps physicians more accurately predict the mortality risk of ICU patients with HF, thereby supporting better treatment planning and optimal resource allocation. In addition, the interpretable framework increases the transparency of the model and helps physicians understand the reliability of its predictions.
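The modeling pipeline described in this abstract can be sketched in Python as follows. This is not the authors' code: the synthetic data, class balance, feature count, and hyperparameters are placeholders standing in for the eICU-CRD variables, assuming the xgboost, shap, and scikit-learn libraries.

# Hedged sketch of the XGBoost + SHAP workflow described above.
# Synthetic data stand in for the eICU-CRD predictors; all names,
# sizes, and hyperparameters are illustrative assumptions.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# ~10% positive class, loosely mirroring the 9.97% observed mortality
X, y = make_classification(n_samples=2798, n_features=20,
                           weights=[0.9], random_state=0)

# 70/30 split, mirroring the study design
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4,
                          learning_rate=0.1, eval_metric="logloss")
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"hold-out AUC: {auc:.3f}")

# SHAP values give per-feature contributions for each prediction;
# the mean absolute SHAP value ranks global feature importance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
importance = np.abs(shap_values).mean(axis=0)
print("top features by mean |SHAP|:", np.argsort(importance)[::-1][:5])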
Background Heart failure (HF) is a common clinical syndrome associated with substantial morbidity, a heavy economic burden, and high risk of readmission. eHealth self-management interventions may be an effective way to improve HF clinical outcomes. Objective The aim of this study was to systematically review the evidence for the effectiveness of eHealth self-management in patients with HF. Methods This study included only randomized controlled trials (RCTs) that compared the effects of eHealth interventions with usual care in adult patients with HF, using searches of the EMBASE, PubMed, CENTRAL (Cochrane Central Register of Controlled Trials), and CINAHL databases from January 1, 2011, to July 12, 2022. The Cochrane Risk of Bias tool (RoB 2) was used to assess the risk of bias for each study. The Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) criteria were used to rate the certainty of the evidence for each outcome of interest. Meta-analyses were performed using Review Manager (RevMan v.5.4) and R (v.4.1.0 x64) software. Results In total, 24 RCTs with 9634 participants met the inclusion criteria. Compared with the usual-care group, eHealth self-management interventions could significantly reduce all-cause mortality (odds ratio [OR] 0.83, 95% CI 0.71-0.98, P=.03; GRADE: low quality) and cardiovascular mortality (OR 0.74, 95% CI 0.59-0.92, P=.008; GRADE: moderate quality), as well as all-cause readmissions (OR 0.82, 95% CI 0.73-0.93, P=.002; GRADE: low quality) and HF-related readmissions (OR 0.77, 95% CI 0.66-0.90, P<.001; GRADE: moderate quality). The meta-analyses also suggested that eHealth interventions may increase patients’ knowledge of HF and improve their quality of life, but these effects were not statistically significant. However, eHealth interventions could significantly increase medication adherence (OR 1.82, 95% CI 1.42-2.34, P<.001; GRADE: low quality) and improve self-care behaviors (standardized mean difference –1.34, 95% CI –2.46 to –0.22, P=.02; GRADE: very low quality). A subgroup analysis of the primary outcomes by enrollment setting found that eHealth interventions were more effective in patients with HF after discharge than in those in the ambulatory clinic setting. Conclusions eHealth self-management interventions could benefit the health of patients with HF in various ways. However, the clinical effects of eHealth interventions in patients with HF are affected by multiple aspects, and more high-quality studies are needed to demonstrate effectiveness.
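For intuition about the pooled odds ratios reported above, the minimal sketch below shows fixed-effect, inverse-variance pooling of log odds ratios, the basic computation that tools such as RevMan automate. The per-study values are hypothetical, and the actual review may have used different pooling methods (e.g., random-effects or Mantel-Haenszel).

# Minimal sketch of fixed-effect, inverse-variance pooling of odds ratios.
# The per-study ORs and 95% CIs below are hypothetical, not the review's data.
import math

# (OR, lower 95% CI, upper 95% CI) for three hypothetical trials
studies = [(0.80, 0.60, 1.07), (0.85, 0.65, 1.11), (0.78, 0.55, 1.10)]

weights, weighted_logs = [], []
for or_, lo, hi in studies:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from CI width
    w = 1.0 / se**2                                   # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_or)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))
print(f"pooled OR {pooled_or:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")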