BACKGROUND Emergency medicine stands to benefit from AI because of its distinctive challenges, such as high patient volume and the need for urgent interventions. However, it remains difficult to assess how well AI systems apply to real-world emergency medicine practice, which requires not only medical knowledge but also adaptable problem-solving and effective communication skills.

OBJECTIVE We aimed to evaluate ChatGPT's performance against that of human doctors in simulated emergency medicine settings, using the Clinical Performance Examination (CPX) framework.

METHODS Twenty-eight text-based cases and four image-based cases relevant to emergency medicine were selected. Twelve human doctors were recruited to represent medical professionals. Both ChatGPT and the human doctors were instructed to manage each case as they would in a real clinical setting, interacting with simulated patients. After the CPX sessions, an emergency medicine professor rated the conversation records on history taking, clinical accuracy, and empathy using a 5-point Likert scale. Simulated patients completed a 5-point survey covering overall comprehensibility, credibility, and concern reduction for each case; they also rated how closely the doctor they interacted with resembled a human doctor. Mean scores for ChatGPT were then compared with those of the human doctors.

RESULTS ChatGPT scored significantly higher than the physicians in both history taking (mean score 3.91 [SD 0.67] vs. 2.67 [SD 0.78], P < 0.01) and empathy (mean score 4.50 [SD 0.67] vs. 1.75 [SD 0.62], P < 0.01). However, there was no significant difference in clinical accuracy. In the simulated-patient survey, ChatGPT scored higher for concern reduction (mean score 4.33 [SD 0.78] vs. 3.58 [SD 0.90], P = 0.04). ChatGPT also scored higher for comprehensibility and credibility, but these differences were not significant. No significant difference was observed in the human-similarity rating (mean score 3.50 [SD 1.78] vs. 3.25 [SD 1.86], P = 0.71).

CONCLUSIONS ChatGPT's performance highlights its potential as a valuable adjunct in emergency medicine, demonstrating comparable proficiency in knowledge application, efficiency, and empathetic patient interaction. These results suggest that a collaborative healthcare model integrating AI with human expertise could enhance patient care and outcomes.
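The abstract reports group means and standard deviations alongside P values but does not name the statistical test or the per-group sample sizes. As a minimal sketch (not the authors' analysis), a Welch two-sample t-test recomputed from the reported summary statistics could reproduce this style of comparison; the sample sizes below (n = 32 per group, i.e., one rating per case) are hypothetical.

```python
# Minimal sketch: Welch's two-sample t-test from the summary statistics
# reported for history taking. The test choice and the per-group sample
# sizes (nobs) are assumptions, not details given in the abstract.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=3.91, std1=0.67, nobs1=32,  # ChatGPT, history taking (n assumed)
    mean2=2.67, std2=0.78, nobs2=32,  # human doctors, history taking (n assumed)
    equal_var=False,                  # Welch's test (unequal variances)
)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```

Under these assumed sample sizes, the result is consistent with the reported P < 0.01, though the exact t statistic depends on the true n values.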
Acute care provided in the hospital's emergency department (ED) is a key component of the healthcare system, serving as an essential bridge between outpatient and inpatient care. However, given the emergency-driven nature of presenting problems and the urgency of care required, the ED is more prone to unintended medication regimen changes than other departments. Ensuring quality use of medicines (QUM), defined as "choosing suitable medicines and using them safely and effectively", remains a challenge in the ED and therefore requires special attention. The role of pharmacists in the ED has evolved considerably, transitioning from traditional inventory management to delivering comprehensive clinical pharmacy services such as medication reconciliation and review. Emerging roles for ED pharmacists now include medication charting, prescribing, and active participation in resuscitation efforts; ED pharmacists are also involved in research and educational initiatives. Nevertheless, EDs continue to face heightened service demands, both in the number of presenting patients and in the length of ED stays. Addressing these challenges requires innovation and reform in ED care to manage this complex, rising demand effectively and to meet government-imposed service quality indicators. One example is redesigning the medication use process, which may require a shift in skill mix or an expansion of ED pharmacists' roles, particularly in medication charting and prescribing. Collaborative efforts between pharmacists and physicians have demonstrated positive outcomes and should therefore be adopted as standard practice for improving the quality use of medicines in the ED.