2023
DOI: 10.1097/md.0000000000034068

Assessing ChatGPT’s capacity for clinical decision support in pediatrics: A comparative study with pediatricians using KIDMAP of Rasch analysis

Abstract: Background: The application of large language models in clinical decision support (CDS) is an area that warrants further investigation. ChatGPT, a prominent large language model developed by OpenAI, has shown promising performance across various domains. However, there is limited research evaluating its use specifically in pediatric clinical decision-making. This study aimed to assess ChatGPT's potential as a CDS tool in pediatrics by evaluating its performance on 8 common clinical symptom prompts. Study o…
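For readers who want to try a prompt-based evaluation of this kind, the sketch below shows one way to submit a pediatric symptom prompt to an OpenAI model through the official Python client. This is a minimal illustration only: the model name, system instruction, and example symptom prompt are assumptions, and it does not reproduce the study's actual prompts, interface, or scoring method.

# Minimal sketch: querying an OpenAI model with a pediatric symptom prompt.
# Assumptions: the `openai` Python package is installed, OPENAI_API_KEY is set,
# and the prompt text below is illustrative, not the study's actual wording.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

symptom_prompt = (
    "A 3-year-old presents with a 2-day history of fever and barking cough. "
    "List the most likely diagnoses and the recommended initial management."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; substitute whichever model is under evaluation
    messages=[
        {"role": "system", "content": "You are a pediatric clinical decision support assistant."},
        {"role": "user", "content": symptom_prompt},
    ],
    temperature=0.2,  # lower temperature for more consistent, reproducible answers
)

print(response.choices[0].message.content)

Responses gathered this way could then be rated by pediatricians and compared item by item, which is the general shape of the comparison the abstract describes.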

Cited by 35 publications (14 citation statements)
References: 39 publications
“…This aligns with the growing trend of AI integration in healthcare, where ease of use is paramount for widespread adoption [35]. However, the participants’ perception of ChatGPT as a “limited” source, primarily due to its dependence on pre-existing datasets and the absence of real-time and accurate data updates, highlights a critical aspect of AI in healthcare: the need for continuous AI models in learning and adaptation to verified yet evolving medical knowledge [36].…”
Section: Discussion (mentioning)
confidence: 79%
“…The application of ChatGPT as an organizer and information source reflects its perceived role as an adjunct, rather than a replacement, in clinical decision-making processes [36]. This is particularly noteworthy in the context of pediatric intensive care, where the complexity and variability of cases necessitate human expertise and judgment.…”
Section: Discussion (mentioning)
confidence: 99%
“…In [39] it was found that ChatGPT correctly answered 74% of the trivia questions related to heart diseases. Specifically, the accuracy of ChatGPT scored impressively in the domains of coronary artery disease (80%), pulmonary and venous thrombotic embolism (80%), atrial fibrillation (70%), heart failure (80%) and cardiovascular risk management (60%). [40] evaluated ChatGPT as a support tool for breast tumor board decision making, [41] assessed ChatGPT's capacity for clinical decision support in paediatrics, [7] evaluated the capacity of ChatGPT as a clinical decision support in triaging patients for appropriate imaging services, and [42] did a comparative analysis of humans and LLMs in decision making abilities. The analysis found that there was a moderate level of agreement between the decisions of humans and LLMs.…”
Section: LLM As a Decision Support Tool (mentioning)
confidence: 99%
“…Some authors performed studies in a clinical setting, assessing ChatGPT's performance in discriminating symptoms and providing possible diagnoses, hence assisting with the decision-making process. It appears that with the right inputs, ChatGPT is often able to perform well in the clinical scenario, addressing accurate diagnosis and possible treatment options [18]. Whiles et al [19] conducted the first study to examine the accuracy of ChatGPT in patient counselling responses.…”
Section: Assisting Urologists In Decision-making (mentioning)
confidence: 99%