Evaluating the performance of ChatGPT in answering questions related to pediatric urology

Ufuk Caglar, Oguzhan Yildiz, Arda Meric, et al.
DOI: 10.1016/j.jpurol.2023.08.003 (2024)

Cited by 33 publications (8 citation statements); references 11 publications.
“…A similar rate was obtained in the study of Çaglar et al, which included 137 questions about pediatric urology; 92% of the questions were answered correctly by ChatGPT (5). The same study stated that 5.1% of the responses to all questions were correct but insufficient, and 2.9% contained correct information along with misleading information (5).…”
Section: Discussion (supporting)
confidence: 82%
“…Despite having limited access to medical data, ChatGPT performs at the level of a third-year medical student in licensing exams, encouraging discussions on emergency medicine within medicine (3). For example, pediatric urology questions were answered very well in a study conducted using text-based artificial intelligence modeling (5). Although ChatGPT is thought to be promising in producing consistent responses, it is important to determine the accuracy of the medical information it provides.…”
Section: Introduction (mentioning)
confidence: 99%
“…The outputs generated by the language model were generally satisfactory and aligned with current medical guidelines. 69 Similarly, in another study, ChatGPT was tasked with answering frequently asked questions encountered in pediatric cases, such as fever management, appropriate antipyretic dosages, and identification of red flag symptoms. The outputs were deemed moderately accurate and consistent.…”
Section: Cracking the Code—Predictive Modeling and Machine Learning (mentioning)
confidence: 99%
“…The scoring system, however, has been adapted from similar previous studies. 6 Lastly, the public intelligibility of ChatGPT's answers was not evaluated, which is something that should be assessed in the future.…”
Section: Sources: Cetin et al Assessed the Quality of YouTube Videos A... (mentioning)
confidence: 99%
“…5 Caglar et al found that the answers provided by ChatGPT to patients' questions about pediatric urology and questions prepared based on pediatric urology guidelines were accurate and adequate. 6 Moreover, Gilson et al revealed that ChatGPT was successful in answering medical school examination questions. 7 ChatGPT could also interpret radiological imaging findings with an acceptable error rate.…”
(mentioning)
confidence: 99%