2024
DOI: 10.1016/j.wneu.2023.11.062
Information Quality and Readability: ChatGPT's Responses to the Most Common Questions About Spinal Cord Injury

Mustafa Hüseyin Temel, Yakup Erden, Fatih Bağcıer
Cited by 19 publications (4 citation statements). References 20 publications.
“…Utilizing the Flesch-Kincaid Reading Ease score and grade level, they demonstrated reading ease for treatment-related questions similar to our findings (47.67 ± 10.77 vs. 42.9 ± 12.4) but noted a higher grade level compared to ours. This echoes Temel et al's findings [40]. To assess response reliability, they applied the DISCERN test to treatment-related questions, in contrast to our approach, which involved compiling the most frequently asked patient questions.…”
Section: Comparative Analysis of ChatGPT's Performance (supporting)
confidence: 74%
“…In their study, Temel et al [40] evaluated the responses generated by ChatGPT to inquiries related to spinal cord injuries by using the most frequently searched keywords. They found that the complexity of ChatGPT's responses, as indicated by a Flesch-Kincaid grade level of 14.84 ± 1.79, was significantly higher than that in our study, which recorded a grade level of 10.8 ± 2.2.…”
Section: Comparative Analysis of ChatGPT's Performance (mentioning)
confidence: 99%
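The two statements above compare Flesch-Kincaid Reading Ease and Grade Level scores. For context, the minimal sketch below shows how those two scores are conventionally computed from average sentence length and syllables per word. The vowel-run syllable counter and the sample sentence are illustrative assumptions, not the tooling or data used in the cited studies.

```python
import re


def count_syllables(word: str) -> int:
    """Rough heuristic: count runs of vowels, subtract a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)


def flesch_kincaid(text: str) -> tuple[float, float]:
    """Return (reading_ease, grade_level) using the standard Flesch-Kincaid formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level


if __name__ == "__main__":
    sample = ("Spinal cord injury is damage to the spinal cord that causes "
              "temporary or permanent changes in its function.")
    ease, grade = flesch_kincaid(sample)
    print(f"Reading ease: {ease:.1f}, grade level: {grade:.1f}")
```

Higher grade-level values, such as the 14.84 reported for Temel et al, indicate text that demands more years of schooling to read comfortably.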
“…Rather than presenting users with an overwhelming array of web pages, LLMs synthesize information into concise, coherent paragraphs that respond directly to the user's query, thereby facilitating patient education and setting the stage for more substantive discussions within the framework of SDM. For LLMs to effectively contribute to SDM, it is imperative that the information they provide is accessible to patients, necessitating a literacy level commensurate with that of the intended audience. Recent analyses have indicated that outputs from models like ChatGPT often demand high literacy levels for comprehension, posing challenges in the context of patient education on complex medical topics (Dash et al 2023; Haver et al 2024; Onder et al 2024; Temel et al 2024). However, this limitation can be addressed through strategic prompting, the method by which a query is formulated by the user (Gao 2023).…”
Section: Bridging the Gap With LLMs (mentioning)
confidence: 99%
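As an illustration of the "strategic prompting" idea referenced above, the sketch below asks a model to answer a patient question at a target reading grade level. The prompt wording, the target grade, and the model name are assumptions for illustration only, not the method of any cited study; it uses the OpenAI Python client's chat-completions interface.

```python
from openai import OpenAI  # assumes the openai v1 Python client is installed


def ask_at_reading_level(question: str, grade_level: int = 8) -> str:
    """Ask the model to answer a patient question at a target reading grade level.

    The prompt wording and model name below are illustrative placeholders.
    """
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        f"Answer the following patient question about spinal cord injury "
        f"in plain language, at roughly a US grade-{grade_level} reading level, "
        f"using short sentences and avoiding medical jargon.\n\n"
        f"Question: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute the model you actually use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_at_reading_level("What is a spinal cord injury?"))
```

Re-scoring the returned answer with a readability metric, and re-prompting if it still reads above the target grade, is one simple way to close the loop.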
“…Numerous authors in various fields have tried posing clinical questions to AI. The results are variable, but all authors conclude that, thus far, AI cannot compete with a real doctor [30][31][32][33][34]. In a study on paediatric emergencies, for example, ChatGPT/GPT-4 reliably advised calling emergency services in only 54% of cases, gave correct first-aid instructions in 45%, and incorrectly advised advanced life support techniques to parents in 13.6% [35].…”
Section: PLOS Digital Health (mentioning)
confidence: 99%