2024
DOI: 10.1097/upj.0000000000000490
Comparison of ChatGPT and Traditional Patient Education Materials for Men’s Health

Yash B. Shah,
Anushka Ghosh,
Aaron R. Hochberg
et al.

Abstract: Introduction: ChatGPT is an artificial intelligence platform available to patients seeking medical advice. Traditionally, urology patients consulted official provider-created materials, particularly the Urology Care Foundation™ (UCF). Today, men increasingly go online due to the rising costs of health care and the stigma surrounding sexual health. Online health information is largely inaccessible to laypersons, as it exceeds the recommended American sixth- to eighth-grade reading level. We conducted a comparative…

Cited by 23 publications (8 citation statements)
References 24 publications
“…Notably, ChatGPT has the known ability to adjust reading levels of texts based on education level. 15 Further study of implementation of this AI tool into health care may consider whether responses generated by ChatGPT retain clarity, comprehensiveness, and accuracy when reading level of responses are requested to be appropriate for the general patient population. It should also be noted that responses from the two sources were of varying length, which may have contributed to complexity bias and affected scoring from participants.…”
Section: Discussion
“…A possible explanation for why patients indicated a preference for more complex material could be a well-documented cognitive error known as "complexity bias," wherein people subconsciously demonstrate a preference for the complicated over the simple. 14 This logical fallacy may have prompted non-medical individuals to give greater credence to the more complex AI-generated material over the more readable ASRM material, although both were well above recommended readability levels. Notably, ChatGPT has the known ability to adjust reading levels of texts based on education level.…”
Section: Accepted Manuscript
“…Current automated simplification methods scored poorly due to grammatical errors, repetition, and inconsistencies in the autogenerated documents [68]. Artificial intelligence–derived text simplification methods may overcome these barriers by matching a document’s reading level to the readers’ needs, as shown in a study where ChatGPT was able to modify answers to men’s health condition questions to accommodate lower reading levels [69,70]. However, popularly used AI tools, such as ChatGPT, need considerable evaluation to minimize inaccurate information delivery and improve comprehensibility.…”
Section: Discussion
“…However, popularly used AI tools, such as ChatGPT, need considerable evaluation to minimize inaccurate information delivery and improve comprehensibility. Current studies indicate that these tools lack citations for the information they provide and cannot differentiate between low-quality and high-quality information [70,71].…”
Section: Discussion
“…Another study published by the American Urological Association compared readability between patient education materials created by urologists and responses from ChatGPT version 3.5. ChatGPT had significantly poorer readability than provider-created articles across all topics that were tested, despite being prompted to provide responses at a sixth-grade reading level (all p&lt;0.001) [25]. It is important to identify opportunities and limitations for chatbot use in patient care settings so we can maximize its impact.…”
Section: Discussion