2023
DOI: 10.1097/ju.0000000000003615

Evaluating the Effectiveness of Artificial Intelligence–powered Large Language Models Application in Disseminating Appropriate and Readable Health Information in Urology

Ryan Davis,
Michael Eppler,
Oluwatobiloba Ayo-Ajibola
et al.

Abstract: Study Need and Importance: In 2022, Version 3.5 of ChatGPT, an artificial intelligence–powered large language model (LLM), was released. Its adoption immediately burgeoned, and given that patients most commonly use the Internet as a primary medical information source, there is reason to believe they will adopt ChatGPT for medical information too. Urological patients may be particularly likely to use ChatGPT, as situations requiring urological care are broad-ranging, with diverse treatment options from office pr…

Cited by 54 publications (17 citation statements)
References 24 publications
“…Researchers have particularly emphasized that natural language processors have limitations as a source of medical information. [42]…”
Section: Discussion (mentioning)
confidence: 99%
“…Much has been made of the practical utility of Chat Generative Pre‐Trained Transformer (ChatGPT) and other large language models (LLMs) in healthcare and urology. Although some papers have assessed their use as patient education tools [1,2], nothing has yet been published on their practical use within the cancer multidisciplinary team (MDT) meeting. We sought to test ChatGPT's treatment recommendations for patients with prostate cancer against those made in a real‐world MDT meeting.…”
Section: Vignette (Unedited From Real-World MDT Meeting) ChatGPT Resp... (mentioning)
confidence: 99%
“…Davis et al (page 688) from Los Angeles, California, aimed to assess the suitability and readability of natural language processor–generated responses to urology-related inquiries. 2 Common patient questions were used as inputs in ChatGPT, covering oncologic, benign, and emergency categories, and assessed for accuracy, comprehensiveness, and clarity. The study found that 77.8% of the responses were deemed appropriate, with clarity receiving the highest scores.…”
Section: ChatGPT as a Source of Urological Patient Information (mentioning)
confidence: 99%