2023
DOI: 10.1001/jamahealthforum.2023.1938

ChatGPT and Physicians’ Malpractice Risk

Abstract: This JAMA Forum discusses the possibilities, limitations, and risks of physician use of large language models (such as ChatGPT), along with the improvements needed to increase the accuracy of the technology.

Cited by 52 publications (29 citation statements: 0 supporting, 24 mentioning, 0 contrasting)
References: 7 publications
“…We speculate that ChatGPT may be more useful as a supplementary tool for SDM under physician supervision than alone because ChatGPT tends to produce factually incorrect outputs, called hallucinations. 24 Future research must be directed at this issue. Fourth, the extent of SDM may have been overestimated because of desirability bias, as some patients completed the questionnaire in the waiting room.…”
Section: Discussion (mentioning)
confidence: 99%
“…Such a network could fill a critical gap in an ecosystem dominated by well-meaning but often overexuberant and inexperienced developers who lack the depth of understanding of health care delivery. Given that health AI more broadly, including genAI, is subject to existing liability regulation for health care systems and physicians, it is imperative that mechanisms are developed that use nationwide standards and best practices for testing and evaluation to ensure that the AI models developed for use in health care are trustworthy.…”
Section: Shared Resource for Development and Validation (mentioning)
confidence: 99%
“…Only after these evaluations are completed should statements be allowed such as an LLM was used for a defined task in this specific workflow, it measured a metric, and observed an improvement (or deterioration) in a prespecified outcome. Such evaluations also are necessary to clarify the medicolegal risks that might occur with the use of LLMs to guide medical care, and to identify mitigation strategies for the models’ tendency to generate factually incorrect outputs that are probabilistically plausible (called hallucinations).…”
Section: Are the Purported Value Propositions of Using LLMs in Medici... (mentioning)
confidence: 99%