2023
DOI: 10.2196/48009
Ethical Considerations of Using ChatGPT in Health Care

Abstract: ChatGPT has promising applications in health care, but potential ethical issues need to be addressed proactively to prevent harm. ChatGPT presents potential ethical challenges from legal, humanistic, algorithmic, and informational perspectives. Legal ethics concerns arise from the unclear allocation of responsibility when patient harm occurs and from potential breaches of patient privacy due to data collection. Clear rules and legal boundaries are needed to properly allocate liability and protect users. Humani…


Cited by 195 publications (78 citation statements)
References 81 publications
“…The use of ChatGPT to produce patient education handouts confers a set of ethical concerns, related to legal, humanistic, and algorithmic issues. 17 Given the recent advent of AI, it remains not entirely clear how the burden of legal responsibility should be partitioned between physician and OpenAI, especially for cases of patient harm and/or privacy breaches. 17 Ultimately, clearer rules on the use of AI and related responsibilities in health care are yet to be instated.…”
Section: Results
Citation type: mentioning (confidence: 99%)
“…Given the recent advent of AI, it remains not entirely clear how the burden of legal responsibility should be partitioned between physician and OpenAI, especially for cases of patient harm and/or privacy breaches. 17 Ultimately, clearer rules on the use of AI and related responsibilities in health care are yet to be instated. 17 One safe approach involves carefully reviewing the entirety of the initial draft, with an understanding of assuming full responsibility for any harms, necessarily in exchange for the efficiency of the model.…”
Section: Results
Citation type: mentioning (confidence: 99%)
“…There is also dialogue on the use of AI in publishing, covering topics such as editorial decision-making (COPE Council, 2021). In comparison, in the broader literature, there is a burgeoning of publications on AI related to authorship, ethics, plagiarism, bias, accuracy, performance, and scientific integrity (see, for example, Ashraf & Ashfaq, 2023; Baker et al., 2023; Cohen et al., 2023; Conroy et al., 2023; Dergaa et al., 2023; Dien, 2023; Doshi et al., 2023; Doyal et al., 2023; Farina & Lavazza, 2023; Garcia, 2023; Jeyaraman et al., 2023; Leung et al., 2023; Meyer et al., 2023; Ruffle et al., 2023; Takeda, 2023; Wang et al., 2023). However, there is a paucity of academic research on ChatGPT and qualitative research (Tabone & de Winter, 2023).…”
Section: What Does ChatGPT Mean for Qualitative Health Research?
Citation type: mentioning (confidence: 99%)