2022
DOI: 10.1200/jco.22.01113
A Process Framework for Ethically Deploying Artificial Intelligence in Oncology

Abstract: Author affiliations and support information (if applicable) appear at the end of this article.

Cited by 15 publications (12 citation statements). References 33 publications.
“…In particular, the avoidance of humanoid interfaces could help remind patient users that AI is not human, helping to preserve dignity. 20 PF-AI technologies may also risk violating nonmaleficence because of lack of regulatory oversight, risk for error, 21 and lack of transparency in training data sets and algorithms. Patients may mistakenly assume that these technologies have been thoroughly validated, potentially leading to overreliance.…”
Section: Ethical Implications (mentioning)
confidence: 99%
“…In particular, the avoidance of humanoid interfaces could help remind patient users that AI is not human, helping to preserve dignity. 20…”
Section: Ethical Implications (mentioning)
confidence: 99%
“…Concerns have been raised over AI bias, explainability (ie, the ability of an AI model to explain how it reached a result), responsibility for error or misuse, and humans' deference to its results. [3][4][5] As the ethical deployment of AI in cancer care requires solutions that meet the needs of stakeholders, this study sought to examine oncologists' familiarity with AI and perspectives on these issues. As familiarity with a technology changes stakeholder perceptions of it, 6 and because academic research in AI is burgeoning, we hypothesized that responses would vary for oncologists practicing in academic settings compared with those in other practice settings.…”
Section: Introduction (mentioning)
confidence: 99%
“…Artificial intelligence models with applications for oncology have recently been approved by the US Food and Drug Administration (FDA), and the increasing complexity of personalized cancer care makes the field of oncology poised for an AI revolution. Concerns have been raised over AI bias, explainability (ie, the ability of an AI model to explain how it reached a result), responsibility for error or misuse, and humans’ deference to its results . As the ethical deployment of AI in cancer care requires solutions that meet the needs of stakeholders, this study sought to examine oncologists’ familiarity with AI and perspectives on these issues.…”
Section: Introduction (mentioning)
confidence: 99%
“…These include nuanced questions related to balancing goals of cure and symptom palliation, navigating therapeutic misconceptions and conflicts of interest, and the role of hope/hype in emerging technologies such as precision medicine and artificial intelligence in oncology. [1][2][3][4] Despite these ethical challenges, limited empirical work has been performed examining the role and practice of clinical ethics in oncology. The bioethics literature is sparse in this space, with but a handful of descriptive analyses of ethics consultation volume, ethicist training, and consult themes/content.…”
(mentioning)
confidence: 99%