2021
DOI: 10.1136/medethics-2021-107440

Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts

Abstract: In this paper, we first classify different types of second opinions and evaluate the ethical and epistemological implications of providing those in a clinical context. Second, we discuss the issue of how artificial intelligence (AI) could replace the human cognitive labour of providing such second opinions, and find that several AI systems reach the levels of accuracy and efficiency needed to make clarifying their use an urgent ethical issue. Third, we outline the normative conditions of how AI may be used as second opinion in c…

Cited by 47 publications (26 citation statements, all classified as mentioning). References 37 publications.
“…The authors of the article, Responsibility, Second Opinions, and Peer-Disagreement—Ethical and Epistemological Challenges of Using AI in Clinical Diagnostic Contexts, provide an informed discussion of how AI-DSS may be used, both practically and ethically, to assist healthcare professionals in cooperative diagnostic processes 2. The authors propose a process whereby AI-DSS would provide a physician a second opinion, and when there is a mismatch between opinions, another physician would provide a third opinion.…”
mentioning
confidence: 99%
“…Central to the authors’ relegation of AI-DSS to a confirmatory role, as opposed to fully substituting as a second opinion, is their contention that an ‘equal views principle’ cannot apply to heterogeneous agents (ie, AI and humans). An AI can contradict a physician but cannot explain itself or enter into a ‘peer disagreement’ because it cannot engage in dialectic or reason-giving the way a physician giving a second opinion can 1. In this way, the physician–AI relationship creates an asymmetry not present in an ordinary physician–physician peer relationship by burdening the physician with the responsibility of interpreting the AI’s outputs.…”
mentioning
confidence: 99%
“…The authors’ view has several advantages as noted, but it seems to rely on an idealised picture of highly conscientious and rigorous physician–physician dialectic being the norm for second opinions, and this premise may be brought into question. The authors themselves acknowledge that the process of requesting a second opinion ‘is most often of informal nature’ and ‘a part of everyday clinical life’ 1. Informality or routinisation do not alone imply a lack of diligence, but without empirical study, it seems difficult to know that the content of second opinion dialogue is uniformly oriented towards reason-giving and evidential argumentation, as opposed to an exchange of clinical gestalt.…”
mentioning
confidence: 99%
“…In their paper ‘Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts’, Kempt and Nagel discuss the use of medical AI systems and the resulting need for second opinions by human physicians when physicians and AI disagree, which they call the rule of disagreement (RoD) 1. The authors defend RoD based on three premises: First, they argue that in cases of disagreement in medical practice, there is an increased burden of proof (better conceived as a burden for justification) for the physician in charge to defend why the opposing view is adopted or overridden.…”
mentioning
confidence: 99%
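To make the decision rule concrete, the following is a minimal sketch of RoD as summarised in the statement above, assuming diagnoses can be compared as simple labels; the names Diagnosis and rod_requires_second_opinion are hypothetical illustrations, not from Kempt and Nagel.

```python
# A minimal, illustrative sketch of the rule of disagreement (RoD):
# an AI diagnosis that contradicts the physician-in-charge's initial
# diagnosis counts as disagreement and triggers a second opinion from
# another physician. All names here are assumptions for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Diagnosis:
    condition: str  # e.g. a diagnostic label or ICD-10 code
    source: str     # "physician" or "ai"

def rod_requires_second_opinion(initial: Diagnosis, ai_output: Diagnosis) -> bool:
    """Return True when the AI output contradicts the initial human diagnosis,
    which under RoD requires a second opinion from another physician."""
    return initial.condition != ai_output.condition

# Example: a contradiction escalates to another physician.
initial = Diagnosis("melanoma", "physician")
ai_output = Diagnosis("benign naevus", "ai")
print(rod_requires_second_opinion(initial, ai_output))  # True
```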
“…burden for justification for cases of agreement and disagreement that we defend has implications for the RoD of human–AI systems as proposed by Kempt and Nagel. RoD suggests that ‘[i]f a diagnosis provided by an autonomous AI diagnostic system contradicts the initial diagnosis of the physician-in-charge, it shall count as disagreement requiring a second opinion of another physician’ 1. We believe that it follows from the symmetry of agreement and disagreement that the requirement of a second opinion must also be applied to cases of agreement.…”
mentioning
confidence: 99%
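For contrast, this sketch extends the previous one to the symmetric variant argued for in the statement above, under which agreement between physician and AI also carries a burden of justification; the function name and flag are hypothetical, not drawn from either paper.

```python
# An illustrative contrast between RoD and the commenters' symmetric variant:
# under symmetry, agreement also calls for a second physician's opinion,
# since both outcomes must be justified rather than merely accepted.
def review_required(physician_dx: str, ai_dx: str, symmetric: bool = False) -> bool:
    """RoD (symmetric=False): only disagreement triggers a second opinion.
    Symmetric variant (symmetric=True): agreement triggers one as well."""
    if physician_dx != ai_dx:
        return True   # disagreement: both positions require review
    return symmetric  # agreement: only the symmetric variant escalates

print(review_required("melanoma", "melanoma"))                  # False under RoD
print(review_required("melanoma", "melanoma", symmetric=True))  # True under symmetry
```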