2023
DOI: 10.1007/s00481-023-00761-x
Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach?

Abstract: Definition of the problem — The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI…

Cited by 12 publications (2 citation statements)
References 77 publications
“…If AI were tailored within the boundaries of clarified roles, this would allow the systems to be reliably seen as tools that help to enhance professionals’ abilities, rather than as a rival trying to render the user redundant. The aforementioned human oversight goes hand in hand with the prerequisite of transparency, which not only refers to the often mentioned explicability of the algorithms per se, but also the transparency that AI is being used at all, which is important to disclose to patients to maintain a trustful relationship [ 38 ].…”
Section: Discussion
confidence: 99%
“…Professional bodies, such as radiological associations, justify the explicability mostly as a vehicle for the principle of non-maleficence because there is a need to reduce harms inflicted by performance errors of medical AI [ 43 ]. General solutions for explainable AI in medicine are inherently interpretable models, feature visualization, prototypes, counterfactuals or feature attribution [ 38 ].…”
Section: Discussion
confidence: 99%