2022
DOI: 10.1177/20552076221074488

Re-focusing explainability in medicine

Abstract: Using artificial intelligence to improve patient care is a cutting-edge methodology, but its implementation in clinical routine has been limited due to significant concerns about understanding its behavior. One major barrier is the explainability dilemma and how much explanation is required to use artificial intelligence safely in healthcare. A key issue is the lack of consensus on the definition of explainability by experts, regulators, and healthcare professionals, resulting in a wide variety of terminology …

Cited by 45 publications (19 citation statements)
References 44 publications
“…Therefore, there is an ethical obligation to inform oneself about vulnerable groups and conditions that may lead to inaccurate results in terms of accuracy, validity, uncertainty, and applicability as minimally acceptable criteria for interpretability (Arbelaez Ossa et al. 2022). Furthermore, once risks have been identified, there is the obligation to inform affected patients about individual or group risks.…”
Section: Interpretability (mentioning, confidence: 99%)
“…One may consider stratified risk levels (unacceptable, high, and low or minimal risk) as the EU's AI regulatory framework does (European Commission 2021), but the risks have to be specific. In the literature, there is the proposal to concentrate on justifiability and contestability in high-stakes situations (Henin and Le Métayer 2021) or to provide minimally acceptable criteria for explainability (Arbelaez Ossa et al. 2022).…”
Section: Applying the Levels of Explicability (mentioning, confidence: 99%)
“…One concern frequently brought to light by clinicians is the fear of errors caused by blindly trusting suggested guidelines generated by algorithms they do not understand (“black box”).2,26,48 However, systems’ explainability probably need not be extensive but acceptable enough for clinicians and patients to apprehend ML‐CDSS’ implications and be incorporated safely into routine practice.49 On the one hand, concerns of the users would greatly limit the development and use of ML, but on the other hand, full confidence could be dangerous too.…”
Section: Challenges Ahead (mentioning, confidence: 99%)
“…Therefore, each decision must be logically reasoned with explainable evidence [49]. AI models might be insightful for scientists, but they should also be sufficiently clear and explainable for end users to support their decisions [52]. Otherwise, it could constitute a threat to the patient’s autonomy.…”
Section: Interpretability and Validity of Algorithms (mentioning, confidence: 99%)