2021
DOI: 10.1016/s2589-7500(21)00208-9
The false hope of current approaches to explainable artificial intelligence in health care

Abstract: The black-box nature of current artificial intelligence (AI) has caused some to question whether AI must be explainable to be used in high-stakes scenarios such as medicine. It has been argued that explainable AI will engender trust with the health-care workforce, provide transparency into the AI decision-making process, and potentially mitigate various kinds of bias. In this Viewpoint, we argue that this argument represents a false hope for explainable AI and that current explainability methods are unlikely to achieve these goals for patient-level decision support.

Cited by 649 publications (384 citation statements)
References 30 publications
“…Taking into account the computational limitations of the device, using algorithms with a short training time/low complexity could be of great benefit in dehydration monitoring from data. In terms of interpretability and explainable AI, tree-based models have also proven to achieve the best interpretability characteristics by design [31]. Interpretability is especially required by healthcare applications to provide an informed and clear justification for the model decisions.…”
Section: Discussion
confidence: 99%
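The quoted passage leans on the claim that tree-based models are interpretable by design. Below is a minimal sketch of what that means in practice: a shallow decision tree whose complete rule set can be printed and audited directly. It uses scikit-learn on synthetic data, and the wearable-style feature names are hypothetical, not taken from the cited study.

```python
# Minimal sketch of "interpretable by design": a shallow decision tree
# whose full rule set can be printed and read end to end.
# Feature names are hypothetical wearable-style signals, not the study's.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["skin_temp_c", "heart_rate_bpm", "skin_conductance_us"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # toy "dehydrated" label

# Capping the depth keeps the rule set small enough to audit directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The fitted model *is* its explanation: a list of if/else thresholds.
print(export_text(tree, feature_names=feature_names))
```

Because every prediction is a path through the printed thresholds, no post-hoc approximation is needed to explain it, which is the contrast the quote draws with black-box models.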
“…Interpretability is especially required by healthcare applications to provide an informed and clear justification for the model decisions. For other types of black-box models that are not interpretable, explainable AI techniques, such as features relevance and visualizations, are utilized to promote confidence in the model, fairness, and informativeness [31]. Using SHAP [32] to visualize the Shapley values for how the features impact the output of the model is shown in Figure 9.…”
Section: Discussion
confidence: 99%
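The passage above points to SHAP [32] for visualizing how features impact a model's output. Below is a minimal sketch of that workflow using the shap package with a gradient-boosted tree; the model, data, and resulting plot are synthetic stand-ins, not the citing paper's actual "Figure 9".

```python
# Minimal sketch of the SHAP workflow described in the quote: compute
# Shapley values for a fitted tree model, then plot per-feature impact.
# Synthetic data; not the cited paper's actual model or features.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 4)),
                 columns=["f0", "f1", "f2", "f3"])
y = (X["f0"] + X["f1"] ** 2 > 0.5).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles;
# for a binary GBM these are per-feature contributions in log-odds.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Beeswarm summary: one dot per sample per feature, ranked by mean
# |SHAP value| -- the kind of figure the quote refers to.
shap.summary_plot(shap_values, X)
```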
“…On the other hand, the current technology does not allow for this type of information to be actually available or to realistically expect this in the next few years. Some domain experts have already proposed that our attention should not be focused on "understanding" DL models, but rather on requiring strong validation alone [97]. In any case, a consensus should be reached on the actual requirements of radiomics and ML software prior to their approval for clinical use.…”
Section: Discussion
confidence: 99%
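The alternative this quote describes, strong validation in place of "understanding" [97], can be made concrete. Below is a minimal sketch under stated assumptions: a locked model is scored on an external dataset and its discrimination is reported with a bootstrap confidence interval. The data, the simulated covariate shift, and all variable names are hypothetical.

```python
# Minimal sketch of validation-over-explanation: score a locked model
# on an external dataset and bootstrap a confidence interval for AUROC.
# Model and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# "Internal" development data and a separate "external" site.
X_dev = rng.normal(size=(800, 5))
y_dev = (X_dev[:, 0] - X_dev[:, 3] > 0).astype(int)
X_ext = rng.normal(loc=0.2, size=(400, 5))  # mild shift at the new site
y_ext = (X_ext[:, 0] - X_ext[:, 3] > 0).astype(int)

model = LogisticRegression().fit(X_dev, y_dev)  # trained, then frozen
scores = model.predict_proba(X_ext)[:, 1]

# Nonparametric bootstrap over external patients.
aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_ext), len(y_ext))
    if y_ext[idx].min() == y_ext[idx].max():  # resample must have both classes
        continue
    aucs.append(roc_auc_score(y_ext[idx], scores[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"external AUROC = {roc_auc_score(y_ext, scores):.3f} "
      f"(95% CI {lo:.3f} to {hi:.3f})")
```

The point of the sketch is that a narrow external confidence interval speaks to trustworthiness without requiring any inspection of the model's internals.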
“…Limited user adoption, due to lack of clinician trust and model interpretability among many other reasons, has long been cited as a key barrier to clinical impact [2, 3]. Encouraging providers to thoughtfully incorporate a model’s prediction into their decision and ultimate behavior regarding patient care, particularly in scenarios where predictions by the model and the human diverge, is a challenge with no clear solution yet.…”
Section: Building Out the Implementation Science of AI
confidence: 99%