2022
DOI: 10.3390/diagnostics12071557

Designing an Interpretability-Based Model to Explain the Artificial Intelligence Algorithms in Healthcare

Abstract: The lack of interpretability in artificial intelligence models (i.e., deep learning, machine learning, and rules-based) is an obstacle to their widespread adoption in the healthcare domain. The absence of understandability and transparency frequently leads to (i) inadequate accountability and (ii) a consequent reduction in the quality of the predictive results of the models. On the other hand, the existence of interpretability in the predictions of AI models will facilitate the understanding and trust of the c…

Cited by 27 publications (21 citation statements); References: 33 publications
“…An uncertainty measure is needed that is based on the predictions and noise distribution but that also integrates the uncertainty propagation of the DL model prediction 78,79 (like the inverse of the Fisher information matrix used in the CRLB definition 74 ). Despite a flourishing literature 80,81 addressing uncertainty estimation as a complementary tool for DL interpretability, a full-scale analysis of the robustness and reliability of such models is still challenging 82–84 . First attempts to extend these concepts in DL for MRS quantification are only the subject of recent investigations 76,77 and far from general acceptance.…”
Section: Discussion
confidence: 99%
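
For context on the bound invoked above (standard estimation theory, not drawn from the cited papers): the Cramér–Rao lower bound (CRLB) ties the covariance of any unbiased estimator to the inverse Fisher information matrix.

% Cramér–Rao lower bound: the covariance of any unbiased estimator
% \hat{\theta} of parameters \theta is bounded below (in the positive
% semidefinite sense) by the inverse Fisher information matrix I(\theta).
\[
  \operatorname{Cov}(\hat{\theta}) \succeq I(\theta)^{-1},
  \qquad
  I(\theta)_{ij} = \mathbb{E}\!\left[
    \frac{\partial \log p(x;\theta)}{\partial \theta_i}\,
    \frac{\partial \log p(x;\theta)}{\partial \theta_j}
  \right],
\]

so per parameter, \(\operatorname{Var}(\hat{\theta}_i) \ge [I(\theta)^{-1}]_{ii}\). This is the sense in which the inverse Fisher information serves as a reference uncertainty measure for model-based quantification.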
“…Despite a flourishing literature 80,81 addressing uncertainty estimation as a complementary tool for DL interpretability, a full-scale analysis of the robustness and reliability of such models is still challenging 82–84 . First attempts to extend these concepts in DL for MRS quantification are only the subject of recent investigations 76,77 and far from general acceptance.…”
Section: Low SNR Regime
confidence: 99%
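
A minimal sketch of one common way to attach an uncertainty estimate to a DL prediction, Monte-Carlo dropout; the toy model and tensor shapes are illustrative assumptions, not the MRS quantification networks cited above:

import torch
import torch.nn as nn

# Toy regressor (hypothetical; stands in for a DL quantification model).
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))

def mc_dropout_predict(model, x, n_samples=100):
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    # Predictive mean and spread across stochastic forward passes.
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.randn(8, 16)  # toy batch standing in for spectra/features
mean, std = mc_dropout_predict(model, x)
print(mean.squeeze(), std.squeeze())

The spread across passes gives a per-prediction uncertainty that can be reported alongside the point estimate, which is the complementary role the statement above ascribes to uncertainty estimation.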
“…Although a multitude of different AI approaches are being applied to the aforementioned datasets for COVID-19 (e.g., Random Forest, Logistic regression) [9] , several caveats still linger, predominantly concerning the lack of interpretability and explainability (the “black box” challenge) [33] . Acknowledging these hurdles, in the present work we present a benchmarking pipeline for various ML classifiers based on COVID-19 plasma proteomics (3 datasets based on Olink PEA technology encompassing detailed clinical covariates) engaging an “interpretable” AI approach [34] .…”
Section: Discussion
confidence: 99%
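
A minimal sketch of such a benchmarking loop, assuming scikit-learn and synthetic data in place of the Olink PEA proteomics datasets; the feature counts and split are illustrative, not the cited study's configuration:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for plasma proteomics: rows = patients, cols = proteins.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)):
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    # Model-agnostic interpretability: permutation importance flags the
    # features (proteins) whose shuffling most degrades performance.
    imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
    top = np.argsort(imp.importances_mean)[::-1][:3]
    print(f"{type(clf).__name__}: AUC={auc:.3f}, top feature indices={top.tolist()}")

Permutation importance is one simple choice of "interpretable" layer; the same loop accommodates other attribution methods without changing the benchmarking structure.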
“…The ability to interpret AI model decisions is crucial for gaining the trust of healthcare professionals and patients. Developing AI models that can explain their decisions by highlighting relevant features and biomarkers is therefore essential for their adoption in clinical settings [42] .…”
Section: Interpretability of AI Models
confidence: 99%