2022 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip46576.2022.9897629
Explainable AI (XAI) In Biomedical Signal and Image Processing: Promises and Challenges

Abstract: Artificial intelligence has become pervasive across disciplines and fields, and biomedical image and signal processing is no exception. The growing and widespread interest in the topic has triggered vast research activity, reflected in an exponentially growing body of work. Through the study of massive and diverse biomedical data, machine and deep learning models have revolutionized tasks such as modeling, segmentation, registration, classification, and synthesis, outperforming traditional techniques. How…

Cited by 7 publications (4 citation statements). References 23 publications.

Citation statements:
“…Be more specific, the XAI applied to biomedical signal and image processing has been further discussed in ref. [43], which also provides inspiration for a more generalized direction of our research afterwards.…”
Section: Discussion (mentioning)
Confidence: 97%
“…When broadening our view to the biomedical or healthcare domain in general, there are some reviews (e.g. [16, 21, 27]) that capture current use cases of XAI for biomedical and healthcare data, as well as the ethical and legal debate surrounding the topic. However, research that synthesizes the scattered literature on XAI for omics data is scarce.…”
Section: Objectives (mentioning)
Confidence: 99%
“…This is a strong limitation when the interpretability of the model is a requirement, for example when experts are interested in exploring the feature space for determining feature importance [14]. To alleviate this shortcoming, much research on Explainable Artificial Intelligence (XAI) [15,16] has been conducted in recent years, aiming to add the ability to get human-understandable explanations of the model's reasoning; that is, making black-box models more transparent [17,18]. Amongst the most popular explainable ML approaches, we find Local Interpretable Model-agnostic Explanations (LIME) [19] and SHapley Additive exPlanations (SHAP) [20].…”
Section: Introduction (mentioning)
Confidence: 99%
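The excerpt above names LIME and SHAP as the most widely used post-hoc explanation methods. As a minimal sketch of the idea (not taken from the cited paper; it assumes the shap and scikit-learn packages, with a synthetic regression task standing in for a biomedical prediction problem), the following shows how SHAP attributes a trained model's predictions to individual input features:

import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic data; in practice X would hold biomedical features or signal
# descriptors. The target depends only on features 0 and 1 by construction.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X[:, 0] + 0.5 * X[:, 1]

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions efficiently for tree
# ensembles; for a regressor, shap_values has shape (n_samples, n_features).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Mean absolute attribution per feature: features 0 and 1 should dominate,
# which is the kind of human-understandable evidence the excerpt refers to.
print(np.abs(shap_values).mean(axis=0))

LIME, by contrast, fits a simple local surrogate model around each individual prediction rather than computing Shapley values, but serves the same purpose of making a black-box model's reasoning inspectable.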