2021
DOI: 10.1007/978-3-030-68796-0_3
Expert Level Evaluations for Explainable AI (XAI) Methods in the Medical Domain

Abstract: The recently emerged field of explainable artificial intelligence (XAI) attempts to shed light on 'black box' Machine Learning (ML) models in terms understandable to humans. As several explanation methods are developed alongside different applications of a black box model, the need for expert-level evaluation to inspect their effectiveness becomes inevitable. This is particularly important for sensitive domains such as medical applications, where evaluation by experts is essential to better understand how…

Cited by 25 publications (17 citation statements) | References 22 publications
“…Many of the sources do not make clear the extent to which GPs were involved in the research. In recent explainable artificial intelligence (XAI) efforts to solve various problems in the medical domain, for example, there is a growing need for doctors to be more involved in the development and evaluation of AI diagnostic support tools and systems [ 28 ]. It is not clear, however, whether this same need exists for administrative tasks in general practice, but the current level of GP involvement in administrative tasks appears low.…”
Section: Discussion
confidence: 99%
“…Samek et al (2021) is one such example, classifying XAI algorithms by the various mathematical approaches of explanation generation: local surrogates, occlusions, gradient‐based techniques, and layer‐wise relevance propagation. XAI taxonomies also are studied for specific research domains, such as medical image analytics (Muddamsetty et al, 2021). Another way to taxonomize XAI is by function and medium (e.g., algorithms, visualization, audio, KGs, and plain text) (e.g., Rawal et al, 2021).…”
Section: Geographic Applications of XAI Methods: State-of-the-art
confidence: 99%
“…Model explainability is essential for gaining trust and acceptance of AI systems in high-stakes areas, such as healthcare, where reliability and safety are critical [43], [44]. Medical anomaly detection [45], healthcare risk prediction system [46], [47], [48], [49], genetics [50], [51], and healthcare image processing [52], [53], [54] are some of the areas that are moving towards adoption of XAI. Another area is finance, such as AI-based credit score decisions [55], [56] and counterfeit banknotes detection [57].…”
Section: Explainable Artificial Intelligence (XAI)
confidence: 99%