2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw54120.2021.00462
SVEA: A Small-scale Benchmark for Validating the Usability of Post-hoc Explainable AI Solutions in Image and Signal Recognition

Cited by 7 publications (4 citation statements)
References 14 publications
“…There are specialized benchmarks in the literature, like the SVEA benchmark [29]. The latter focuses on computer vision tasks and proposes faster evaluations based on the small MNIST-1D dataset [30].…”
Section: Benchmark for XAI Algorithms
Confidence: 99%
“…For instance, Sokol and Flach suggest a taxonomy defining criteria an explanatory method has to satisfy to be considered usable, summarized in an "Explainability Fact Sheet". This theoretical groundwork sparked generation of several practical validation frameworks, focusing on function-level validation of explanation approaches [3, 8, 50, 51, 59]. For instance, these frameworks evaluate explanations in terms of their accuracy and fidelity [3, 51, 59, 68], or robustness [8].…”
Section: Introduction
Confidence: 99%
“…Alongside novel explainability approaches, authors have proposed evaluation criteria and guidelines to systematically assess XAI approaches in terms of their usability (Doshi-Velez and Kim, 2017; Arrieta et al., 2020; Davis et al., 2020; Sokol and Flach, 2020a). This theoretical groundwork sparked several practical validation frameworks, commonly evaluating explanations in terms of accuracy and fidelity (White and d'Avila Garcez, 2020; Pawelczyk et al., 2021; Sattarzadeh et al., 2021; Arras et al., 2022), or robustness (Artelt et al., 2021). However, while XAI taxonomies repeatedly emphasize the need for human-level validation of explanation approaches (Doshi-Velez and Kim, 2017; Sokol and Flach, 2020a), user evaluations of XAI approaches often face limitations concerning statistical power and reproducibility (Keane et al., 2021).…”
Confidence: 99%