2022
DOI: 10.48550/arxiv.2202.06861
Preprint

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations

Abstract: The evaluation of explanation methods is a research topic that has not yet been explored deeply. However, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness. Until now, no tool has existed that exhaustively and speedily allows researchers to quantitatively evaluate explanations of neural network predictions. To increase transparency and reproducibility in the field, we therefore…
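The abstract describes Quantus as a toolkit for quantitatively evaluating explanations of neural network predictions. As a rough, library-agnostic illustration of what such an evaluation metric computes (this is not the Quantus API), the sketch below implements a simple pixel-flipping-style faithfulness check; `model_predict` is a hypothetical callable standing in for any trained classifier.

```python
import numpy as np

def pixel_flipping_curve(model_predict, x, attribution, target_class, steps=10):
    """Faithfulness check in the spirit of pixel flipping: remove the most
    relevant features first and record how the class score degrades.

    model_predict : callable mapping a batch of inputs to class probabilities
                    (hypothetical placeholder, not part of any library API)
    x             : a single input, e.g. an image of shape (H, W)
    attribution   : relevance map with the same shape as x
    """
    x_flat = x.flatten()
    order = np.argsort(attribution.flatten())[::-1]   # most relevant features first
    chunk = max(1, len(order) // steps)
    scores = []
    for i in range(steps + 1):
        x_pert = x_flat.copy()
        x_pert[order[: i * chunk]] = 0.0               # "flip" the top i*chunk features
        prob = model_predict(x_pert.reshape(1, *x.shape))[0, target_class]
        scores.append(float(prob))
    # A faithful explanation yields a quickly decaying curve; the area under the
    # curve can serve as a scalar evaluation score (lower = more faithful here).
    return scores
```

Toolkits such as Quantus automate this kind of check across many metrics and explanation methods; the sketch only shows the underlying idea of one faithfulness-type metric.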

Cited by 9 publications (6 citation statements); references 15 publications.
“…With respect to our literature analysis, we recognize a broad variety of different categorizations of evaluation levels and quality aspects [11, 39, 85, 159, 163-171]. However, to provide a workable overview of the relevant evaluation options, we follow a common way of classifying evaluation levels according to whether user involvement is required (human-based) or not (computational) [171].…”
Section: Evaluation Phase (mentioning)
confidence: 99%
“…An important characteristic of an explanation is its robustness toward small changes in the input data. A good explanation is expected to be stable even when the input is slightly perturbed [159] because a user would expect similar input data to result in similar model behavior that can be explained in the same way. There are several implementations of such a concept, primarily for feature importance methods.…”
Section: Computational Evaluation (mentioning)
confidence: 99%
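The robustness property quoted above can be made concrete as a max-sensitivity-style score: perturb the input slightly, recompute the explanation, and record the largest resulting change. The sketch below is a minimal illustration of that idea, assuming a hypothetical `explain(model, x)` callable that returns an attribution map; it is not taken from any particular toolkit.

```python
import numpy as np

def max_sensitivity(explain, model, x, n_samples=10, radius=0.1, seed=0):
    """Robustness of an explanation to small input perturbations.

    explain : callable (model, x) -> attribution map (hypothetical placeholder)
    x       : a single input as a NumPy array
    Returns the largest relative change of the attribution over n_samples
    uniform perturbations of magnitude <= radius (smaller = more robust).
    """
    rng = np.random.default_rng(seed)
    a_ref = explain(model, x)
    ref_norm = np.linalg.norm(a_ref) + 1e-12
    worst = 0.0
    for _ in range(n_samples):
        noise = rng.uniform(-radius, radius, size=x.shape)
        a_pert = explain(model, x + noise)
        worst = max(worst, np.linalg.norm(a_pert - a_ref) / ref_norm)
    return worst
```

Under this score, an explanation method is considered robust when slightly perturbed inputs, which leave the model's behavior essentially unchanged, also leave the attribution essentially unchanged.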
“…Although current XAI techniques for deep learning architectures have created a step change in providing reasons for predicted results, the question of whether the explanations themselves can be trusted has been largely ignored. Some recent studies [41,8,20] have demonstrated the limitations of current XAI techniques. For instance, [41] applied three different XAI techniques on a CNN-based breast cancer classification model and found the techniques disagreed on the input features used for the predicted output and in some cases picked background regions that did not include the breast or the tumour as explanations.…”
Section: Introduction (mentioning)
confidence: 99%
“…Although current XAI techniques for deep learning architectures have created a step change in providing reasons for predicted results, the question of whether the explanations themselves can be trusted has been largely ignored. Some recent studies [22-24] have demonstrated the limitations of current XAI techniques. For instance, [22] applied three different XAI techniques on a CNN-based breast cancer classification model and found the techniques disagreed on the input features used for the predicted output and in some cases picked background regions that did not include the breast or the tumour as explanations.…”
Section: Introduction (mentioning)
confidence: 99%
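The disagreement between attribution methods described in these statements can itself be quantified, for instance with a rank correlation between two attribution maps. The sketch below uses SciPy's Spearman correlation; the two maps are hypothetical inputs from any pair of XAI methods.

```python
import numpy as np
from scipy.stats import spearmanr

def attribution_agreement(attr_a, attr_b):
    """Spearman rank correlation between two attribution maps of equal shape.

    Values near 1 mean the two XAI methods rank the input features similarly;
    values near 0 (or negative) indicate the kind of disagreement reported in
    the cited studies.
    """
    rho, _ = spearmanr(attr_a.flatten(), attr_b.flatten())
    return float(rho)

# Example with random maps (placeholder data, not from a real model):
# print(attribution_agreement(np.random.rand(28, 28), np.random.rand(28, 28)))
```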