2022
DOI: 10.48550/arxiv.2207.14160
Preprint

Do We Need Another Explainable AI Method? Toward Unifying Post-hoc XAI Evaluation Methods into an Interactive and Multi-dimensional Benchmark

Abstract: In recent years, Explainable AI (xAI) attracted a lot of attention as various countries turned explanations into a legal right. xAI allows for improving models beyond the accuracy metric by, e.g., debugging the learned pattern and demystifying the AI's behavior. The widespread use of xAI brought new challenges. On the one hand, the number of published xAI algorithms underwent a boom, and it became difficult for practitioners to select the right tool. On the other hand, some experiments did highlight how easy d…

Cited by 5 publications (5 citation statements)
References 15 publications
“…For such comparisons, XAI Toolsheets adopt 22 dimensions falling under three major categories, namely metadata, utility, and usability. Aside from documentation, Belaid et al [192] introduce Compare-xAI, a unified benchmark, with multiple use-cases, indexing 16+ post-hoc xAI algorithms, 22+ tests, and 40+ research papers. Through Compare-xAI, practitioners and data scientists can gain insights on which XAI methods are relevant to their problems.…”
Section: XAI Methods Selection
confidence: 99%
“…XAI often concerns purely computational approaches but explainability can range from writing algorithms for XAI to publishing and describing a model in an algorithmic registry to stress-testing a model with different data as seen in AI auditing (Costanza-Chock et al, 2022;Gilpin et al, 2018). AI itself can be viewed as a kind of explanation; dimension reduction algorithms (e.g., linear discriminant analysis) can suggest the importance of certain input variables since they can collapse the number of variables while preserving their "essence" (Belaid et al, 2022). This extension of explanation blurs the boundary between the explainability and interpretability within XAI (Gunning et al, 2019).…”
Section: Geographic Applications of XAI Methods: State-of-...
confidence: 99%
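The citation above notes that dimension-reduction algorithms such as linear discriminant analysis can themselves act as explanations, since the learned projection suggests which input variables carry the data's "essence". A minimal sketch of that idea, assuming scikit-learn is available (the dataset and variable names are illustrative, not from the cited work):

```python
# Sketch: LDA collapses input variables while its projection weights
# hint at per-variable importance -- an implicit form of explanation.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# Collapse the 4 input variables into 2 discriminant axes.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
X_reduced = lda.transform(X)
print(X.shape, "->", X_reduced.shape)

# The scalings (projection weights) of the first axis suggest which
# original variables dominate the reduced representation.
for name, w in zip(load_iris().feature_names, lda.scalings_[:, 0]):
    print(f"{name}: {w:+.2f}")
```

Variables with large-magnitude weights contribute most to the preserved structure, which is the sense in which the reduction "explains" the data.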
“…Beyond these challenges, we must ask what constitutes explainability as it spans transparency demands of plain text as a form of explanation to computational remedies for training data, hyperparameter tuning and layer restructuring. We considered how XAI could bypass an explanation of DNN outcomes to directly provide insights about the data (Belaid et al, 2022). We discussed the need for semantic understanding, not only to introduce greater geographic semantics and ontologies into the underlying DNN but also to account for a lack of definitional overlap between the fields of geography and AI.…”
Section: Conclusion
confidence: 99%
“…Another tool for evaluating the XAI explainers is CompareXAI [55] which uses metrics including comprehensibility, portability, and average execution time. Further contribution includes the Local Explanation Evaluation Framework (LEAF) [56] which can evaluate the explanation produced by SHAP and LIME with respect to stability, local concordance, fidelity, and prescriptivity.…”
Section: Related Work
confidence: 99%
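The citation above mentions stability as one criterion (used by LEAF) for evaluating explainers such as SHAP and LIME. A toy illustration of a stability-style check, not LEAF's actual implementation: re-run the explanation on a slightly perturbed input and compare the feature-importance rankings. The linear "explainer" and its weights below are stand-ins invented for the sketch.

```python
# Toy stability check for an explainer: small input perturbations
# should leave the feature-importance ranking essentially unchanged.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def toy_explainer(x):
    # Stand-in for SHAP/LIME: importance = per-feature contribution
    # under a fixed linear model (weights are illustrative only).
    weights = np.array([3.0, -2.0, 0.5, 0.1])
    return weights * x

x = rng.normal(size=4)
imp_a = toy_explainer(x)
imp_b = toy_explainer(x + rng.normal(scale=1e-3, size=4))  # tiny perturbation

# Rank correlation near 1.0 indicates a stable explanation.
stability, _ = spearmanr(np.abs(imp_a), np.abs(imp_b))
print(f"stability (Spearman rho): {stability:.3f}")
```

Real benchmarks aggregate such scores over many inputs and perturbation scales; a single-pair check like this only sketches the mechanism.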