2021
DOI: 10.1016/j.artint.2020.103404
Evaluating XAI: A comparison of rule-based and example-based explanations

Cited by 226 publications (148 citation statements) · References 24 publications
“…The research field of explainable AI (XAI) tackles the black box problem by introducing transparent models as well as techniques for generating different types of explanations for black box models (Adadi & Berrada, 2018; Arrieta et al., 2020). Consequently, modern AI-based decision support systems (DSSs) can provide powerful decision support while also explaining the outcome via user interfaces (UIs) (Lamy et al., 2019; van der Waa et al., 2021).…”
Section: Introduction
confidence: 99%
See 1 more Smart Citation
“…The research field of explainable AI (XAI) tackles the black box problem by introducing transparent models as well as techniques for generating different types of explanations for black box models (Adadi & Berrada, 2018;Arrieta et al, 2020;. Consequently, modern AIbased decision support systems (DSSs) can provide powerful decision support while also explaining the outcome via user interfaces (UIs) (Lamy et al, 2019;van der Waa et al, 2021).…”
Section: Introductionmentioning
confidence: 99%
“…Furthermore, XAI can help monitor and ensure the fairness and transparency of AI-based systems, improve the management of such systems, or support the maintenance of faulty systems (Kim et al., 2020; Tschandl et al., 2020). Despite active research in this context, there is a lack of user evaluation studies in the XAI field regarding the perception and effects of explanations on the targeted stakeholders (van der Waa et al., 2021). Moreover, different explanation goals and information needs, as well as varying backgrounds and/or expertise, can influence users' perceptions of XAI-based explanations, which further underlines the relevance of evaluations with targeted users (Barda et al., 2020; van der Waa et al., 2021).…”
Section: Introduction
confidence: 99%
“…One of our research goals is to highlight the models' explainability in smart manufacturing processes, aligning XAI technologies with human interaction. We also aim to collect feedback on the quality of such explanations, since there are few validated measurements for user evaluations of explanation quality [64].…”
Section: Explainable Artificial Intelligence
confidence: 99%