Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence 2020
DOI: 10.24963/ijcai.2020/742

Generating Natural Counterfactual Visual Explanations

Abstract: Counterfactual explanations help users to understand the behaviors of machine learning models by changing the inputs for the existing outputs. For an image classification task, an example counterfactual visual explanation explains: "for an example that belongs to class A, what changes do we need to make to the input so that the output is more inclined to class B." Our research considers changing the attribute description text of class A on the basis of the attributes of class B and generating counterfa…
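To make the counterfactual question in the abstract concrete, the sketch below searches for a minimal edit to an input image that pushes a classifier toward a target class B while staying close to the original example. This is a generic gradient-based illustration only, not the paper's method (which edits attribute description text and synthesizes images with a GAN); `model`, `x`, and the loss weights are assumptions.

```python
import torch
import torch.nn.functional as F

def counterfactual_perturbation(model, x, target_class, steps=100, lr=0.05, lam=0.1):
    # Start the search from a copy of the original image (shape: 1 x C x H x W).
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class], device=x.device)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x_cf)
        # Push the prediction toward class B while penalizing large edits,
        # so the counterfactual stays close to the original example.
        loss = F.cross_entropy(logits, target) + lam * torch.norm(x_cf - x)
        loss.backward()
        optimizer.step()
    return x_cf.detach()
```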

Cited by 11 publications (4 citation statements) · References 2 publications

Citation statements (ordered by relevance):
“…More commonly, humans are engaged to evaluate the effectiveness of methods in generating explanations and their usefulness in real scenarios [72][73][74][75]. Zhao et al [73] employed Generative Adversarial Networks (GANs) to generate counterfactual visual explanations. Crowd workers were recruited to evaluate their effectiveness for classification.…”
Section: Evaluation Of Explainability Methods By Means Of Human Knowl...
Citation type: mentioning · Confidence: 99%
“…However, as suggested in [39], the processing time overhead is not significant for all counterfactual explanation applications. In fact, the rapid development of one or more counterfactual explanations is relevant for only certain applications that require an immediate response such as machine teaching, where explanation algorithms need to perform in real-time, and in low-complexity platforms like mobile devices [44].…”
Section: Comparison Of Processing Time
Citation type: mentioning · Confidence: 99%
“…Singla & Pollack [87] sample instances that vary the prediction probability to navigate through the manifold of the counterfactuals. Zhao [89] proposes using a Star-GAN [146] to generate robust counterfactuals faster. However, it is to be noted that the generative models employed to learn the underlying distribution are, again, black boxes whose working is unknown.…”
Section: Counterfactual Explanations
Citation type: mentioning · Confidence: 99%
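The last statement contrasts per-sample optimization with generator-based approaches that produce counterfactuals in a single forward pass. The sketch below illustrates that pattern under assumed interfaces: `generator` is a hypothetical pretrained conditional (StarGAN-style) network and `classifier` is the model being explained; it is not the cited implementation.

```python
import torch

def generator_based_counterfactual(generator, classifier, x, target_class):
    # Conditional image-to-image generators take the image plus a target-domain
    # label and translate it in a single forward pass (no per-sample optimization).
    target = torch.full((x.size(0),), target_class, dtype=torch.long, device=x.device)
    x_cf = generator(x, target)
    # Verify that the classifier's decision actually flipped to the target class.
    flipped = (classifier(x_cf).argmax(dim=1) == target).float().mean().item()
    return x_cf, flipped
```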