2021
DOI: 10.1609/aaai.v35i13.17358
Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach

Abstract: Interpretable machine learning has gained much attention recently. When explaining a black-box decision system, an explanation must be both brief and comprehensive: it should convey a large amount of information concisely. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, which leads to redundant explanations. We propose the variational information bottleneck for interpretation (VIBI), a system-agnostic interpretable method that provides a brief but…
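As background for the truncated abstract: VIBI builds on the information bottleneck principle, under which an explanation Z extracted from input X should be maximally informative about the black-box output Y (comprehensive) while retaining as little of X as possible (brief). A minimal sketch of that trade-off, in generic notation not quoted from the paper, is

\max_{p(z \mid x)} \; I(Z; Y) - \beta \, I(X; Z)

where I(·;·) denotes mutual information and β > 0 sets how strongly briefness is traded against comprehensiveness; in practice such objectives are optimized through variational bounds, hence "variational information bottleneck".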

Cited by 44 publications (22 citation statements) · References 26 publications

Citation statements (ordered by relevance):
“…From this figure, we can observe that: (1) For the collected micro-video dataset, users are still willing to watch videos even after receiving disliked ones. This may be because negatively received recommendations are low-cost for users: they can simply skip a disliked video, so it has no significant impact on the videos they later prefer; (2) For the Amazon e-commerce dataset, we find that when the source feedback is negative, the probability of the target feedback also being negative rises sharply. This may be because a disliked purchase is high-cost in e-commerce: it wastes the user's money, which sharply increases their dissatisfaction.…”
Section: Visualization for Attention Weights of Heads (RQ3)
Mentioning, confidence: 99%
“…For example, L2X [3] exploits the Gumbel-softmax [13] for instance-wise feature selection through its hard attention design [39]. VIBI [2] further proposes a feature-score constraint under a global prior to simplify and purify the learned explainable representation. As self-attention has become popular [6,28], there is also work that examines what attention heads learn and concludes that some redundant heads can be pruned [30].…”
Section: Related Work
Mentioning, confidence: 99%
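The statement above mentions the Gumbel-softmax trick that L2X uses for instance-wise feature selection. A minimal PyTorch sketch of that trick follows; it is a generic illustration under assumed function names, not code from any of the cited papers.

import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=0.5):
    # Perturb explainer scores with Gumbel(0, 1) noise, then apply a
    # temperature-scaled softmax; as tau -> 0 this approaches a hard
    # argmax over feature positions while remaining differentiable.
    u = torch.rand_like(logits)
    g = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    return F.softmax((logits + g) / tau, dim=-1)

def select_k_features(logits, k=5, tau=0.5):
    # Draw k independent relaxed samples and take the elementwise max,
    # giving a soft mask that selects roughly k features per instance.
    samples = torch.stack([gumbel_softmax_sample(logits, tau) for _ in range(k)])
    return samples.max(dim=0).values

PyTorch also ships torch.nn.functional.gumbel_softmax, which implements the single-sample relaxation directly.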
“…The most suitable baselines for benchmarking its fidelity are post-hoc methods that approximate the classifier over the input space with a single surrogate model. We select two state-of-the-art systems, FLINT [23] and VIBI [58]. A variant of our own proposed method, L2I w/ Θ_MAX, is also evaluated.…”
Section: Evaluating Interpretations
Mentioning, confidence: 99%
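Fidelity in the statement above measures how well a surrogate reproduces the black-box's decisions. Each cited paper defines its own metric; as a generic sketch (the function name is hypothetical), fidelity is often reported as the label-agreement rate between the two models:

import numpy as np

def fidelity(black_box_labels, surrogate_labels):
    # Fraction of inputs on which the surrogate predicts the same
    # label as the black-box classifier it is meant to explain.
    black_box_labels = np.asarray(black_box_labels)
    surrogate_labels = np.asarray(surrogate_labels)
    return float((black_box_labels == surrogate_labels).mean())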
“…While most of the recent efforts in interpretability have focused on developing new methods for computing local explanations (e.g., using game-theoretic results [129], information-theoretic principles [130], or a case-based reasoning approach [131]), one of the biggest challenges is evaluating both the faithfulness of the extracted knowledge and its usefulness in helping humans understand the black-box model's decision process [132].…”
Section: Current Challenges
Mentioning, confidence: 99%