2018
DOI: 10.48550/arxiv.1808.02610
Preprint

L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data

Abstract: We study instancewise feature importance scoring as a method for model interpretation. Any such method yields, for each predicted instance, a vector of importance scores associated with the feature vector. Methods based on the Shapley score have been proposed as a fair way of computing feature attributions of this kind, but incur an exponential complexity in the number of features. This combinatorial explosion arises from the definition of the Shapley value and prevents these methods from being scalable to large…
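For context, the Shapley value that underlies these attribution methods averages a feature's marginal contribution over all subsets of the remaining features. In a standard formulation (the paper's exact notation may differ), the importance of feature i under a model score function v_x over d features is

\phi_i(v_x) = \frac{1}{d} \sum_{S \subseteq [d] \setminus \{i\}} \binom{d-1}{|S|}^{-1} \left( v_x(S \cup \{i\}) - v_x(S) \right)

The sum ranges over all 2^{d-1} subsets S, which is the combinatorial explosion the abstract refers to; L-Shapley and C-Shapley restrict the sum to local and connected subsets on a graph over the features to avoid it.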

Cited by 31 publications (43 citation statements)
References 7 publications
“…Finally, we carry out the mask-k-pixels experiments (Chen et al., 2018b) to demonstrate the equivariance of SITE as a self-interpretable model. This experiment is implemented by masking k pixels of the input data based on the interpretations provided.…”
Section: Comparison With Post-hoc Methods
confidence: 99%
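The masking step described in the quoted statement is straightforward to sketch. Below is a minimal NumPy version, assuming an attribution map scores with the same shape as the image; the function name and the zero fill value are illustrative choices, not taken from the cited work.

import numpy as np

def mask_top_k_pixels(image, scores, k, fill_value=0.0):
    # Illustrative helper (not from the cited paper): mask the k pixels
    # that an attribution map ranks as most important.
    flat_scores = scores.reshape(-1)
    top_k = np.argsort(flat_scores)[-k:]   # indices of the k highest scores
    masked = image.reshape(-1).copy()
    masked[top_k] = fill_value             # blank out the "important" pixels
    return masked.reshape(image.shape)

One would then compare the model's predictions on the original and masked inputs; a sharper drop in the predicted class probability suggests the interpretation identified genuinely influential pixels.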
“…aggregation-based methods [28] and Monte Carlo sampling [29]. There are also approaches for graph-structured data such as natural language text and images [30].…”
Section: Feature Importance Methods
confidence: 99%
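The Monte Carlo route mentioned above sidesteps the exponential subset enumeration by sampling random feature permutations and averaging each feature's marginal contribution. A minimal sketch, assuming a caller-supplied coalition score value_fn (a hypothetical stand-in for whatever model-restricted value function is being explained):

import numpy as np

def shapley_monte_carlo(value_fn, d, n_samples=1000, seed=None):
    # Estimate Shapley values for d features; exact computation would need
    # all 2**d coalitions, so we average over sampled permutations instead.
    rng = np.random.default_rng(seed)
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)
        members = set()
        prev = value_fn(frozenset())       # value of the empty coalition
        for i in perm:
            members.add(int(i))
            cur = value_fn(frozenset(members))
            phi[i] += cur - prev           # marginal contribution of feature i
            prev = cur
    return phi / n_samples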
“…Selective rationalization: [27] proposes the first generator-predictor framework for rationalization. Following this work, new game-theoretic frameworks were proposed to encourage different desired properties of the selected rationales, such as optimized Shapley structure scores [14], comprehensiveness [46], multi-aspect supports [4,11] and invariance [12]. Another fundamental direction is to overcome the training difficulties.…”
Section: Related Work
confidence: 99%
“…Selective rationalization [8,10,11,13,14,17,27,29,46] explains the prediction of complex neural networks by finding a small subset of the input (the rationale) that suffices on its own to yield the same outcome as the original data. To generate high-quality rationales, existing methods often train a cascaded system that consists of two components, i.e., a rationale generator and a predictor.
Section: Introduction
confidence: 99%
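The cascaded generator-predictor system described in the quoted passage can be sketched in a few lines of PyTorch. The layer sizes, hard 0.5 threshold, and straight-through gradient trick below are illustrative assumptions, not the exact architecture of any cited paper.

import torch
import torch.nn as nn

class RationaleModel(nn.Module):
    # Minimal selective-rationalization cascade: a generator scores each
    # token, a hard mask selects the rationale, and the predictor sees
    # only the masked input.
    def __init__(self, vocab_size, emb_dim=64, hidden=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.generator = nn.Linear(emb_dim, 1)     # per-token selection logit
        self.predictor = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes))

    def forward(self, tokens):
        x = self.embed(tokens)                     # (batch, seq, emb)
        probs = torch.sigmoid(self.generator(x))   # (batch, seq, 1)
        hard = (probs > 0.5).float()               # hard token selection
        mask = hard + probs - probs.detach()       # straight-through estimator
        pooled = (x * mask).mean(dim=1)            # predictor sees rationale only
        return self.predictor(pooled), mask.squeeze(-1)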