2021
DOI: 10.48550/arxiv.2107.07436
Preprint
FastSHAP: Real-Time Shapley Value Estimation

Abstract: Shapley values are widely used to explain black-box models, but they are costly to calculate because they require many model evaluations. We introduce FastSHAP, a method for estimating Shapley values in a single forward pass using a learned explainer model. FastSHAP amortizes the cost of explaining many inputs via a learning approach inspired by the Shapley value's weighted least squares characterization, and it can be trained using standard stochastic gradient optimization. We compare FastSHAP to existing est…
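The weighted least squares characterization mentioned in the abstract can be sketched as follows. This is an illustrative brute-force solver over all coalitions (the KernelSHAP-style objective that FastSHAP's learned explainer is trained against), not the paper's amortized implementation; the function name `shapley_via_wls` and the large boundary weights are assumptions made for this sketch.

```python
import itertools
import math
import numpy as np

def shapley_via_wls(value_fn, d):
    """Recover Shapley values phi for a d-player game by solving the
    weighted least squares problem
        min_phi  sum_S w(S) * (value_fn(S) - phi0 - sum_{i in S} phi_i)^2
    with the Shapley kernel weights w(S). Enumerates all 2^d coalitions,
    so this is only feasible for small d."""
    rows, targets, weights = [], [], []
    for r in range(d + 1):
        for S in itertools.combinations(range(d), r):
            z = np.zeros(d + 1)
            z[0] = 1.0                      # intercept column (phi0)
            for i in S:
                z[i + 1] = 1.0              # indicator for feature i in S
            s = len(S)
            if s == 0 or s == d:
                w = 1e6                     # approximate the boundary constraints
            else:
                w = (d - 1) / (math.comb(d, s) * s * (d - s))  # Shapley kernel
            rows.append(z)
            targets.append(value_fn(S))
            weights.append(w)
    Z = np.array(rows)
    v = np.array(targets)
    sw = np.sqrt(np.array(weights))
    # Weighted least squares via an ordinary lstsq on rescaled rows.
    beta, *_ = np.linalg.lstsq(Z * sw[:, None], v * sw, rcond=None)
    return beta[1:]                         # Shapley value per feature

# Sanity check on an additive game, where Shapley values equal the
# per-feature contributions exactly.
x = [1.0, 2.0, 3.0]
phi = shapley_via_wls(lambda S: sum(x[i] for i in S), d=3)
```

FastSHAP avoids solving this system per input: it trains a single explainer network whose forward pass outputs phi directly, using a stochastic version of the same objective.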

Cited by 7 publications (6 citation statements)
References 25 publications
“…SHAP (SHapley Additive exPlanations) values were calculated using the fastshap ver.0.1.0 [73] R package, based on the nestedcv results. This was achieved through the fastshap::explain function, with the nsim = 10 parameter, evaluated independently for the 100 nestcv.train iterations.…”
Section: Classification Of Data
confidence: 99%
“…A broader range of neighbourhood contexts will be explored to refine the effectiveness of XAI for an explanandum. We will also explore optimisation techniques that can be used with XAI frameworks for generating explanations faster [42].…”
Section: Limitations and Future Work
confidence: 99%
“…Various modifications of SHAP have been developed to explain different machine learning models and tools [46,47,48,49,50,51,52]. Applications of SHAP can be found in [53,54,55]. Approaches to reducing the computational complexity of SHAP have also been proposed in [56,57,58,59,60]. Many interpretation methods were considered and compared in detail in survey papers [61,62,63,64,65,66,67,68].…”
Section: Related Work
confidence: 99%