2022
DOI: 10.3390/a15110431

Ensembles of Random SHAPs

Abstract: Ensemble-based modifications of the well-known SHapley Additive exPlanations (SHAP) method for the local explanation of a black-box model are proposed. The modifications aim to simplify SHAP, which is computationally expensive when there is a large number of features. The main idea behind the proposed modifications is to approximate SHAP by an ensemble of SHAPs with a smaller number of features. According to the first modification, called ER-SHAP, several features are randomly selected many time…
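The abstract's core idea — approximating SHAP by averaging exact Shapley values computed over many small random feature subsets — can be sketched as follows. This is a minimal illustration of the general idea, not the paper's exact ER-SHAP algorithm: the value function (features outside a coalition are filled from a background point), the subset size, and the averaging scheme are all assumptions for the sake of the example.

```python
import itertools
import math
import random

def exact_shapley(model, x, background, features):
    """Exact Shapley values for a small feature subset.

    A coalition's value is the model output with coalition features taken
    from the explained point x and all other features from `background`
    (one common choice of value function; the paper may differ).
    """
    def value(coalition):
        z = list(background)
        for j in coalition:
            z[j] = x[j]
        return model(z)

    n = len(features)
    phi = {j: 0.0 for j in features}
    for j in features:
        rest = [f for f in features if f != j]
        for r in range(n):
            for S in itertools.combinations(rest, r):
                # Classical Shapley coalition weight |S|! (n-|S|-1)! / n!
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                phi[j] += w * (value(S + (j,)) - value(S))
    return phi

def er_shap(model, x, background, n_features, subset_size=3, n_subsets=50, seed=0):
    """Sketch of the ensemble idea: average exact Shapley values over
    many randomly selected small feature subsets."""
    rng = random.Random(seed)
    sums = [0.0] * n_features
    counts = [0] * n_features
    for _ in range(n_subsets):
        subset = rng.sample(range(n_features), subset_size)
        phi = exact_shapley(model, x, background, subset)
        for j, v in phi.items():
            sums[j] += v
            counts[j] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

For a linear model the per-subset Shapley values are exact, so the ensemble average recovers each feature's marginal contribution while only ever enumerating coalitions of size `subset_size` instead of all 2^n coalitions.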

Cited by 12 publications (3 citation statements)
References 65 publications
“…Finally, the eighth paper is entitled "Ensembles of random SHAPs" and it is authored by Utkin and Konstantinov [11]. In this work, the authors proposed three ensemble-based modifications to the SHapley Additive exPlanations (SHAP) method for the local explanation of a black-box model, called ER-SHAP, ERW-SHAP, and ER-SHAP-RF.…”
Section: Ensemble Learning and/or Explainability
confidence: 99%
“…Various modifications of SHAP have been developed to explain different machine learning models and tools [46,47,48,49,50,51,52]. Applications of SHAP can be found in [53,54,55]. Approaches to reduce the computational complexity of SHAP were also proposed in [56,57,58,59,60]. Many interpretation methods and their comparison were considered and studied in detail in survey papers [61,62,63,64,65,66,67,68].…”
Section: Related Work
confidence: 99%
“…Value φ is calculated from the efficiency property. Thus, we need first to find values for all features for which the explained example is inside the rectangle using (58), and then use (60) to calculate Shapley values for the remaining features. In order to obtain Shapley values for an arbitrary HRBM ensemble, for example boosting, we use the third property (linearity) of Shapley values.…”
Section: Algorithm 5: The Prediction Algorithm
confidence: 99%
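The linearity property invoked in the statement above — the Shapley values of a sum of games (or models) equal the sum of the per-game Shapley values — can be checked directly by brute-force enumeration. The two toy value functions below are illustrative stand-ins for ensemble members, not anything from the cited paper.

```python
import itertools
import math

def shapley(value, n):
    """Exact Shapley values for an n-player cooperative game
    given its characteristic (value) function on coalitions."""
    phi = [0.0] * n
    for j in range(n):
        rest = [k for k in range(n) if k != j]
        for r in range(n):
            for S in itertools.combinations(rest, r):
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                phi[j] += w * (value(frozenset(S) | {j}) - value(frozenset(S)))
    return phi

# Two toy games standing in for two ensemble members.
v1 = lambda S: float(len(S) ** 2)
v2 = lambda S: 3.0 * len(S)
v_sum = lambda S: v1(S) + v2(S)

phi_sum = shapley(v_sum, 3)  # Shapley values of the summed game
phi_parts = [a + b for a, b in zip(shapley(v1, 3), shapley(v2, 3))]
# phi_sum and phi_parts agree, illustrating linearity.
```

Because the Shapley value is a fixed linear combination of coalition values, linearity holds exactly, which is why per-tree (or per-HRBM) Shapley values can simply be summed for a boosted ensemble.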