2020
DOI: 10.1007/978-3-030-65847-2_4

Evaluating Tree Explanation Methods for Anomaly Reasoning: A Case Study of SHAP TreeExplainer and TreeInterpreter

Cited by 15 publications (12 citation statements)
References 14 publications
“…The SHAP value method is a means of interpreting the output of a given machine learning model. Moreover, SHAP can ensure consistency and, when applied to a particular element, can be quantified as the mean of all the significant contributions to the prediction across all possible permutations of the elements …”
Section: Methods
confidence: 99%
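The permutation-averaging definition quoted above can be illustrated with a small brute-force sketch. This is an illustration only, not code from the cited paper; the value_fn payoff function, the handling of "absent" features, and the toy feature weights are all assumptions.

```python
from itertools import permutations

def shapley_values(value_fn, features):
    # Average each feature's marginal contribution over all orderings
    # (permutations) of the features, as in the quoted definition.
    phi = {f: 0.0 for f in features}
    n_orderings = 0
    for order in permutations(features):
        present = []
        for f in order:
            before = value_fn(tuple(present))   # payoff without the feature
            present.append(f)
            after = value_fn(tuple(present))    # payoff with the feature added
            phi[f] += after - before
        n_orderings += 1
    return {f: total / n_orderings for f, total in phi.items()}

# Toy payoff (assumption): the sum of the "present" features' weights.
weights = {"x1": 2.0, "x2": -1.0, "x3": 0.5}
value = lambda subset: sum(weights[f] for f in subset)
print(shapley_values(value, list(weights)))   # recovers each feature's weight
```

For this additive toy payoff the Shapley value of each feature is exactly its weight, which makes the permutation-averaging behaviour easy to verify.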
“…Moreover, SHAP can ensure consistency and, when applied to a particular element, can be quantified as the mean of all the significant contributions to the prediction across all possible permutations of the elements. [52] SHAP is implemented as a Python toolkit for interpretable machine learning and provides a set of Shapley values for every feature in the data set. [53] SHAP determines the effect of each of the input features in terms of positive and negative contribution with a SHAP value for each observation.…”
Section: Shapley Additive Explanations (SHAPs)
confidence: 99%
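As a hedged sketch of the toolkit usage described in this excerpt (the random-forest model and the synthetic data are assumptions, not the cited study's setup), per-observation SHAP values can be obtained roughly as follows:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic placeholder data (assumption, not the cited study's data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one SHAP value per feature per observation

# Positive values push a prediction above the expected (base) value,
# negative values push it below.
print(explainer.expected_value)
print(shap_values[0])   # feature contributions for the first observation
```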
“…In this section, we describe a method by which local SHAP feature importance can be verified. This method, as suggested by Sharma et al., [32] is to perturb a random attribute by a small percentage and find the absolute difference between the SHAP values of the attributes of each example before and after perturbation. The highest absolute difference must be in the perturbed attribute.…”
Section: Validation of SHAP Methodology
confidence: 99%
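A minimal sketch of that perturbation check, assuming a regression tree ensemble and synthetic data (the helper name perturbation_check and the 5% perturbation size are illustrative choices, not the authors' code):

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

def perturbation_check(model, X, feature_idx, pct=0.05):
    # Perturb one feature by a small percentage and check whether the
    # largest absolute change in SHAP values lands on that feature.
    explainer = shap.TreeExplainer(model)
    X_pert = X.copy()
    X_pert[:, feature_idx] *= (1.0 + pct)

    diff = np.abs(explainer.shap_values(X_pert) - explainer.shap_values(X))
    return np.mean(diff.argmax(axis=1) == feature_idx)   # fraction of examples that pass

# Placeholder data and model (assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = 2 * X[:, 0] + X[:, 2] + rng.normal(scale=0.1, size=300)
model = RandomForestRegressor(random_state=0).fit(X, y)
print(perturbation_check(model, X, feature_idx=0))
```

An example passes the check when the perturbed column shows the largest absolute change in its SHAP values.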
“…• Boruta: evaluates the importance of each variable with respect to a permuted version of it to determine its relevance (Kursa et al., 2010). • Shap RFE: a modified version of RFE that identifies the importance of each variable more robustly using SHAP (see Section 2.6) (Lundberg et al., 2019; Sharma et al., 2020).…”
Section: Step 5: Model Improvement
confidence: 99%
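The Shap RFE idea can be sketched as a simple elimination loop. This is not the cited implementation; the model, the synthetic data, and the stopping criterion (keep n_keep features) are assumptions.

```python
import numpy as np
import shap
from sklearn.base import clone
from sklearn.ensemble import RandomForestRegressor

def shap_rfe(model, X, y, n_keep):
    # Repeatedly refit, rank features by mean |SHAP value|,
    # and drop the least important until n_keep remain.
    kept = list(range(X.shape[1]))
    while len(kept) > n_keep:
        est = clone(model).fit(X[:, kept], y)
        sv = shap.TreeExplainer(est).shap_values(X[:, kept])
        importance = np.abs(sv).mean(axis=0)
        kept.pop(int(np.argmin(importance)))   # eliminate the weakest feature
    return kept

# Placeholder data (assumption): only the first two columns carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 4 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=0.1, size=200)
print(shap_rfe(RandomForestRegressor(random_state=0), X, y, n_keep=2))
```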
“…We propose using SHAP (SHapley Additive exPlanations) (Lundberg et al., 2019), because this method allows us to estimate the contribution of each variable to individual predictions in a robust, consistent, and locally accurate way (Lundberg et al., 2018). It uses an optimized procedure for tree-based algorithms, allowing the resultant model to be interpreted and debugged (Sharma et al., 2020; Yoshida, 2020).…”
Section: Step 6: Explaining the Model
confidence: 99%
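As a further hedged sketch (the gradient-boosted model and synthetic data are assumptions), the tree-optimized explainer is often paired with a global summary to interpret and debug a fitted model:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Placeholder data (assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = X[:, 0] ** 2 + X[:, 3] + rng.normal(scale=0.1, size=300)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer uses the fast, exact algorithm available for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view of which features drive predictions and in which direction,
# useful for spotting unexpected model behaviour.
shap.summary_plot(shap_values, X)
```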