2023
DOI: 10.46604/peti.2023.10101
Evaluation of Local Interpretable Model-Agnostic Explanation and Shapley Additive Explanation for Chronic Heart Disease Detection

Abstract: This study investigates the effectiveness of the local interpretable model-agnostic explanation (LIME) and Shapley additive explanation (SHAP) approaches for chronic heart disease detection. The efficiency of LIME and SHAP is evaluated by analyzing the diagnostic results of an XGBoost model and the stability and quality of counterfactual explanations. First, 1025 heart disease samples are collected from the University of California, Irvine. Then, the performance of LIME and SHAP is compared by using the …
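A minimal sketch of the pipeline the abstract describes: train an XGBoost classifier on the UCI heart disease samples, then rank feature contributions with SHAP. The file name heart.csv, the target column, and the split parameters are illustrative assumptions, not details taken from the paper.

import pandas as pd
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Hypothetical local copy of the 1025-sample UCI heart disease set.
df = pd.read_csv("heart.csv")
X, y = df.drop(columns="target"), df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = xgb.XGBClassifier(eval_metric="logloss")
model.fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global summary: which features push predictions most across the test set.
shap.summary_plot(shap_values, X_test)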

Cited by 6 publications (3 citation statements)
References 17 publications

Citation statements:
“…Calculating the contribution of each feature in the model is a challenging task. We employ the Local Interpretable Model-Agnostic Explanation (LIME) and Shapley Additive Explanation Algorithm (SHAP) [48] in our research to provide explanations for LPI-MFF. These methods investigate the contribution of the extracted features by visualizing the high-contributory features from the whole feature set using machine learning algorithms [49].…”
Section: Results (mentioning)
Confidence: 99%
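To make the quoted idea concrete, a short continuation of the sketch above ranks the highest-contributing features by mean absolute SHAP value; the top-5 cut-off is arbitrary and for illustration only.

import numpy as np

# Continues the XGBoost/SHAP sketch above: mean |SHAP| per feature gives a
# simple global ranking of the "high-contributory features" the quote mentions.
mean_abs = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, mean_abs), key=lambda t: t[1], reverse=True)[:5]:
    print(f"{name}: {score:.3f}")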
“…Consider simpler and more interpretable models like Decision Trees or Logistic Regression, especially when clinical decision-making requires transparency. Employ local interpretable models like Local Interpretable Model-Agnostic Explanations (LIME) or SHapley Additive exPlanations (SHAP), which is a game-theoretic approach, to explain individual predictions [192][193][194][195][196][197][198]. These models provide explanations and visualizations for specific instances, which can be valuable in healthcare decision-making since they also help healthcare professionals understand model outputs [199,200].…”
Section: Model Interpretability and Explainability (mentioning)
Confidence: 99%
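As a hedged illustration of the per-instance explanations this passage recommends, the sketch below reuses the model and data split from the first sketch and asks LIME for the top feature weights behind a single prediction; the class names are made-up labels, not taken from the paper.

from lime.lime_tabular import LimeTabularExplainer

# Local linear surrogate fitted around one test instance; the returned
# feature weights explain that single prediction.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X.columns),
    class_names=["no disease", "disease"],  # illustrative, not from the paper
    mode="classification",
)
exp = lime_explainer.explain_instance(
    X_test.iloc[0].values, model.predict_proba, num_features=5
)
print(exp.as_list())  # [(feature condition, weight), ...] pairs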
“…The proposed block scaling approach is inspired by the concept of "local interpretable model-agnostic explanations" (LIME) [22]. The LIME approach has been successfully applied to applications such as chronic heart disease detection [23].…”
Section: BSQ Approach (mentioning)
Confidence: 99%