2021 IEEE XVIIth International Conference on the Perspective Technologies and Methods in MEMS Design (MEMSTECH)
DOI: 10.1109/memstech53091.2021.9468078

Model Explainability using SHAP Values for LightGBM Predictions

Cited by 10 publications (3 citation statements)
References 8 publications

“…To facilitate the application of the prediction models, we conducted feature reduction by illustrating the changes in the prediction accuracy of the models with different numbers of input features (i.e., those with the top 10, 20, 50, and 100 SHAP values) [34, 35]. As shown in Figure S3 of Multimedia Appendix 1, the models for predicting suicidal behaviors within 1 year and from 1 to 6 years both achieved good overall performance once the number of input features with the highest SHAP values was increased to 20, so we adopted the models with 20 input features as the applicable prediction models to facilitate future application.…”
Section: Methods
confidence: 99%
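The feature-reduction procedure the quoted study describes can be reproduced with the shap and lightgbm Python packages. The sketch below is illustrative only: the dataset is synthetic, and the feature counts (10, 20, 50, 100) simply mirror those named in the quote; it is not the authors' actual pipeline.

import numpy as np
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the study's data (assumption: binary outcome,
# 100 candidate input features).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 100))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = lgb.LGBMClassifier(n_estimators=100).fit(X_tr, y_tr)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_tr)
# Some shap versions return one array per class for binary classifiers.
if isinstance(shap_values, list):
    shap_values = shap_values[1]

# Rank features by mean absolute SHAP value, then retrain on the top k
# features and track how accuracy changes as the feature set shrinks.
importance = np.abs(shap_values).mean(axis=0)
ranked = np.argsort(importance)[::-1]
for k in (10, 20, 50, 100):
    cols = ranked[:k]
    reduced = lgb.LGBMClassifier(n_estimators=100).fit(X_tr[:, cols], y_tr)
    print(f"top {k:>3} features: accuracy = {reduced.score(X_te[:, cols], y_te):.3f}")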
“…SHAP is an additive model explanation approach rooted in cooperative game theory. The method explains a prediction in terms of the contribution of each feature to the predicted value (Bugaj et al., 2021). Being a model-agnostic methodology, SHAP can explain individual predictions without being limited to a specific machine-learning model.…”
Section: SHAP Values
confidence: 99%
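As a concrete illustration of the additive property this passage refers to, the sketch below (again with synthetic data, not from any cited study) verifies that the SHAP base value plus the per-feature contributions reconstructs the model's output for a single prediction. TreeExplainer is used here for speed; the model-agnostic KernelExplainer would produce comparable attributions for non-tree models.

import numpy as np
import lightgbm as lgb
import shap

# Toy regression data; any trained tree model would work the same way.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=300)

model = lgb.LGBMRegressor(n_estimators=200).fit(X, y)

explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]  # one row, one explanation

# Additivity: the expected value (base rate) plus the per-feature
# contributions equals the model's prediction for this sample.
print("model output    :", model.predict(X[:1])[0])
print("base + SHAP sum :", explainer.expected_value + contributions.sum())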
“…To achieve this, the LightGBM (Light Gradient Boosting Machine) data analysis technique can be a solution for bank marketing, enhancing the effectiveness of marketing campaign strategies. LightGBM is a gradient boosting framework based on decision trees, developed by Microsoft in 2017 [2]. In terms of CPU execution time and accuracy, LightGBM significantly outperforms other gradient boosting methods [3], [4].…”
Section: Introduction
confidence: 99%
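For readers unfamiliar with the framework, a minimal LightGBM training run with the native API might look like the following; the data and parameters are placeholders, not settings from the cited bank-marketing study.

import numpy as np
import lightgbm as lgb

# Placeholder binary-classification data.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

train = lgb.Dataset(X[:800], label=y[:800])
valid = lgb.Dataset(X[800:], label=y[800:], reference=train)

params = {
    "objective": "binary",
    "metric": "auc",
    "learning_rate": 0.1,
    # Leaf-wise (best-first) tree growth, bounded by num_leaves, is the
    # design choice behind LightGBM's speed/accuracy advantage.
    "num_leaves": 31,
}
booster = lgb.train(params, train, num_boost_round=100, valid_sets=[valid])
print("validation AUC:", booster.best_score["valid_0"]["auc"])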