2022
DOI: 10.11591/eei.v11i2.3391

Extraction of human understandable insight from machine learning model for diabetes prediction

Abstract: Explaining the reason for a model's output as diabetes positive or negative is crucial for diabetes diagnosis: reasoning about the predictive outcome helps to understand why the model assigned an instance to the positive or negative class. In recent years, high predictive accuracy and promising results have been achieved with models ranging from simple linear models to complex deep neural networks. However, complex models such as ensembles and deep neural networks come with a trade-off between accuracy and interpretability.…
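To make the idea concrete, here is a minimal sketch of the kind of post-hoc explanation workflow the abstract describes, using SHAP over a tree ensemble. The file name, column names, and model choice are illustrative assumptions (Pima-style data), not details taken from the paper.

```python
# Hedged sketch: ranking feature importance with SHAP for a diabetes
# classifier. File name, column names, and model are illustrative
# assumptions (Pima-style data), not details taken from the paper.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("diabetes.csv")                     # assumed Pima-style CSV
X, y = df.drop(columns="Outcome"), df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# SHAP attributes each prediction to the inputs: how much glucose, BMI,
# etc. pushed an individual patient toward the positive class.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_test)
sv = sv[1] if isinstance(sv, list) else sv[..., 1]   # positive-class values

ranking = pd.Series(np.abs(sv).mean(axis=0), index=X.columns)
print(ranking.sort_values(ascending=False))          # global importance order
```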

Cited by 9 publications (7 citation statements)
References 19 publications
“…In different application domains, researchers have explored the interpretability of deep models. In the medical field, Assegie et al [14] adopted LIME and SHAP to rank the importance of features and explain the model's output on whether a patient is diabetic or not. To provide clinical doctors with a clear understanding of the classification criteria utilized by GNN in Alzheimer's disease prediction, Anjomshoae et al [15] proposed a single node classification explanation method.…”
Section: Intention Attribution and Explanation for Social Crisis Events
Mentioning; confidence: 99%
“…Lundberg developed the G-DeepShap method for explaining complex machine learning models and evaluated its performance comprehensively on biological, medical, and financial datasets [35]. In [36], a study was conducted on the individual parameters that affect the prediction of diabetes in patients. The authors used various combinations of LIME (local interpretable model-agnostic explanations) and SHAP.…”
Section: Literature Review
Mentioning; confidence: 99%
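Continuing the assumed setup from the SHAP sketch above, a hedged illustration of the complementary LIME side of such combinations; the class names and instance index are illustrative choices, not taken from [36]:

```python
# Hedged sketch: a local LIME explanation for one diabetes prediction,
# reusing X_train/X_test/model from the SHAP sketch above (all assumed).
from lime.lime_tabular import LimeTabularExplainer

lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X_train.columns),
    class_names=["negative", "positive"],
    mode="classification",
)

# Explain why the classifier labelled this single test instance as it did.
exp = lime_explainer.explain_instance(
    X_test.values[0], model.predict_proba, num_features=5
)
print(exp.as_list())   # top local feature contributions for this patient
```

LIME fits a simple local surrogate around one instance, so its output is a per-patient explanation, which complements SHAP's dataset-wide ranking.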
“…[51] use the Diabetes Pedigree Function variable together with others in ensemble machine learning techniques. [52] consider the positive role of the diabetes pedigree function and weight in determining diabetes with machine learning algorithms. [53] use the Diabetes Pedigree Function variable along with other variables to predict diabetes using fuzzy methods.…”
Section: Literature Review
Mentioning; confidence: 99%
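As a rough companion to these statements, one way to check the weight an ensemble assigns to the pedigree variable is to read off impurity-based importances; the column name follows the Pima dataset, and the actual models used in [51]-[53] are not specified here:

```python
# Hedged sketch: impurity-based importance of DiabetesPedigreeFunction in a
# gradient-boosting ensemble, continuing the assumed setup above. The column
# name follows the Pima dataset; [51]-[53] may use different models entirely.
from sklearn.ensemble import GradientBoostingClassifier

gb = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
importances = dict(zip(X_train.columns, gb.feature_importances_))
print(importances.get("DiabetesPedigreeFunction"))
```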