2020
DOI: 10.1016/j.procs.2020.02.255
Explainable AI: A Hybrid Approach to Generate Human-Interpretable Explanation for Deep Learning Prediction

Cited by 33 publications (14 citation statements) | References 2 publications
“…The explainability and transparency of decisions are exceptionally important in healthcare models. Regarding human-interpretable explanations [33], a hybrid model that combines a network's hidden-layer representation with a TREPAN decision tree [34] can produce higher-quality reason codes: concise, human-interpretable reasons for model outcomes at the instance level.…”
Section: Results (mentioning)
confidence: 99%
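The statement above describes extracting reason codes by pairing a network with a TREPAN decision tree. A minimal sketch of that general idea, assuming a synthetic dataset and a small scikit-learn network rather than the paper's actual setup: a shallow surrogate tree is fit to mimic the network's predictions, so its root-to-leaf paths serve as concise, instance-level reasons.

```python
# Hedged sketch of a TREPAN-style surrogate: a shallow decision tree is
# trained to mimic a network's PREDICTIONS (not the raw labels), so its
# decision paths read as human-interpretable reason codes.
# The dataset, network size, and tree depth are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# "Black box": a small feed-forward network.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

# Surrogate: a depth-limited tree fit to the network's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, net.predict(X))

# Fidelity: how often the surrogate agrees with the network it explains.
fidelity = (surrogate.predict(X) == net.predict(X)).mean()
print(f"surrogate fidelity to the network: {fidelity:.2f}")
```

The key design choice is that fidelity to the network, not accuracy on the labels, is what makes the tree a faithful explanation of the model's behavior.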
“…Explainable artificial intelligence (XAI) is undergoing a rapid transformation driven by recent deep learning advances, as many previously unsolved obstacles become solvable step by step. The latest progress shows that applications of real-world complexity can gradually be made interpretable (Kuo et al. 2019; Rudin 2019; De et al. 2020; Nguyen et al. 2020a). Nevertheless, XAI is a young topic that attracts growing interest, and the number of published articles is rising quickly.…”
Section: Discussion (mentioning)
confidence: 99%
“…Therefore, the approach preserved accuracy, sensitivity, and specificity with a shallower structure, in which the trainable parameters were reduced by 20%. Recently, a hybrid model combining two prior algorithms, the TREPAN decision tree and the clustering of a hidden-layer representation, was proposed to deconstruct a deep learning network (De et al. 2020). The proposed model aims to visualize the information flow of the underlying model and make it comprehensible to humans.…”
Section: Hybrid Interpretable Model (mentioning)
confidence: 99%
“…Generalized linear models (GLMs) provide meaningful, clear, and accessible feature importances that indicate the relative contribution of each feature to a regression model's prediction. The output of a regression model is a linear combination of the features, weighted according to each feature's significance [35].…”
Section: Explainable AI (mentioning)
confidence: 99%
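The GLM claim above is easy to make concrete: because the prediction is a weighted sum of features, the fitted weights act directly as importance scores, and any single prediction decomposes exactly into per-feature contributions. A minimal sketch on synthetic data (the feature names and true weights are assumptions for illustration):

```python
# Minimal sketch of GLM-style explainability: fitted linear weights
# serve as feature importances, and each prediction decomposes exactly
# into per-feature contributions plus the intercept.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Ground truth (an assumption for the demo): f0 dominates, f2 is irrelevant.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

glm = LinearRegression().fit(X, y)
for name, w in zip(["f0", "f1", "f2"], glm.coef_):
    print(f"{name}: weight {w:+.3f}")

# One prediction = sum of per-feature contributions + intercept.
contrib = glm.coef_ * X[0]
assert np.isclose(contrib.sum() + glm.intercept_, glm.predict(X[:1])[0])
```

This exact additive decomposition is what makes linear models a common transparency baseline against which post-hoc explanations of deep networks are judged.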