2021
DOI: 10.3390/app12010136

Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation

Abstract: Trust and credibility in machine learning models are bolstered by the ability of a model to explain its decisions. While explainability of deep learning models is a well-known challenge, a further challenge is clarity of the explanation itself for relevant stakeholders of the model. Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive human-readable heat maps of input images. We present the novel application of LRP with…
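The abstract describes LRP redistributing a model's output back through the network so each input receives a relevance score. A minimal sketch of the standard LRP epsilon-rule for a single dense layer (an illustrative assumption, not the paper's implementation; the function name and toy weights are hypothetical):

```python
import numpy as np

def lrp_epsilon(x, W, b, R_out, eps=1e-6):
    """LRP epsilon-rule for one dense layer.
    x: (d_in,) inputs, W: (d_in, d_out) weights, b: (d_out,) biases,
    R_out: (d_out,) relevance at the layer's output.
    Relevance flows to input i in proportion to its contribution z_ij = x_i * w_ij."""
    z = x[:, None] * W                          # per-connection contributions z_ij
    s = z.sum(axis=0) + b                       # pre-activations (assumed nonzero here)
    denom = s + eps * np.sign(s)                # epsilon stabilizer against small denominators
    return (z * (R_out / denom)[None, :]).sum(axis=1)  # relevance per input feature

# Toy layer: 3 input features, 2 output neurons; take output relevance = ReLU activations.
x = np.array([1.0, 2.0, 0.5])
W = np.array([[0.2, -0.1], [0.4, 0.3], [-0.5, 0.8]])
b = np.zeros(2)
R_out = np.maximum(x @ W + b, 0.0)
R_in = lrp_epsilon(x, W, b, R_out)
```

The defining property visible in the sketch is conservation: total relevance at the input approximately equals total relevance at the output, which is what makes the resulting per-feature scores readable as a heat map.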

Cited by 23 publications (10 citation statements) · References 53 publications (78 reference statements)
“…While hybrid models are powerful, their complicated structures make them difficult to interpret. Model predictions can be attributed back to input features using techniques such as Layer-wise Relevance Propagation (LRP) and Shapley Additive Explanations (SHAP) to facilitate explainable AI [100].…”
Section: 4 Discussion and Practical Hints
confidence: 99%
“…Despite its superior predictive performance, deep learning is often criticized for its poor model interpretability. To overcome this limitation, deep learning algorithms that focus on model explanations have emerged in recent years ( Lundberg and Lee, 2017 ; Ribeiro et al., 2016 ; Ullah et al., 2020 ).…”
Section: Machine Learning for Multi-omics Integration
confidence: 99%
“…SHAP presents a unified approach to model prediction. SHAP draws on Local Interpretable Model-agnostic Explanations (LIME) [20], DeepLIFT [21], Layer-wise Relevance Propagation [22], and classic Shapley value estimation. Shapley values are computed via Shapley regression, Shapley sampling, and Quantitative Input Influence on features using LIME, DeepLIFT, and Layer-wise Relevance Propagation.…”
Section: Shapley Additive Explanations (SHAP) [18] [19]
confidence: 99%
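The statement above references classic Shapley value estimation as one of the attribution schemes SHAP unifies. A brute-force sketch of exact Shapley values by coalition enumeration (illustrative only; the function name, toy linear model, and baseline are assumptions, and the exponential cost limits this to a handful of features):

```python
from itertools import combinations
from math import factorial

def shapley_values(model, baseline, x):
    """Exact Shapley values for `model` at point `x`.
    A coalition S of features takes its values from x; absent features take
    the baseline value. Enumerates all 2^(n-1) coalitions per feature."""
    n = len(x)

    def v(S):  # value of coalition S
        return model([x[i] if i in S else baseline[i] for i in range(n)])

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k out of n players
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

# Toy linear model: for linear models the Shapley value of feature i
# reduces to w_i * (x_i - baseline_i), so the attributions are [2.0, 3.0, -1.0].
model = lambda z: 2.0 * z[0] + 3.0 * z[1] - z[2]
phi = shapley_values(model, baseline=[0.0, 0.0, 0.0], x=[1.0, 1.0, 1.0])
```

The linear-model case makes the method easy to check by hand, which is why SHAP's unification of LIME, DeepLIFT, and LRP under Shapley axioms is attractive: the axioms pin down a unique attribution.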