2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI) 2021
DOI: 10.1109/sami50585.2021.9378686

Explaining Deep Neural Network using Layer-wise Relevance Propagation and Integrated Gradients


Cited by 7 publications (4 citation statements) · References 24 publications
“…Given the availability of the actual knot core locations within knotted sequences, we conducted an evaluation to measure the model's capability to not only correctly classify an input sequence, but also to pinpoint the knot core's location. However, conventional interpretability techniques, like Layer Integrated Gradients (Cik et al., 2021), have limited applicability on biological data since they can only relate to individual input points (in our case amino acids), whereas protein folding requires the cooperation of groups of them simultaneously. Therefore, we proposed a patching technique: we monitored how the model's prediction changed after replacing a continuous segment of amino acids in the original sequence with X characters, with respect to the model's prediction of the original unpatched sequence.…”
Section: Models and Methods
confidence: 99%
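The patching technique quoted above is essentially a sliding-window occlusion test. A minimal sketch of that idea follows; the function name `patch_importance`, the `predict_fn` callable, and the window parameters are illustrative assumptions, not taken from the cited work:

```python
def patch_importance(sequence, predict_fn, window=10, step=5):
    """Slide a window along the sequence, replace each segment with 'X'
    characters, and record how much the model's score drops relative to
    the unpatched baseline. Larger drops mark more important segments."""
    baseline = predict_fn(sequence)
    drops = []
    for start in range(0, len(sequence) - window + 1, step):
        patched = sequence[:start] + "X" * window + sequence[start + window:]
        drops.append((start, baseline - predict_fn(patched)))
    return drops
```

With a toy scoring function (say, the fraction of a given residue type), the largest drop lands on the window that occludes the scored motif, which is exactly the localization behavior the citing authors exploit for knot cores.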
“…For model-agnostic FI, SHAP 1 and LIME 2 are state-of-the-art open-source FI frameworks [21], [22]. Other FI frameworks such as integrated gradients and layer-wise relevance propagation are specific to neural networks and mainly used for image datasets [23].…”
Section: B. XAI Framework
confidence: 99%
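Integrated gradients, mentioned in the statement above and in the title of the indexed paper, attributes a prediction by integrating the model's gradient along a straight path from a baseline input to the actual input. A generic sketch under the assumption that the caller supplies a gradient function `grad_fn` (in practice this would wrap a framework's autograd):

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    """Approximate the integrated-gradients attribution
        (x - baseline) * integral_0^1 grad f(baseline + a*(x - baseline)) da
    with a Riemann sum over `steps` interpolation points."""
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.array([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)
```

A useful sanity check is the completeness property: the attributions sum (approximately) to f(x) - f(baseline), which holds exactly here for quadratic f because the path integral is evaluated symmetrically.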
“…Given the availability of the actual knot core locations within knotted sequences, we conducted an evaluation to measure the model's capability to not only correctly classify an input sequence, but also to pinpoint the knot core's location. However, conventional interpretability techniques, like Layer Integrated Gradients [43], have limited applicability on biological data since they can only relate to individual input points (in our case amino acids), whereas protein folding requires the cooperation of groups of them simultaneously. Therefore, we proposed a patching technique: we monitored how the model's prediction changed after replacing a continuous segment of amino acids in the original sequence with X characters, with respect to the model's prediction of the original unpatched sequence.…”
Section: Interpretation With Patching Technique
confidence: 99%