2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01332

Neural Response Interpretation through the Lens of Critical Pathways

Cited by 22 publications (17 citation statements)
References 24 publications
“…FullGrad leverages the activation, gradient, and bias values from all layers. PathwayGrad [13] leverages critical pathways (pathway-important neurons).…”
Section: Explaining Predictions Via Feature Attribution (mentioning)
confidence: 99%
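For orientation, the critical-pathway idea this citation refers to can be sketched as: score the hidden neurons by their contribution to the response, keep only the top-scoring ones, and attribute input relevance through that restricted pathway. The toy PyTorch sketch below illustrates this general idea only; the network, the gradient-times-activation score, and the top-k selection are illustrative assumptions, not the authors' exact pruning procedure.

```python
import torch

# Toy two-layer ReLU network used to illustrate a "critical pathway":
# score hidden neurons by their contribution to the response, keep the
# top-k, and attribute the input only through them.
torch.manual_seed(0)
W1 = torch.randn(8, 4, requires_grad=True)   # hidden (8) x input (4)
W2 = torch.randn(1, 8, requires_grad=True)   # output (1) x hidden (8)

x = torch.randn(4, requires_grad=True)       # illustrative input
h = torch.relu(W1 @ x)                       # hidden activations
y = (W2 @ h).squeeze()                       # scalar neural response

# Contribution score per hidden neuron: |dy/dh * h| (an illustrative
# choice, not necessarily the paper's criterion).
grad_h = torch.autograd.grad(y, h)[0]
score = (grad_h * h).abs()

k = 3
mask = torch.zeros_like(h)
mask[score.topk(k).indices] = 1.0            # gate: 1 on the pathway, 0 off

# Recompute the forward pass with only pathway neurons active, then take
# the input gradient through that restricted pathway.
y_path = (W2 @ (mask * torch.relu(W1 @ x))).squeeze()
saliency = torch.autograd.grad(y_path, x)[0].abs()
print(saliency)
```

Gating the hidden units before recomputing the forward pass ensures the input gradient flows only through the selected pathway neurons rather than the full network.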
“…The problem is called feature attribution [16,35], and the solutions are commonly known as explanation, attribution, or saliency methods. There is an extensive list of explanation methods in the literature [8,9,13,15,16,19,26,27,29,32,33,35,39,41]. One peculiar observation is that these solutions point to different features as being important.…”
Section: Introduction (mentioning)
confidence: 99%
“…Others address this problem with multi-task learning [10,26], using selective instances for training on imbalanced sets for each task. Interpreting Neural Networks: Two principal neural network interpretation approaches are feature attribution [29,32,13,30,22,17,12] (i.e. saliency methods [4]) and analyzing internal units (e.g.…
Section: Related Work (mentioning)
confidence: 99%
“…For post-hoc explanation of chest X-ray models, many works opt for feature attribution methodologies [21,23,1,18,9,25]. These works use feature attribution methods, such as Class Activation Maps (CAM) [25,18], to reveal which input regions are contributing to the output prediction.…”
Section: Introduction (mentioning)
confidence: 99%
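For context, CAM applies to networks whose final convolutional features feed global average pooling and a linear classifier: each feature map is weighted by the classifier weight of the target class and the weighted maps are summed. A minimal PyTorch sketch of standard CAM (assuming torchvision >= 0.13 for the `weights` API; the random input is a stand-in for a preprocessed image):

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Pretrained ResNet-18: its final conv features feed global average
# pooling and a linear classifier, the structure CAM requires.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

features = {}
def hook(module, inputs, output):
    features["maps"] = output                 # (1, C, H, W) conv feature maps

model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)               # stand-in for a preprocessed image
with torch.no_grad():
    logits = model(x)
cls = logits.argmax(dim=1).item()

# CAM: weight each feature map by the classifier weight of the target
# class, sum over channels, then upsample to the input size.
w = model.fc.weight[cls]                       # (C,)
cam = F.relu((w[:, None, None] * features["maps"][0]).sum(dim=0))
cam = F.interpolate(cam[None, None], size=x.shape[-2:], mode="bilinear",
                    align_corners=False)[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```

The resulting heat map highlights the input regions that most increase the predicted class score, which is the sense in which the cited works use CAM for chest X-ray model explanations.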