2019
DOI: 10.48550/arxiv.1911.11081
Preprint

Improving Feature Attribution through Input-specific Network Pruning

Cited by 4 publications (3 citation statements)
References 0 publications

“…However, the perturbation-based XAI methods have the challenge of combinatorial complexity explosion. This happens when one attempts to go through all elements of the input and all their possible combinations to observe how each of them would affect the output [62]. The possible combinations of data perturbations increase dramatically when dealing with 3D images, causing a significant increase in computational costs.…”
Section: B. Explainable Methods (mentioning)
confidence: 99%
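
The statement above points to the combinatorial explosion of perturbation-based attribution: observing the effect of every subset of input elements requires one model evaluation per subset. A minimal sketch of that growth, using illustrative (assumed) input sizes for a 2D patch versus a 3D volume:

```python
from math import comb

# Illustrative input sizes (assumptions): a 32x32 2D patch vs. a 32^3 3D volume.
n_2d = 32 * 32          # 1,024 input elements
n_3d = 32 * 32 * 32     # 32,768 input elements

# Exhaustively perturbing every combination of elements needs one model
# evaluation per subset, i.e. 2**n forward passes.
print(f"2D subsets: 2**{n_2d}  (~10^{int(n_2d * 0.30103)})")
print(f"3D subsets: 2**{n_3d}  (~10^{int(n_3d * 0.30103)})")

# Even restricting perturbations to pairs of elements grows quadratically
# with the number of input elements.
print(f"pairs, 2D: {comb(n_2d, 2):,}    pairs, 3D: {comb(n_3d, 2):,}")
```
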
“…If it were to iterate over all input elements and all possible combinations of them and observe how each one changes the output, this would be impractical in practice due to enormous cost constraints. Gradient-based attribution methods calculate the importance of elements by using model gradients, but tend to generate noise, especially in particularly large networks [13]. In addition, if the backpropagation algorithm is used, it needs to access the internal information of the model to generate the explanation, so it will be limited in use.…”
Section: Perturbation-based Saliency Methods (mentioning)
confidence: 99%
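
The statement above contrasts exhaustive perturbation with gradient-based attribution, which scores input elements from the model's gradients in a single backward pass. A minimal sketch of the gradient-times-input variant; the toy PyTorch model and input size below are assumptions for illustration, not the method of the cited works:

```python
import torch
import torch.nn as nn

# Toy classifier standing in for any differentiable model (assumption for
# illustration; the cited works apply this to much larger image networks).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

x = torch.randn(1, 8, requires_grad=True)   # input whose elements we attribute
logits = model(x)
target = logits.argmax(dim=1).item()        # explain the predicted class

# Backpropagate the target logit to obtain d(logit)/d(input) in one pass,
# instead of one forward pass per perturbation.
logits[0, target].backward()

# Gradient-times-input attribution; larger magnitude = more influential element.
attribution = (x.grad * x.detach()).abs().squeeze(0)
print(attribution)
```
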
“…Others address this problem by multi-task learning [10,26] that used selective instances for training on imbalanced sets for each task. Interpreting Neural Networks: Two principal neural network interpretation approaches are feature attribution [29,32,13,30,22,17,12] (i.e. saliency methods [4]) and analyzing internal units (e.g.…”
Section: Related Work (mentioning)
confidence: 99%