2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv45572.2020.9093318

Is Pruning Compression?: Investigating Pruning Via Network Layer Similarity

Cited by 8 publications (8 citation statements) | References 7 publications
“…On another note, pruning is argued to be important since one cannot achieve the same adversarial robustness or performance by training from scratch [69]. In addition, a recent study [7] confirms the observations of Liu et al. [45] and studies the differences between the representations of the pruned nets and vanilla nets.…”
Section: Related Work (supporting, confidence: 68%)
“…In the literature, the de facto standard for assessing neuron importance is to check the average absolute value of its weights, which is an acceptable proposition for fully connected neural nets [23,22,66,19,41,64,67,63,69,7]. However, for visual recognition, convolutional neural nets (CNNs) [13,37] have already surpassed fully connected nets in terms of both accuracy and efficiency.…”
Section: Introduction (mentioning, confidence: 99%)
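
As a concrete illustration of this magnitude criterion, here is a minimal sketch (not taken from the cited papers; the function names and the 8x16 layer are illustrative assumptions) that scores each neuron of a fully connected layer by the mean absolute value of its incoming weights and zeroes out the weakest ones:

```python
import numpy as np

def neuron_importance(weights: np.ndarray) -> np.ndarray:
    """Mean absolute incoming weight per output neuron.

    weights: (out_features, in_features) matrix of a fully
    connected layer, scored with the L1-magnitude criterion.
    """
    return np.abs(weights).mean(axis=1)

def prune_neurons(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Zero out the rows (neurons) with the smallest importance."""
    scores = neuron_importance(weights)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    keep = np.argsort(scores)[-n_keep:]   # indices of the strongest neurons
    mask = np.zeros(weights.shape[0], dtype=bool)
    mask[keep] = True
    pruned = weights.copy()
    pruned[~mask] = 0.0                   # prune the weak neurons
    return pruned

# Example: keep the top 50% of neurons in a random 8x16 layer.
W = np.random.randn(8, 16)
W_pruned = prune_neurons(W, keep_ratio=0.5)
```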
“…In mathematics, the rank of a smooth function measures the amount of independent information captured by the function [21]. Deep neural networks are highly smooth functions; thus the rank of a network has long been an essential concept in machine learning that underlies many tasks such as information compression [48,56,36,54,49], network pruning [32,55,5,25,9], data mining [6,24,10,57,18,29], computer vision [59,58,31,27,29,60], and natural language processing [8,28,7,11]. Numerous methods are either designed to exploit the mathematical properties of network ranks or derived from the assumption that low-rank structures are preferable.…”
Section: Introduction (mentioning, confidence: 99%)
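
In practice, the rank of a layer is usually estimated numerically from the singular values of its weight or activation matrix. A minimal sketch of one common proxy follows; the tolerance and matrix sizes are illustrative assumptions, not details from the cited works:

```python
import numpy as np

def effective_rank(matrix: np.ndarray, tol: float = 1e-3) -> int:
    """Count singular values above tol * largest singular value.

    A standard numerical proxy for how much independent
    information a linear map carries.
    """
    s = np.linalg.svd(matrix, compute_uv=False)  # sorted descending
    if s.size == 0 or s[0] == 0:
        return 0
    return int(np.sum(s > tol * s[0]))

# A low-rank product has rank <= 4 even though it is 64x64.
A = np.random.randn(64, 4) @ np.random.randn(4, 64)
print(effective_rank(A))  # prints 4 (up to numerical noise)
```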
“…Previous studies [3,17] have shown that pruned neural networks evolve to substantially different representations while striving to preserve overall accuracy. In Section 3, we have demonstrated that knowledge distillation can effectively mitigate both pruning and data induced bias in compressed networks.…”
Section: Explaining Model Bias Using Model Similarity (mentioning, confidence: 99%)
“…Movva and Zhao [17] investigated the impact of pruning on layer similarities of NLP models using LinearCKA [12]. Ansuini et al. and Blakeney et al. also investigated how pruning can change representations using similarity-based measures [2,3]. Unfortunately, very few works have studied how pruning can induce bias and how to evaluate and mitigate the pruning-induced bias.…”
Section: Related Work (mentioning, confidence: 99%)
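
For reference, linear CKA itself fits in a few lines. The sketch below follows the standard formulation by Kornblith et al. [12] (centered activations, Frobenius norms); the array shapes and variable names are illustrative assumptions:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two activation matrices.

    X: (n_examples, d1) activations of layer A.
    Y: (n_examples, d2) activations of layer B.
    Returns a similarity in [0, 1]; 1 means identical
    representations up to rotation and isotropic scaling.
    """
    X = X - X.mean(axis=0, keepdims=True)  # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return float(hsic / (norm_x * norm_y))

# Identical representations score 1; independent noise scores near 0.
X = np.random.randn(512, 64)
print(linear_cka(X, X))                         # ~1.0
print(linear_cka(X, np.random.randn(512, 64)))  # near 0
```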