2020
DOI: 10.48550/arxiv.2009.11094
Preprint

Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot

Abstract: Network pruning is a method for reducing test-time computational resource requirements with minimal performance degradation. Conventional wisdom about pruning algorithms suggests that: (1) pruning methods exploit information from training data to find good subnetworks; (2) the architecture of the pruned network is crucial for good performance. In this paper, we conduct sanity checks for the above beliefs on several recent unstructured pruning methods and surprisingly find that: (1) a set of methods which aims to …
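The abstract contrasts pruning methods that use training-data information with randomly chosen subnetworks ("random tickets"). As a purely illustrative aid (not the authors' code; names, shapes, and the NumPy-only setting are assumptions), the sketch below builds the two kinds of unstructured masks at the same layerwise sparsity:

import numpy as np

def magnitude_mask(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Keep the largest-magnitude weights in a layer; prune the rest."""
    k = int(round(sparsity * weights.size))       # number of weights to prune
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest |w|
    return np.abs(weights) > threshold

def random_mask(shape, sparsity: float, rng=None) -> np.ndarray:
    """Prune a uniformly random subset of weights at the same sparsity."""
    rng = np.random.default_rng() if rng is None else rng
    mask = np.ones(int(np.prod(shape)), dtype=bool)
    k = int(round(sparsity * mask.size))
    pruned = rng.choice(mask.size, size=k, replace=False)
    mask[pruned] = False
    return mask.reshape(shape)

if __name__ == "__main__":
    w = np.random.randn(256, 128)  # a hypothetical dense layer
    for name, m in [("magnitude", magnitude_mask(w, 0.9)),
                    ("random", random_mask(w.shape, 0.9))]:
        print(f"{name:9s} mask keeps {m.mean():.1%} of weights")

Both masks retain roughly 10% of the weights here; the paper's sanity checks ask whether the data-dependent choice actually matters once layerwise sparsity ratios are fixed.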

Citations: cited by 8 publications (16 citation statements)
References: 20 publications
“…In the high-sparsity regime we are interested in, and by significantly redesigning the algorithm of Ramanujan et al (2020), we can find rare gems that can be finetuned to competitive accuracy to IMP with warm-up. Furthermore, our rare gems beat all presented baselines by Frankle et al (2020b); Su et al (2020). In Fig.…”
Section: Introduction | Citation type: mentioning | Confidence: 61%
“…Their intuition is that this will lead to masks at initialization that are more amenable to training to high accuracy within a few steps. The authors get promising results, but do not compare extensively against the sanity checks of Frankle et al (2020b) and Su et al (2020), and it is unclear how their scheme (ProsPr) and our presented work compare. In a future version of this manuscript we plan to offer a thorough comparison with ProsPr.…”
Section: Related Work | Citation type: mentioning | Confidence: 94%
“…To close with a few pointers to the literature, as Lemma 3.2 is essentially a pruning bound, it is potentially of independent interest; see for instance the literature on lottery tickets and pruning (Frankle and Carbin, 2019; Frankle et al, 2020; Su et al, 2020). Secondly, there is already one generalization bound in the literature which exhibits spectral norms, due to Suzuki et al (2019); unfortunately, it also has an explicit dependence on network width.…”
Section: Direct Uniform Convergence Approach in Theorem 14 | Citation type: mentioning | Confidence: 99%