2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00029
Revisiting Random Channel Pruning for Neural Network Compression

Abstract: Channel (or 3D filter) pruning serves as an effective way to accelerate the inference of neural networks. There has been a flurry of algorithms that try to solve this practical problem, each claimed to be effective in some way. Yet, a benchmark to compare those algorithms directly is lacking, mainly due to the complexity of the algorithms and some custom settings such as the particular network configuration or training procedure. A fair benchmark is important for the further development of channel pruning. Mea…
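To make the pruning operation concrete, here is a minimal sketch (PyTorch assumed; the helper prune_conv_channels is illustrative, not from the paper) of what removing channels means in practice: dropping output filters of one convolution together with the matching input channels of the next, here with a random selection, which is the simple baseline the paper revisits.

```python
import torch
import torch.nn as nn

def prune_conv_channels(conv, next_conv, keep_idx):
    """Keep only the output channels of `conv` listed in `keep_idx`,
    and drop the matching input channels of `next_conv`."""
    pruned = nn.Conv2d(conv.in_channels, len(keep_idx), conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep_idx].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep_idx].clone()

    # The following layer must drop the same channels along its input dim.
    next_pruned = nn.Conv2d(len(keep_idx), next_conv.out_channels,
                            next_conv.kernel_size, stride=next_conv.stride,
                            padding=next_conv.padding,
                            bias=next_conv.bias is not None)
    next_pruned.weight.data = next_conv.weight.data[:, keep_idx].clone()
    if next_conv.bias is not None:
        next_pruned.bias.data = next_conv.bias.data.clone()
    return pruned, next_pruned

# Random channel selection: keep a random half of conv1's 64 filters.
conv1 = nn.Conv2d(3, 64, 3, padding=1)
conv2 = nn.Conv2d(64, 128, 3, padding=1)
keep = torch.randperm(64)[:32]
conv1, conv2 = prune_conv_channels(conv1, conv2, keep)
```

Both the parameter count and the FLOPs of the pruned pair shrink roughly in proportion to the fraction of channels removed, which is why channel pruning accelerates inference without requiring sparse-matrix support.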

Cited by 78 publications (26 citation statements)
References 56 publications
“…It can be inferred that this method has a lower error rate than every other method, and this is achieved while keeping the FLOPs relatively low. Compared to the baseline network [33], the FLOPs are reduced by 52.55%, while the error increases by only 0.84%. Compared with standard random pruning, at a similar FLOPs reduction, the error is 0.63% lower.…”
Section: Methods
mentioning confidence: 90%
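For context (arithmetic inferred from the quoted figures and the table in the next statement, not part of the quoted text): the baseline [33] costs 4110M FLOPs at 23.40% top-1 error, so a 52.55% reduction gives 4110M × (1 − 0.5255) ≈ 1950M FLOPs; the quoted 0.84% increase is 24.24% − 23.40%, and the 0.63% gap to Random Pruning is 24.87% − 24.24%.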
“…
Method                  Top-1 Err.  Top-5 Err.  FLOPs
Baseline [33]           23.40%      -           4110M
GAL-0.5 [37]            28.05%      9.06%       2341M
SSS [25]                28.18%      9.21%       2341M
HRank [36]              25.02%      7.67%       2311M
Random Pruning [33]     24.87%      7.48%       2013M
AutoPruner [43]         25.24%      7.85%       2005M
Adapt-DCP [38]          24.85%      7.70%       1955M
MetaPruning [40]        24.60%      -           2005M
Rewarded meta-pruning   24.24%      7.35%       1950M
…”
Section: Methods
mentioning confidence: 99%
“…A study of random channel pruning [31] has been proposed to benchmark the channel pruning methods published so far. This study showed that repeatedly selecting random subsets of channels during the training phase achieved the best results.…”
Section: Related Work
mentioning confidence: 99%
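A minimal sketch of the training scheme that statement describes (PyTorch assumed; MaskedConvNet and random_channel_mask are hypothetical names, not from [31]): at each step a fresh random subset of channels is kept, realized here by masking the other channels before the forward pass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_channel_mask(num_channels: int, keep_ratio: float) -> torch.Tensor:
    """0/1 mask that keeps a random subset of the channels."""
    num_keep = max(1, int(num_channels * keep_ratio))
    mask = torch.zeros(num_channels)
    mask[torch.randperm(num_channels)[:num_keep]] = 1.0
    return mask

class MaskedConvNet(nn.Module):
    """Tiny network whose hidden channels can be masked on the fly."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, 3, padding=1)
        self.conv2 = nn.Conv2d(64, 10, 3, padding=1)

    def forward(self, x, mask):
        x = F.relu(self.conv1(x)) * mask.view(1, -1, 1, 1)  # zero out pruned channels
        return self.conv2(x).mean(dim=(2, 3))               # global average pool -> logits

model = MaskedConvNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for step in range(3):                                # stand-in for a full training run
    images = torch.randn(8, 3, 32, 32)               # dummy batch
    labels = torch.randint(0, 10, (8,))
    mask = random_channel_mask(64, keep_ratio=0.5)   # fresh random subset each step
    loss = F.cross_entropy(model(images, mask), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Sampling a new random subset every step trains many sub-networks that share weights, so after training, any one of them can be extracted as the pruned model.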