2020
DOI: 10.1109/jstsp.2020.2975987

Discriminative Layer Pruning for Convolutional Neural Networks


Cited by 36 publications (23 citation statements)
References 27 publications
“…Results show memory reductions of 4.32 MB, 3.69 MB, and 3.9 MB when the system is tested on CIFAR with VGG-16, GoogLeNet, and ResNet, respectively, over 200 epochs, with an accuracy drop under 3%. A layer pruning method is presented in [22]. It prunes the least important layers using Partial Least Squares (PLS) projection as the estimation criterion.…”
Section: A. Network Compression Using Pruning Methods (mentioning)
confidence: 99%
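The PLS-based criterion referred to in this quote can be illustrated with a minimal sketch. It is not the cited paper's exact procedure (the paper projects feature maps with PLS and ranks components by their importance in that projection); here a per-layer PLS fit on pooled activations, scored by its R^2 against the labels, stands in as a simplified discriminability proxy. The helper name `layer_importance` and the use of scikit-learn are assumptions for illustration.

```python
# Minimal, illustrative sketch of scoring layers with Partial Least Squares.
# Assumptions: activations were already collected and global-average-pooled
# per layer; the R^2 of a PLS regression against the labels is used as a
# simplified stand-in for the criterion in the cited paper.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def layer_importance(features_per_layer, labels, n_components=2):
    """Return one discriminability score per layer.

    features_per_layer: list of arrays, one per layer, each (n_samples, n_features).
    labels: integer class labels, shape (n_samples,).
    """
    y = labels.reshape(-1, 1).astype(float)
    scores = []
    for feats in features_per_layer:
        pls = PLSRegression(n_components=min(n_components, feats.shape[1]))
        pls.fit(feats, y)
        scores.append(pls.score(feats, y))  # R^2 of the PLS fit
    return np.array(scores)

# Layers with the lowest scores are candidates for pruning.
```

Layers with the lowest scores would be removed first, after which the network is typically fine-tuned to recover accuracy.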
“…To this end, a line of research called channel pruning has been introduced to remove unnecessary channels in CNNs, accelerating wall-clock time on edge devices while preserving the performance of the CNNs (Wen et al., 2016; Tiwari et al., 2021; Shen et al., 2022). However, with the advancement of hardware technology for parallel computation, channel pruning, which reduces the width of neural networks, has become less effective in terms of latency than removing entire layers (Jordao et al., 2020; Chen & Zhao, 2018; Xu et al., 2020).…”
Section: Preprint (mentioning)
confidence: 99%
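The structural contrast this quote draws, reducing width (channel pruning) versus reducing depth (layer pruning), can be sketched with a hypothetical toy model. The module sizes below are arbitrary and only serve to show that layer pruning removes whole sequential stages rather than narrowing them.

```python
# Toy illustration (PyTorch): channel pruning narrows layers, layer pruning
# removes them. Widths and depths here are arbitrary examples.
import torch
import torch.nn as nn

def block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

original       = nn.Sequential(block(3, 64), block(64, 64), block(64, 64))
channel_pruned = nn.Sequential(block(3, 32), block(32, 32), block(32, 64))  # same depth, narrower
layer_pruned   = nn.Sequential(block(3, 64), block(64, 64))                 # one whole block removed

x = torch.randn(1, 3, 32, 32)
for name, m in [("original", original), ("channel-pruned", channel_pruned), ("layer-pruned", layer_pruned)]:
    params = sum(p.numel() for p in m.parameters())
    print(f"{name:15s} params={params:6d} output={tuple(m(x).shape)}")
```

Both variants shrink the parameter count, but only the layer-pruned model shortens the chain of operations that must execute one after another, which is what the quote associates with latency gains on parallel hardware.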
“…Depth Reduction: There are two lines of research that reduce the depth of neural networks: layer pruning and depth compression. In layer pruning, Jordao et al. (2020) and Chen & Zhao (2018) evaluate the importance of layers by the amount of discriminative information in each feature map. In depth compression, DepthShrinker points out the inefficiency of depth-wise convolutions during inference on edge devices and proposes a depth compression algorithm that replaces inefficient consecutive depth-wise and point-wise convolutions inside the Inverted Residual Block with an efficient dense convolution (Howard et al., 2017; Sandler et al., 2018).…”
Section: Related Work (mentioning)
confidence: 99%
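The replacement described in this quote can be made concrete with a rough sketch: assuming the nonlinearity between a depth-wise convolution and the following point-wise convolution has already been removed (as DepthShrinker-style methods arrange), the pair folds algebraically into a single dense convolution. The helper `merge_dw_pw` below is an illustrative assumption, not code from any of the cited works.

```python
# Sketch: fold a depth-wise conv followed by a 1x1 point-wise conv into one
# dense conv. Only valid when no nonlinearity sits between the two.
import torch
import torch.nn as nn

def merge_dw_pw(dw: nn.Conv2d, pw: nn.Conv2d) -> nn.Conv2d:
    assert dw.groups == dw.in_channels == dw.out_channels, "depth-wise, multiplier 1"
    assert pw.kernel_size == (1, 1) and dw.bias is None, "kept simple for illustration"
    k = dw.kernel_size[0]
    dense = nn.Conv2d(dw.in_channels, pw.out_channels, k,
                      stride=dw.stride, padding=dw.padding, bias=pw.bias is not None)
    with torch.no_grad():
        # dense[o, i] = pw[o, i] * dw[i]  (combine the two kernels channel-wise)
        dense.weight.copy_(pw.weight[:, :, 0, 0, None, None] * dw.weight[:, 0].unsqueeze(0))
        if pw.bias is not None:
            dense.bias.copy_(pw.bias)
    return dense

# Quick numerical check on hypothetical shapes.
dw = nn.Conv2d(16, 16, 3, padding=1, groups=16, bias=False)
pw = nn.Conv2d(16, 32, 1)
x = torch.randn(2, 16, 8, 8)
print(torch.allclose(merge_dw_pw(dw, pw)(x), pw(dw(x)), atol=1e-4))
```

After the merge, a single dense k×k convolution replaces the depth-wise/point-wise pair, removing the operation the quote identifies as inefficient during edge inference.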
“…The above research on the basic theory of pruning provides a reference for its application. In the field of agriculture, many scholars have proposed different methods for different networks to achieve network slimming [27], including network pruning [28], knowledge distillation [29], and model quantization [30]. Among these, network pruning is a simple and effective compression method [31] that obtains a pruned model simply by trimming the number of channels and network layers.…”
Section: Introduction (mentioning)
confidence: 99%