2017 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2017.155
Channel Pruning for Accelerating Very Deep Neural Networks

Abstract: In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhances compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art…
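The two-step procedure described in the abstract can be sketched concretely. Below is a minimal, hypothetical Python sketch of pruning a single layer, assuming sampled input patches X, per-channel weight blocks W, and target outputs Y already in flattened matrix form; the function name, the shapes, and the lambda-doubling search are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def prune_layer(X, W, Y, n_keep, n_search=20):
    """Sketch of two-step channel pruning for one layer (assumed interface).

    X : (N, c, d) sampled input patches, one (N, d) block per input channel
        (d = kernel_h * kernel_w for a convolution)
    W : (c, d, m) layer weights split by input channel (m output channels)
    Y : (N, m) sampled outputs the pruned layer should reproduce
    n_keep : number of input channels to retain
    """
    N, c, d = X.shape

    # Per-channel partial responses Z[:, i, :] = X_i @ W_i; the layer output
    # is sum_i beta_i * Z[:, i, :], with all beta_i = 1 before pruning.
    Z = np.stack([X[:, i, :] @ W[i] for i in range(c)], axis=1)  # (N, c, m)
    A = Z.transpose(0, 2, 1).reshape(-1, c)  # one design column per channel

    # Step 1: LASSO-based channel selection. Grow the l1 penalty until at
    # most n_keep coefficients beta_i remain nonzero.
    lam, beta = 1e-4, np.ones(c)
    for _ in range(n_search):
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        lasso.fit(A, Y.reshape(-1))
        beta = lasso.coef_
        if np.count_nonzero(beta) <= n_keep:
            break
        lam *= 2.0
    keep = np.flatnonzero(beta)

    # Step 2: least-squares reconstruction. Refit the surviving weights so
    # the pruned layer still approximates the original outputs Y.
    Xk = X[:, keep, :].reshape(N, -1)               # (N, len(keep)*d)
    W_new, *_ = np.linalg.lstsq(Xk, Y, rcond=None)  # (len(keep)*d, m)
    return keep, W_new.reshape(len(keep), d, -1)
```

Iterating this per-layer step through the network, as the abstract's multi-layer generalization suggests, is what keeps the accumulated reconstruction error in check.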

Cited by 2,232 publications (1,729 citation statements)
References: 56 publications
“…They visualize the feature maps extracted by different filters and view each filter as a visual unit focusing on different visual components. … of the ResNet-50 [28], and meanwhile save more than 75% of parameters and 50% computational time. In the literature, approaches for compressing the deep networks can be classified into five categories: parameter pruning [26,29,30,31], parameter quantizing [32,33,34,35,36,37,38,39,40,41], low-rank parameter factorization [42,43,44,45,46], transferred/compact convolutional filters [47,48,49,50], and knowledge distillation [51,52,53,54,55,56]. The parameter pruning and quantizing mainly focus on eliminating the redundancy in the model parameters respectively by removing the redundant/uncritical ones or compressing the parameter space (e.g.…”
mentioning
confidence: 99%
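For intuition, the first two categories in that taxonomy can be illustrated in a few lines. The Python sketch below shows generic magnitude-based pruning and uniform quantization; both are common textbook baselines chosen purely for illustration, not the specific methods in the references cited in the snippet above.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.75):
    """Generic parameter pruning: zero the smallest-magnitude entries
    (an illustrative baseline, not the cited methods themselves)."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    # Threshold at the k-th smallest absolute value over the flattened array.
    thresh = np.partition(np.abs(w), k - 1, axis=None)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def uniform_quantize(w, bits=8):
    """Generic parameter quantizing: snap weights to 2**bits uniform levels,
    compressing the parameter space instead of removing entries."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # guard against constant w
    return lo + np.round((w - lo) / scale) * scale
```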
“…iii) Knowledge distillation with mask guided [7]. iv) Channel pruning with LASSO-based channel selection [4,23]. Fig.…”
Section: Performance Comparison
mentioning
confidence: 99%
“…the most crucial strategy. As for the network compression problem, the typical solutions are designed to slim [27,37] the network directly, or quantify their parameters distributions [14,16,25], and filter the redundant layer dimensions [9,21]. In contrast to these techniques which aim at directly compressing the network while preserving its performance as much as possible, an alternative solution is to preset a smaller target network as the student, and employ the knowledge from the larger network as teacher to improve student's performance.…”
Section: Sparse Recoding
mentioning
confidence: 99%
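The teacher-student alternative described in this last snippet is commonly implemented as a blended loss over the teacher's softened outputs and the true labels. Below is a generic PyTorch sketch in the style of Hinton et al.'s knowledge distillation; the temperature T and mixing weight alpha are assumed hyperparameters, and this is not the specific mask-guided variant cited above.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=4.0, alpha=0.9):
    """Blend a soft-target term (match the teacher's temperature-softened
    distribution) with a hard-target term (true labels)."""
    # KL divergence between softened student and teacher distributions;
    # the T*T factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Standard cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In a typical training loop the teacher runs in eval mode under torch.no_grad(), and only the smaller student network receives gradient updates.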