2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/CVPR.2019.00290

Towards Optimal Structured CNN Pruning via Generative Adversarial Learning

Abstract: Structured pruning of filters or neurons has received increased focus for compressing convolutional neural networks. Most existing methods rely on multi-stage, layer-wise optimization that iteratively prunes and retrains, which may be suboptimal and computationally intensive. Besides, these methods are designed to prune a specific structure, such as filters or blocks, without jointly pruning heterogeneous structures. In this paper, we propose an effective structured pruning approach…
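The truncated abstract points at the paper's core mechanism: a learnable soft mask scales the output of each prunable structure, and the pruned network is trained so that a discriminator cannot tell its features from the baseline network's, while a sparsity penalty drives mask entries toward zero. Below is a minimal sketch of that idea; the SoftMask wrapper, the loss split, and all hyperparameters are illustrative assumptions, not the paper's exact formulation (which also covers branches and blocks and solves the sparse mask with a specialized optimizer).

```python
import torch
import torch.nn as nn

class SoftMask(nn.Module):
    # Scales a structure's output by a learnable per-channel soft mask;
    # entries driven to zero by the sparsity penalty mark pruned filters.
    def __init__(self, n_channels):
        super().__init__()
        self.mask = nn.Parameter(torch.ones(n_channels))

    def forward(self, x):                      # x: (N, C, H, W)
        return x * self.mask.view(1, -1, 1, 1)

bce = nn.BCEWithLogitsLoss()

def discriminator_step(disc, f_base, f_pruned):
    # The discriminator learns to separate baseline features (label 1)
    # from pruned-network features (label 0); pruned features are
    # detached so this step only updates the discriminator.
    real, fake = disc(f_base), disc(f_pruned.detach())
    return bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))

def generator_step(disc, f_pruned, masks, lam=1e-2):
    # The pruned network tries to make its features indistinguishable
    # from the baseline's, while an L1 term sparsifies the soft masks.
    fake = disc(f_pruned)
    return bce(fake, torch.ones_like(fake)) + lam * sum(m.mask.abs().sum() for m in masks)
```

Alternating the two steps trains masks and discriminator jointly, end to end, which is what lets this family of methods avoid the multi-stage, layer-wise prune-and-retrain loop the abstract criticizes.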

Cited by 470 publications (295 citation statements)
References 40 publications

“…Liu et al [37] used the scaling factor γ in the normalization layer to impose sparsity constraints, measured channel importance during training, filtered out channels with low scores, and pruned layer by layer. Huang et al [38] and Lin et al [39] proposed sparse-regularization mask methods for channel pruning; the mask is optimized via data-driven selection or generative adversarial learning. Zhao et al [40] further developed norm-based importance estimation by taking the dependency between adjacent layers into consideration and proposed a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.…”
Section: Pruning Methods
confidence: 99%
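The first technique in this quote, pruning by the batch-norm scaling factor γ, is concrete enough to sketch. Below is a minimal PyTorch illustration of the idea as described: an L1 penalty on γ during training, then a global threshold on |γ| to mark low-scoring channels. The penalty weight, pruning ratio, and helper names are assumptions for illustration, not the cited authors' code.

```python
import torch
import torch.nn as nn

def sparsity_penalty(model, lam=1e-4):
    # L1 penalty on BN scaling factors (gamma); added to the task loss
    # during training, it pushes unimportant channels' gamma toward zero.
    return lam * sum(m.weight.abs().sum()
                     for m in model.modules() if isinstance(m, nn.BatchNorm2d))

def channel_keep_masks(model, prune_ratio=0.5):
    # Rank every channel in the network by |gamma| and derive a single
    # global threshold; channels below it are marked for pruning.
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)
    return {name: m.weight.detach().abs() > threshold      # True = keep
            for name, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}
```

Because the threshold is global rather than per layer, layers whose channels are uniformly unimportant get pruned more aggressively, which matches the layer-by-layer filtering behavior the quote describes.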
“…However, this assumption is not always satisfied. New approaches have also appeared based on neural architecture search (NAS) [36], [37] and generative models [38], but their heavy computational cost limits their use in practical applications.…”
Section: Related Work
confidence: 99%
“…So, when applying a general pruning technique to VGG16, the performance degradation is usually negligible. Fine-tuning may even result in the pruned network outperforming the given pretrained network. In fact, we can observe such a phenomenon in GAL [38] and GBN [31].…”
Section: Pruning Performance Evaluation
confidence: 99%
“…These methods can minimize the local reconstruction error but may increase the global reconstruction error. In addition, generative adversarial learning [41] and meta-learning [42] have also been applied to improve pruning algorithms and reduce computational complexity. However, how to adaptively and efficiently select important filters remains a challenge.…”
Section: Introduction
confidence: 99%
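The "local reconstruction error" this quote contrasts with global error can be made concrete: per-layer channel selection keeps the subset of input channels that best reconstructs that single layer's original output, ignoring downstream layers. A greedy toy version under assumed shapes follows; the loop and names are illustrative, not any cited paper's exact algorithm.

```python
import torch

def select_channels(X, W, keep):
    # X: (N, C) per-channel responses at this layer's input; W: (C, M)
    # filter weights; Y = X @ W is the original output to reconstruct.
    Y = X @ W
    kept = list(range(X.shape[1]))
    while len(kept) > keep:
        # Drop the channel whose removal least increases the local
        # reconstruction error ||Y - X_S W_S||^2 for this layer alone.
        errs = []
        for c in kept:
            trial = [k for k in kept if k != c]
            err = (Y - X[:, trial] @ W[trial]).pow(2).sum().item()
            errs.append((err, c))
        kept.remove(min(errs)[1])
    return kept
```

Because the criterion looks only at this layer's output, channels that matter for later layers can still be dropped, which is exactly the local-versus-global gap the quote points out.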