2020
DOI: 10.48550/arxiv.2005.10451
Preprint

CPOT: Channel Pruning via Optimal Transport

Yucong Shen, Li Shen, Hao-Zhi Huang, et al.

Abstract: Recent advances in deep neural networks (DNNs) have led to tremendous growth in network parameters, making the deployment of DNNs on platforms with limited resources extremely difficult. Therefore, various pruning methods have been developed to compress deep network architectures and accelerate inference. Most existing channel pruning methods discard the less important filters according to well-designed filter ranking criteria. However, due to the limited interpretability of deep learning m…
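The abstract contrasts CPOT with conventional criterion-based channel pruning, in which filters are ranked by an importance score and the lowest-ranked ones are discarded. For orientation, here is a minimal sketch of that baseline using the common L1-norm criterion on a PyTorch Conv2d layer; it illustrates the ranking-based approach the paper argues against, not the optimal-transport method of CPOT itself, and the helper name prune_conv_by_l1 is hypothetical.

```python
import torch
import torch.nn as nn

def prune_conv_by_l1(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Keep the filters with the largest L1 norms and drop the rest.

    Sketch of the criterion-based pruning baseline described in the
    abstract; NOT the optimal-transport method proposed by CPOT.
    """
    weight = conv.weight.data                  # (out_ch, in_ch, kH, kW)
    scores = weight.abs().sum(dim=(1, 2, 3))   # L1 norm of each filter
    n_keep = max(1, int(keep_ratio * weight.size(0)))
    keep = torch.topk(scores, n_keep).indices.sort().values

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = weight[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

# Usage: halve the filters of a 64-channel convolution.
layer = nn.Conv2d(3, 64, kernel_size=3, padding=1)
print(prune_conv_by_l1(layer))   # -> Conv2d(3, 32, kernel_size=(3, 3), ...)
```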



Cited by 2 publications (3 citation statements: 0 supporting, 3 mentioning, 0 contrasting). References: 17 publications.
“…The mainstream of conditional GAN compression [26], [27], [28], [29], [30], [31], [32], [33], [49] exploits KD, transferring the knowledge of teacher generator to student generator. Besides KD, several works [14], [16], [19] exploit network pruning. In addition, there exist several works [24], [25] developing evolutionary compression with inferior results compared with KD-based approaches.…”
Section: Related Work (mentioning; confidence: 99%)
“…To make GANs easy to be deployed on computational resource limited devices, extensive works have been proposed to obtain lightweight GANs. The mainstreaming approach is to inherit the model compression techniques developed for image-classification task to compress GANs, such as weight pruning [14], weight quantization [15], channel pruning [16], [17], [18], [19], lightweight GAN architecture search/design [20], [21], [22], [23], evolutionary compression [24], [25], and knowledge distillation (KD) [26], [27], [28], [29], [30], [31], [32], [33]. However, most of the above works focus on compressing conditional (cycle) GANs for image-to-image generation tasks, scarce works have been proposed for compressing vanilla GANs except recent works [8], [17], [18], [27], [34].…”
Section: Introduction (mentioning; confidence: 99%)
“…Informally, WB allows us to define for example, an average image when interpreted as a discrete probability distribution. Such flexibility, along side the geometric and statistical properties of WB has led to a large number of applications, e.g., image morphing and image interpolation of natural images [57], averaging atmospheric gas concentration data [6], graph representation learning [58], fairness in ML [13], geometric clustering [48], Bayesian learning [4], stain normalization and augmentation [49], probability and density forecast combination [15], multimedia analysis and fusion [36], unsupervised multilingual alignment [44], clustering patterns for COVID-19 dynamics [51], channel pruning [56], and many others.…”
Section: Introduction (mentioning; confidence: 99%)
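The last quote treats the Wasserstein barycenter (WB) as a geometrically meaningful average of discrete probability distributions. A minimal sketch of that idea, assuming the POT library's ot.barycenter solver; the two Gaussian histograms and the 1-D grid are illustrative assumptions, not data from any cited work.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

# Two discrete distributions (normalized histograms) on a 1-D grid.
n = 100
x = np.arange(n, dtype=np.float64)
a = np.exp(-0.5 * ((x - 20.0) / 5.0) ** 2)   # bump centered at bin 20
b = np.exp(-0.5 * ((x - 70.0) / 5.0) ** 2)   # bump centered at bin 70
a /= a.sum()
b /= b.sum()

# Ground cost between bins: squared distance, normalized for stability.
M = (x[:, None] - x[None, :]) ** 2
M /= M.max()

# Entropically regularized Wasserstein barycenter via Sinkhorn iterations.
A = np.vstack((a, b)).T                       # histograms as columns
bary = ot.barycenter(A, M, reg=1e-2, weights=np.array([0.5, 0.5]))

# A Euclidean average of a and b keeps two separate bumps; the
# Wasserstein barycenter is a single bump between them (near bin 45).
print(int(np.argmax(bary)))
```

This contrast between the Euclidean average and the barycenter is what the quote means by an "average image": interpolating mass along the geometry of the ground cost rather than mixing the inputs pointwise.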