GAN Slimming: All-in-One GAN Compression by a Unified Optimization Framework
2020
DOI: 10.1007/978-3-030-58548-8_4

Cited by 74 publications (62 citation statements). References 38 publications.
“…The mainstream of conditional GAN compression [26], [27], [28], [29], [30], [31], [32], [33], [49] exploits KD, transferring the knowledge of a teacher generator to a student generator. Besides KD, several works [14], [16], [19] exploit network pruning. In addition, several works [24], [25] develop evolutionary compression, which yields inferior results compared with KD-based approaches.…”
Section: Related Work
confidence: 99%
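As a rough illustration of the teacher-student distillation idea in the excerpt above (a minimal sketch, not the formulation of any cited work; the module names, the hinge-style adversarial term, and the pixel-level L1 distillation term are assumptions):

```python
import torch
import torch.nn.functional as F

def kd_generator_loss(student_G, teacher_G, D, z, lambda_kd=10.0):
    """Hedged sketch of knowledge distillation for GAN compression:
    the small student generator mimics the frozen teacher's outputs
    while still being trained adversarially against the discriminator."""
    fake_student = student_G(z)
    with torch.no_grad():                      # teacher is frozen, no gradients
        fake_teacher = teacher_G(z)
    adv_loss = -D(fake_student).mean()         # generator side of a hinge-style GAN loss
    kd_loss = F.l1_loss(fake_student, fake_teacher)  # match teacher outputs pixel-wise
    return adv_loss + lambda_kd * kd_loss
```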
“…To make GANs easy to deploy on devices with limited computational resources, extensive works have been proposed to obtain lightweight GANs. The mainstream approach is to inherit the model compression techniques developed for image classification, such as weight pruning [14], weight quantization [15], channel pruning [16], [17], [18], [19], lightweight GAN architecture search/design [20], [21], [22], [23], evolutionary compression [24], [25], and knowledge distillation (KD) [26], [27], [28], [29], [30], [31], [32], [33]. However, most of the above works focus on compressing conditional (cycle) GANs for image-to-image generation tasks; few works have been proposed for compressing vanilla GANs, except for recent works [8], [17], [18], [27], [34].…”
Section: Introduction
confidence: 99%
“…Although it is theoretically possible to employ this method for low-bitwidth inference in generative models, the non-uniform scheme it utilizes complicates deployment of this framework on edge devices. Perhaps the most relevant method was described in [26]. The authors applied uniform quantization to both weights and activations, combining quantization with pruning and knowledge distillation in a unified optimization framework.…”
Section: Quantization Of Generative Adversarial Network
confidence: 99%
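For concreteness, a minimal sketch of uniform (min-max) fake quantization of the kind described here, applicable to both weights and activations during training; the function name, per-tensor granularity, and clipping choices are assumptions, not the exact scheme of [26]:

```python
import torch

def uniform_fake_quantize(x, num_bits=8):
    """Hedged sketch of uniform quantization: snap x onto a regular grid
    of 2**num_bits levels between its min and max, then de-quantize so
    downstream layers still operate on floating-point tensors."""
    qmax = 2 ** num_bits - 1
    x_min, x_max = x.min(), x.max()
    scale = (x_max - x_min).clamp(min=1e-8) / qmax    # avoid division by zero
    q = ((x - x_min) / scale).round().clamp(0, qmax)  # integer grid index
    return q * scale + x_min                          # de-quantized value
```

In quantization-aware training, the non-differentiable round() is typically bypassed on the backward pass with a straight-through estimator so gradients still reach the full-precision weights.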
“…Hence, we make use of quantization-aware training to further reduce the quality degradation associated with quantization. Similarly to [26], we employ a sum of adversarial and reconstruction losses for quantization-aware training:…”
Section: Quantization-aware Training
confidence: 99%
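The equation in this excerpt is truncated; a generic form such combined objectives often take, written here purely as an assumption (the weight λ, the L1 reconstruction metric, and the full-precision generator G_fp as the reconstruction target are illustrative, not taken from the cited work), is:

\mathcal{L}_{\mathrm{QAT}} = \mathcal{L}_{\mathrm{adv}}\big(G_{q}\big) + \lambda \,\lVert G_{q}(z) - G_{\mathrm{fp}}(z) \rVert_{1},

where G_q denotes the quantized generator being trained and G_fp a frozen full-precision reference.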
“…However, they fell into the trap of manually pre-defined search spaces and huge search costs. The prune-based methods [38,31,57,61] directly pruned a lightweight generator architecture from the original generator architecture. However, these works failed to take discriminator pruning into account, which would seriously destroy the Nash equilibrium between the generator and the discriminator.…”
Section: Related Work
confidence: 99%