2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00333
Co-Evolutionary Compression for Unpaired Image Translation

Abstract: Generative adversarial networks (GANs) have been successfully used for considerable computer vision tasks, especially the image-to-image translation. However, generators in these networks are of complicated architectures with large number of parameters and huge computational complexities. Existing methods are mainly designed for compressing and speeding-up deep neural networks in the classification task, and cannot be directly applied on GANs for image translation, due to their different objectives and training…

Citations: Cited by 78 publications (78 citation statements)
References: 36 publications
“…It can compress state-of-the-art conditional GANs by 5-21×, and reduce the model size by 4-33×, with only negligible degradation in the model performance. Specifically, our proposed method shows a clear advantage of CycleGAN compression compared to the previous Co-Evolution method [60]. We can reduce the computation of the CycleGAN generator by 21.2×, which is 5× better compared to the previous CycleGAN-specific method [60], while achieving a better FID by more than 30.…”
Section: Quantitative Results (mentioning)
confidence: 85%
“…The FID difference between the two protocols is small. The FIDs for the original model, Shu et al. [60], and our compressed model are 65.48, 96.15, and 69.54 using their protocol.…”
Section: Quantitative Results (mentioning)
confidence: 99%
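The citation statements above compare compressed generators by FID. As a reference for how that metric is defined, here is a minimal sketch of the Fréchet Inception Distance between two sets of feature vectors; the random features and the `fid` helper are illustrative assumptions — in practice the features come from an Inception-v3 activation layer over real and generated images.

```python
# Hedged sketch of the Fréchet Inception Distance (FID):
#   FID = ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2})
# where mu, S are the mean and covariance of the feature vectors.
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a, feats_b):
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):          # discard tiny imaginary residue
        covmean = covmean.real
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
x = rng.normal(size=(512, 8))             # stand-in for Inception features
print(fid(x, x))                          # ≈ 0 for identical feature sets
print(fid(x, x + 5.0))                    # grows with the mean shift
```

A lower FID means the compressed generator's output distribution stays closer to the original's, which is why the snippets above report it alongside the compression ratios.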
“…To this end, we propose an end-to-end compression framework based on CPD. Compared to Shu et al. [32] and Li et al. [33], we do not need to pretrain a GAN model. We design and train the compression model from scratch.…”
(mentioning)
confidence: 99%
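The statement above compresses generators via canonical polyadic decomposition (CPD). As a rough illustration of why CPD shrinks a network, here is a hedged sketch that counts the weights of a dense 4-D convolution kernel against its rank-R CP factor matrices; the layer shape and the rank are illustrative assumptions, not values from the cited work.

```python
# Hedged sketch: parameter count of a dense conv kernel of shape
# (c_out, c_in, k, k) versus a rank-R CP decomposition, which replaces
# the kernel with four factor matrices of sizes
# (c_out x R), (c_in x R), (k x R), (k x R).
def cpd_params(c_out, c_in, k, rank):
    full = c_out * c_in * k * k                   # dense kernel weights
    factored = rank * (c_out + c_in + k + k)      # four CP factor matrices
    return full, factored

# Illustrative layer: 256 -> 256 channels, 3x3 kernel, CP rank 32.
full, fact = cpd_params(256, 256, 3, 32)
print(full, fact, round(full / fact, 1))          # compression ratio
```

The savings grow with the channel counts because the dense kernel scales as c_out × c_in while each CP factor scales only linearly in c_out or c_in, which is what makes a from-scratch CPD parameterization attractive for large generators.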