2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021
DOI: 10.1109/cvpr46437.2021.00783
Regularizing Generative Adversarial Networks under Limited Data

Cited by 105 publications (85 citation statements)
References 24 publications
“…Deep generative models [12,17,29,34,41,45,46,53] have achieved great success in image synthesis tasks. GAN-based methods demonstrate an impressive capability to yield high-fidelity samples [4,17,27,44,53]. In contrast, likelihood-based methods, such as Variational Autoencoders (VAEs) [29,45], Diffusion Models [12,24,41], and Autoregressive Models [34,46], offer distribution coverage and hence can generate more diverse samples [41,45,46].…”
Section: Image Synthesis
confidence: 99%
“…We provide comprehensive experiments (10% and 20% data) in §C of the supplementary material. We summarize our findings as follows: 1) the augmentation-based ADA [20] and DA [60] leak augmentation artifacts to the generator, while ADA+LCSA and DA+LCSA alleviate this issue; 2) LCSA harmonizes with ADA, DA, and the LeCam loss [47]; 3) we achieve the state of the art in this limited-data setting.…”
Section: Results of Image Generation
confidence: 75%
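The LeCam loss [47] referenced in the statement above is a discriminator regularizer for limited-data GAN training: it pulls the discriminator's outputs on real samples toward an exponential moving average (EMA) of its outputs on generated samples, and vice versa, bounding the discriminator and curbing overfitting. A minimal NumPy sketch follows; the names `lecam_reg` and `EMA` are illustrative, not from the cited implementation.

```python
import numpy as np

def lecam_reg(d_real, d_fake, ema_fake, ema_real):
    """LeCam regularization term: squared distance between current
    discriminator outputs and EMA anchors of the opposite class.
    Added to the discriminator loss with a small weight (e.g. 0.01)."""
    return np.mean((d_real - ema_fake) ** 2) + np.mean((d_fake - ema_real) ** 2)

class EMA:
    """Exponential moving average of scalar discriminator outputs,
    updated once per training step."""
    def __init__(self, decay=0.99):
        self.decay = decay
        self.value = 0.0

    def update(self, outputs):
        self.value = self.decay * self.value + (1 - self.decay) * float(np.mean(outputs))
        return self.value
```

In training, `ema_real` and `ema_fake` would each track the mean discriminator output on real and fake batches respectively, and `lecam_reg` would be added to the discriminator's adversarial loss.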
“…We build on BigGAN [5], Omni-GAN [63], MSG-StyleGAN [18], and StyleGAN2 [22], but we equip the discriminator with the manifold learner, which is meta-controlled to reduce the overfitting that typically occurs in the discriminator rather than the generator [52]. In §C of the supplementary material, we also investigate the combination of our method with DA [60], ADA [20], and LeCam-GAN [47] in the limited-data scenario.…”
Section: Problem Formulation
confidence: 99%