2020
DOI: 10.48550/arxiv.2002.10964
Preprint

Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs

Sangwoo Mo, Minsu Cho, Jinwoo Shin

Abstract: Generative adversarial networks (GANs) have shown outstanding performance on a wide range of problems in computer vision, graphics, and machine learning, but often require numerous training data and heavy computational resources. To tackle this issue, several methods introduce a transfer learning technique in GAN training. They, however, are either prone to overfitting or limited to learning small distribution shifts. In this paper, we show that simple fine-tuning of GANs with frozen lower layers of the discriminator…
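The technique named in the title and abstract, fine-tuning a pretrained GAN while keeping the lower layers of the discriminator frozen, can be illustrated with a minimal sketch. The snippet below is not the authors' released code; it assumes a PyTorch discriminator whose blocks sit in an ordered container, and the attribute name D.blocks and the cutoff value are hypothetical placeholders.

import torch
import torch.nn as nn

def freeze_lower_discriminator_layers(blocks, num_frozen):
    # Freeze the first `num_frozen` blocks (the lower, more generic feature
    # extractors) so that only the upper layers of D receive gradient updates
    # during fine-tuning on the target data.
    for i, block in enumerate(blocks):
        requires_grad = i >= num_frozen
        for p in block.parameters():
            p.requires_grad = requires_grad

# Hypothetical usage (D.blocks, G, and the cutoff are placeholders):
# freeze_lower_discriminator_layers(D.blocks, num_frozen=4)
# d_opt = torch.optim.Adam((p for p in D.parameters() if p.requires_grad), lr=2e-4)
# g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)
# ...then run the usual adversarial fine-tuning loop on the small target dataset.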

Cited by 52 publications (108 citation statements: 1 supporting, 107 mentioning, 0 contrasting)
References 26 publications
“…However, as discussed above, the transferring difficulty increases drastically when given only one training sample. Existing methods attempt to address this issue by reducing the number of learnable parameters (Mo et al., 2020; Robb et al., 2020) and introducing training regularizers (Ojha et al., 2021). Even so, the overall fine-tuning scheme (i.e., directly tuning G(•) and D(•)) remains and the diversity is low…”
Section: One-shot Generative Domain Adaptation (citation type: mentioning)
confidence: 99%
“…Domain adaptation is a commonly used technique that can apply an algorithm developed on one data domain to another (Csurka, 2017). Prior works (Wang et al., 2018; Noguchi & Harada, 2019; Wang et al., 2020; Mo et al., 2020; Zhao et al., 2020a; Li et al., 2020; Robb et al., 2020) have introduced this technique to GAN training to alleviate the tough requirement on the data scale. Typically, they first train a large-scale model in the source domain with adequate data, and then transfer it to the target domain with only a few samples…”
Section: Introduction (citation type: mentioning)
confidence: 99%
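The two-stage recipe summarized in the passage above (pretrain on a data-rich source domain, then transfer to a target domain with few samples) might look roughly like the following sketch; the checkpoint path and dictionary keys are hypothetical, not taken from any of the cited works.

import torch
import torch.nn as nn

def load_source_pretrained(G: nn.Module, D: nn.Module, ckpt_path: str):
    # Restore generator/discriminator weights pretrained on the large source
    # domain; the checkpoint keys here are assumed, not standardized.
    ckpt = torch.load(ckpt_path, map_location="cpu")
    G.load_state_dict(ckpt["generator"])
    D.load_state_dict(ckpt["discriminator"])
    return G, D

# Stage 2 (sketch): fine-tune the restored G and D on the small target-domain
# dataset with the usual adversarial losses, typically adding regularization or
# freezing part of the networks to limit overfitting.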
“…GANs are capable of image generation in two categories: low-resolution [4, 8, 12, 13, 15, 16, 23] and high-resolution [14, 17, 31–41]. A summary of these approaches is presented in Table 1…”
Section: Related Work (citation type: mentioning)
confidence: 99%
“…Transfer learning on generative models for limited data has been the subject of study for the last three years [33, 34, 38–41], focusing on evaluating the impact of freezing the lower generator layers [33, 34], the lower discriminator layers [39], and both the generator and discriminator lower layers [40], using mainly general-purpose datasets of indoor scenes (e.g., LSUN Bedrooms) and faces (e.g., CelebA-HQ, FFHQ, CelebA). The results show a reduction in overfitting and in training time derived from the knowledge transfer…”
Section: Related Work (citation type: mentioning)
confidence: 99%
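As a rough illustration of the freezing variants compared in the passage above (lower generator layers, lower discriminator layers, or both), the helper below freezes an explicit list of blocks. The attributes G.blocks / D.blocks and the split index k are assumptions about the architecture, not details from the cited papers.

import torch.nn as nn

def freeze_blocks(blocks):
    # Turn off gradients for the given (lower) blocks; everything else keeps training.
    for block in blocks:
        for p in block.parameters():
            p.requires_grad = False

# Hypothetical usage, assuming G.blocks and D.blocks run from the lower layers upward:
# freeze_blocks(G.blocks[:k])                               # lower generator layers
# freeze_blocks(D.blocks[:k])                               # lower discriminator layers (FreezeD-style)
# freeze_blocks(list(G.blocks[:k]) + list(D.blocks[:k]))    # both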