2021
DOI: 10.48550/arxiv.2104.06820
Preprint

Few-shot Image Generation via Cross-domain Correspondence

Abstract: [Figure 1 panel labels: Training images; Resulting generator; GAN adaptation; …to paintings; …to babies] Figure 1: Given a model trained on a large source dataset (Gs), we propose to adapt it to arbitrary image domains, so that the resulting model (Gs→t) captures these target distributions using extremely few training samples. In the process, our method discovers a one-to-one relation between the distributions, where noise vectors map to corresponding images in the source and target. Consequently, one can imagine how a natural face woul…
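As a rough illustration of the correspondence property the abstract describes (not the paper's training procedure), the sketch below feeds the same noise vectors to a source generator and an adapted one to obtain paired images; the generator objects, latent dimensionality, and function name are assumptions made for this sketch.

```python
# Minimal sketch (assumed PyTorch-style generators, not the paper's code):
# the same noise vector z is mapped by the source model Gs and the adapted
# model Gs->t to corresponding images in the two domains.
import torch

@torch.no_grad()
def sample_corresponding_pairs(g_source: torch.nn.Module,
                               g_adapted: torch.nn.Module,
                               n: int = 8, z_dim: int = 512,
                               device: str = "cpu"):
    """Return (source_images, target_images) generated from shared noise."""
    z = torch.randn(n, z_dim, device=device)
    # Same z in both generators -> one-to-one related outputs across domains.
    return g_source(z), g_adapted(z)
```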

Cited by 4 publications (15 citation statements)
References 32 publications
“…The transfer-learning works are able to train with extremely few images, as few as none [16]. They typically do so by regularizing training, either with a dedicated loss [30,34] or by restricting which weights are trained [16,33,40]. We note that such works obtain great results on "artistic" domains, such as caricatures.…”
Section: Few-shot Generative Models (mentioning, confidence: 99%)
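The weight-restriction strategy mentioned in the statement above can be pictured with a short, generic sketch; the parameter-name prefixes and learning rate below are illustrative assumptions, not the exact recipes of [16], [33], or [40].

```python
# Generic illustration of few-shot fine-tuning by restricting which weights
# are trained: freeze most of a pretrained generator and update only a chosen
# subset of submodules. The generator and the choice of trainable prefixes are
# assumptions for this sketch.
import torch

def restrict_trainable(generator: torch.nn.Module,
                       trainable_prefixes=("to_rgb",), lr: float = 2e-4):
    """Freeze every parameter whose name does not start with a trainable prefix."""
    for name, param in generator.named_parameters():
        param.requires_grad = name.startswith(trainable_prefixes)
    # Hand only the unfrozen parameters to the optimizer.
    return torch.optim.Adam(
        (p for p in generator.parameters() if p.requires_grad), lr=lr
    )
```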
“…The subsets are of sizes 10, 50, 100 and 200. For each subset, we estimate the diversity by computing the average pairwise LPIPS [34]. Additionally, we tune G p and invert 20 test images following the projection protocol in Section 3.3.…”
Section: Effect of Dataset Size and Diversity (mentioning, confidence: 99%)
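The diversity estimate described in this statement, average pairwise LPIPS over a subset, can be sketched as follows; it assumes the `lpips` package and images already loaded as float tensors in [-1, 1] of shape (3, H, W), and the function name is ours.

```python
# Average pairwise LPIPS over a small set of images (higher = more diverse).
import itertools
import torch
import lpips

def average_pairwise_lpips(images, net="alex"):
    """Mean LPIPS distance over all unordered pairs of images."""
    loss_fn = lpips.LPIPS(net=net)  # perceptual distance of Zhang et al.
    dists = []
    with torch.no_grad():
        for a, b in itertools.combinations(images, 2):
            d = loss_fn(a.unsqueeze(0), b.unsqueeze(0))  # shape (1, 1, 1, 1)
            dists.append(d.item())
    return sum(dists) / len(dists)
```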
“…Fine-tuning and Catastrophic Forgetting: Fine-tuning was proven advantageous across fields, settings and tasks and therefore became a standard practice in the deep learning literature. Prominent advantages of fine-tuning are enabling few-shot tasks such as classification and unconditional generation (Wang et al., 2018b; Mo et al., 2020; Wang et al., 2020; Ojha et al., 2021), improved performance in a wide variety of tasks (Devlin et al., 2018; Radford et al., 2018; He et al., 2020) and faster training convergence (Wang et al., 2018b; …).…”
Section: Related Work (mentioning, confidence: 99%)
“…We next experiment with even farther domains, with barely any similarity between parent and child, such as human faces and churches, which were also examined by Ojha et al. (2021). Despite the lack of commonality, the latent direction that controls face pose in the parent still controls the church pose in the child model (see Figure 14).…”
Section: Analysis of Aligned StyleGAN Models (mentioning, confidence: 99%)
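A hedged sketch of the probe described in this last statement: apply the same latent-space direction to the parent generator and its fine-tuned child and compare the renderings. How the models and the pose direction are obtained (e.g. from a GANSpace/SeFa-style analysis) is left as an assumption; the names below are ours.

```python
# Apply one shared latent direction to a parent generator and its fine-tuned
# child to check whether it controls the analogous attribute in both models.
import torch

@torch.no_grad()
def edit_both(parent: torch.nn.Module, child: torch.nn.Module,
              w: torch.Tensor, direction: torch.Tensor,
              alphas=(-3.0, 0.0, 3.0)):
    """Return {alpha: (parent_img, child_img)} for w moved along `direction`."""
    out = {}
    for a in alphas:
        w_edit = w + a * direction
        out[a] = (parent(w_edit), child(w_edit))  # same latent code in both models
    return out
```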