2019
DOI: 10.48550/arxiv.1912.05270
Preprint

MineGAN: effective knowledge transfer from GANs to target domains with few images

Abstract: One of the attractive characteristics of deep neural networks is their ability to transfer knowledge obtained in one domain to other related domains. As a result, high-quality networks can be trained in domains with relatively little training data. This property has been extensively studied for discriminative networks but has received significantly less attention for generative models. Given the often enormous effort required to train GANs, both computationally as well as in the dataset collection, the re-use …

Cited by 3 publications (5 citation statements)
References 24 publications
“…• MineGAN [44]: To avoid overfitting of the generator, MineGAN suggests fixing the generator and modifying the latent codes. To this end, MineGAN trains a miner network that transforms the latent code into another latent code.…”
Section: Methods (mentioning)
confidence: 99%
“…In our experiments, MineGAN totally fails to adapt to the target distribution. Note that MineGAN assumes the source distribution covers (or is at least close to) the target distribution (e.g., adult faces to child faces, as in the original paper [44]), but it cannot be applied if the distributions have disjoint support (e.g., human faces to dog faces).…”
Section: B. Comparison To Feature Distillation (mentioning)
confidence: 99%
“…They show that they can add a new class to a pre-trained generator without disturbing the performance of the original domain. Wang et al. (2020) propose to use a miner network that identifies which distribution among multiple pre-trained GANs is the most beneficial for a specific target. This mining pushes the sampling towards more suitable regions of the latent space.…”
Section: Transfer Learning (mentioning)
confidence: 99%