2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00293
One-Shot GAN: Learning to Generate Samples from Single Images and Videos

Abstract: Figure 1 (panels: training video & generated samples from a single video; training image & generated samples from a single image): One-Shot GAN images generated from a single video or a single image. Our model successfully operates in different one-shot settings, including learning from a single video (first two rows) or a single image (last three rows), generating new scene compositions with varying content and layout. For example, from the single training video with a car on the road, One-Shot GAN generates images with…

Cited by 37 publications (29 citation statements)
References 36 publications
“…The impressive performance of SinGAN proves the feasibility of internal learning for the generation task. Since then, many SIG models (Hinz et al. 2021; Chen et al. 2021; Zhang, Han, and Guo 2021; Sushko, Gall, and Khoreva 2021; Granot et al. 2021) have been proposed to explore this line.…”
Section: Random Syntheses (mentioning)
confidence: 99%
“…MO-GAN (Chen et al. 2021) follows SinGAN but synthesizes the hand-marked regions of interest and the rest of the image separately, then merges them into an unbroken image. One-Shot GAN (Sushko, Gall, and Khoreva 2021) is an end-to-end model with multiple discriminators for learning different features of the image, but it is not fully convolutional and is more like a conventional GAN with diversity regularization (Yang et al. 2019). ExSinGAN (Zhang, Han, and Guo 2021) introduces GAN inversion (Pan et al. 2020) and perceptual loss (Johnson, Alahi, and Fei-Fei 2016) into SinGAN to improve performance on non-texture images.…”
Section: Related Work (mentioning)
confidence: 99%
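The diversity regularization mentioned in the quote above (Yang et al. 2019) can be sketched in a few lines. The toy linear generator, shapes, and clamp value below are illustrative assumptions, not the cited method's exact form: the idea is to reward the generator for mapping distinct latent codes to distinct outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator": a fixed random linear map from latent codes to outputs.
# Purely illustrative; not the architecture of any cited model.
W = rng.standard_normal((16, 4))

def G(z):
    return W @ z

# Diversity regularization in the spirit of Yang et al. (2019):
# measure how far the outputs move relative to how far the latents move.
z1, z2 = rng.standard_normal(4), rng.standard_normal(4)
ratio = np.linalg.norm(G(z1) - G(z2)) / np.linalg.norm(z1 - z2)

# The regularizer adds -ratio (clamped here for stability) to the
# generator loss, penalizing mode collapse where G(z1) ≈ G(z2)
# for distinct latent codes.
loss_term = -min(float(ratio), 10.0)
```

A collapsed generator that ignores its input would give `ratio ≈ 0` and thus no reward, which is exactly what the penalty discourages.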
“…Based on SinGAN, ConSinGAN [15] proposes a technique to control the trade-off between fidelity and diversity of generated samples. One-Shot GAN [34] uses a dual-branch discriminator whose heads respectively identify the real content and the real layout of the generated sample. As one-shot image generation methods focus on exploiting a single image, they are not directly applicable to few-shot image generation tasks, where the generator must learn the underlying distribution of a collection of images.…”
Section: Related Work (mentioning)
confidence: 99%
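The dual-branch discriminator described in the quote above can be illustrated with a minimal numpy sketch. All shapes, weights, and pooling choices here are illustrative assumptions, not the paper's actual architecture: a content head pools away spatial structure, while a layout head keeps one score per spatial location.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature map from a shared discriminator backbone:
# shape (channels, height, width).
features = rng.standard_normal((64, 8, 8))

# Content branch: global average pooling discards spatial layout,
# so its score depends only on *what* appears in the image.
w_content = rng.standard_normal(64)
content_score = float(w_content @ features.mean(axis=(1, 2)))

# Layout branch: a 1x1-convolution-style projection keeps one score
# per spatial location, so it judges *where* things are arranged.
w_layout = rng.standard_normal(64)
layout_map = np.einsum("c,chw->hw", w_layout, features)  # shape (8, 8)

# Shuffling spatial positions changes the layout but leaves the
# pooled content score untouched:
perm = rng.permutation(64)
shuffled = features.reshape(64, -1)[:, perm].reshape(64, 8, 8)
content_after = float(w_content @ shuffled.mean(axis=(1, 2)))
assert np.isclose(content_after, content_score)
```

The shuffle check makes the division of labour concrete: the content head cannot see a scrambled layout, so a separate per-location head is needed to judge spatial arrangement.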
“…Synthetic data generation is also termed data oversampling. Using generative adversarial networks (GANs) as synthetic oversamplers has been a voguish research endeavour for low data regimes [3], [8]. Various researchers have demonstrated that GANs are more effective as compared to other synthetic oversamplers like SMOTE [2], [7], [9], [10].…”
Section: Introduction (mentioning)
confidence: 99%
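The contrast drawn above between GANs and interpolation-based oversamplers such as SMOTE can be made concrete. Below is a minimal SMOTE-style sketch (the toy data and the `smote_like` helper are hypothetical, for illustration only): each synthetic sample is a random interpolation between a minority point and its nearest neighbour, so new points never leave the span of the existing data, whereas a GAN learns a full generative model of the distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy minority class: a handful of 2-D points (illustrative data).
minority = rng.standard_normal((5, 2))

def smote_like(samples, n_new, rng):
    """SMOTE-style oversampling sketch: each synthetic point is a random
    interpolation between a minority sample and its nearest neighbour."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(samples))
        x = samples[i]
        # Nearest neighbour among the *other* minority samples.
        dists = np.linalg.norm(samples - x, axis=1)
        dists[i] = np.inf
        nn = samples[np.argmin(dists)]
        t = rng.random()  # interpolation factor in [0, 1)
        out.append(x + t * (nn - x))
    return np.array(out)

synthetic = smote_like(minority, n_new=10, rng=rng)
```

Because every synthetic point is a convex combination of two real points, it lies coordinate-wise within the bounding box of the minority class; this boundedness is the limitation the cited comparisons hold against SMOTE relative to GAN-based oversampling.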