2021
DOI: 10.48550/arxiv.2104.05833
Preprint

Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization

Abstract: Training deep networks with limited labeled data while achieving a strong generalization ability is key in the quest to reduce human annotation efforts. This is the goal of semi-supervised learning, which exploits more widely available unlabeled data to complement small labeled data sets. In this paper, we propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels. Concretely, we learn a generative adversarial network that captures the joint image-label di…

Cited by 4 publications (7 citation statements)
References: 80 publications
“…GANs and Latent Space Image Embedding. EditGAN builds on top of DatasetGAN [1] and SemanticGAN [2], which proposed to jointly model images and their semantic segmentations using shared latent codes. However, these works leveraged this model design only for semi-supervised learning, not for editing.…”
Section: Related Work
confidence: 99%
“…3 High-Precision Semantic Image Editing with EditGAN Deep generative models such as StyleGAN2, which are trained to synthesize highly realistic images, acquire a semantic understanding of the modeled images in their high-dimensional feature space. Recently, DatasetGAN [1] and SemanticGAN [2] built on this insight to learn a joint distribution p(x, y) over images x and pixel-wise semantic segmentation labels y, while requiring only a handful of labeled examples. EditGAN utilizes this joint distribution p(x, y) to perform high-precision semantic image editing of real and synthesized images.…”
Section: Related Work
confidence: 99%
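The joint distribution p(x, y) described above can be illustrated with a minimal sketch: a generator whose shared latent trunk feeds two output heads, so a single latent code z determines both the image x and its pixel-aligned segmentation y. This is a toy NumPy illustration of the shared-latent-code idea, not the authors' StyleGAN2-based implementation; all layer sizes and weights here are hypothetical and randomly initialized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
LATENT_DIM, HIDDEN_DIM, H, W, N_CLASSES = 16, 32, 8, 8, 3

# Shared trunk and two heads; random weights for illustration only.
W_trunk = rng.normal(0, 0.1, (LATENT_DIM, HIDDEN_DIM))
W_img = rng.normal(0, 0.1, (HIDDEN_DIM, H * W))              # image head
W_seg = rng.normal(0, 0.1, (HIDDEN_DIM, H * W * N_CLASSES))  # label head

def generate(z):
    """Map one latent code to an aligned (image, label map) pair."""
    h = np.tanh(z @ W_trunk)                   # shared latent features
    x = np.tanh(h @ W_img).reshape(H, W)       # synthesized image
    logits = (h @ W_seg).reshape(H, W, N_CLASSES)
    y = logits.argmax(axis=-1)                 # per-pixel class labels
    return x, y

z = rng.normal(size=LATENT_DIM)
x, y = generate(z)
print(x.shape, y.shape)  # (8, 8) (8, 8)
```

Because both heads read from the same shared features h, editing or re-embedding z changes the image and its segmentation coherently, which is the property EditGAN later exploits for editing.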