2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00579

Conditional Image-to-Image Translation

Abstract: Image-to-image translation tasks have been widely investigated with Generative Adversarial Networks (GANs) and dual learning. However, existing models cannot control the translated results in the target domain, and their outputs usually lack diversity in the sense that a fixed input image leads to an (almost) deterministic translation result. In this paper, we study a new problem, conditional image-to-image translation, which is to translate an image from the source domain to the target domain co…

Cited by 145 publications (106 citation statements) · References 21 publications
“…To generate controllable translation results, Lin et al. [7] decompose the image latent space into domain-independent and domain-specific feature spaces, and pose a new problem, conditional cross-domain translation, in which the domain-specific features of the generated result can be assigned by feeding in a conditional image from the target domain. Similar to [7], two other works [8], [9] proposed to disentangle the latent space and generate diverse translation results. Choi et al. [10] further proposed StarGAN, which can perform image-to-image translation for multiple domains using only a single model.…”
Section: Related Work
confidence: 99%
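The disentanglement described above can be illustrated with a minimal NumPy sketch. The encoder, decoder, and the fixed split of latent dimensions into content and style are hypothetical stand-ins for the learned networks in [7]; a real model would learn this decomposition with adversarial and dual-learning losses.

```python
import numpy as np

def encode(x, split=4):
    """Toy encoder: treat the first `split` latent dims as domain-independent
    (content) features and the rest as domain-specific (style) features."""
    return x[:split], x[split:]

def decode(content, style):
    """Toy decoder: recombine the two feature groups into one latent vector."""
    return np.concatenate([content, style])

def conditional_translate(x_source, x_condition):
    """Conditional translation: keep the source image's domain-independent
    features, and take the domain-specific features from the conditional
    image supplied in the target domain."""
    content_src, _ = encode(x_source)
    _, style_cond = encode(x_condition)
    return decode(content_src, style_cond)

x_a = np.arange(8.0)          # stand-in latent for a source-domain image
x_b = np.arange(8.0) + 100.0  # stand-in latent for a conditional target image
y = conditional_translate(x_a, x_b)  # x_a's content dims, x_b's style dims
```

Feeding different conditional images `x_b` changes only the style half of `y`, which is what gives the translation its controllability and diversity.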
“…Therefore, we can leverage such a pre-trained CNN to extract the domain-specific features of an image. Compared with works [7], [8], [9], which use two separate domain-specific feature extractors for two-domain translation, we use the domain classifier as a general domain-specific feature extractor that easily generalizes to multi-domain translation. With well-defined domain-specific features, the domain-independent features can be easily obtained by feature disentanglement.…”
Section: Introduction
confidence: 99%
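The idea of reusing a domain classifier as a shared domain-specific feature extractor can be sketched as follows. The weights, layer sizes, and activation are placeholder assumptions; in the cited approach the classifier would be pre-trained to distinguish domains, and its penultimate activations would then serve as the domain-specific code for any domain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained" domain classifier: hidden layer -> domain logits.
# Random weights stand in for the trained parameters.
W_hidden = rng.normal(size=(8, 4))
W_logits = rng.normal(size=(4, 2))  # 2 domains

def domain_specific_features(x):
    """Penultimate (hidden) activations of the domain classifier, used as
    a single domain-specific feature extractor shared across all domains."""
    return np.tanh(x @ W_hidden)

def classify_domain(x):
    """Full classifier: hidden features -> one logit per domain."""
    return domain_specific_features(x) @ W_logits

x = rng.normal(size=8)
feats = domain_specific_features(x)   # shape (4,): domain-specific code
logits = classify_domain(x)           # shape (2,): domain logits
```

Because one extractor serves every domain, adding a new domain only requires extending the classifier head rather than training a new per-domain extractor.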
“…Unpaired translations: various works extended this idea to the case where no explicit input-output image pairs are available (unpaired image translation), using cyclic consistency [31,72,79,41] or consistency between certain extracted features [63]. To avoid accidental artifacts and improve learning, Mejjati et al. [48] integrate an attention mechanism that helps translations focus on semantically meaningful regions.…”
Section: Paired Translations
confidence: 99%
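The cyclic consistency mentioned above can be shown with a minimal sketch. The two generators here are toy linear maps (an assumption for illustration; in practice they are learned networks), chosen so that translating A→B→A reproduces the input, which is exactly what the L1 cycle-consistency loss penalizes deviations from.

```python
import numpy as np

# Toy generators between domains A and B: simple linear maps, with
# G_BA constructed as the exact inverse of G_AB.
M = np.array([[2.0, 0.0],
              [0.0, 0.5]])

def G_AB(x):
    """Translate a domain-A latent to domain B."""
    return x @ M

def G_BA(y):
    """Translate a domain-B latent back to domain A."""
    return y @ np.linalg.inv(M)

def cycle_loss(x):
    """L1 cycle-consistency loss: A -> B -> A should reproduce x."""
    return np.abs(G_BA(G_AB(x)) - x).mean()

x = np.array([1.0, -3.0])
loss = cycle_loss(x)  # near zero, since G_BA inverts G_AB here
```

In training, this loss is minimized alongside the adversarial losses of both generators, which is what lets unpaired data substitute for explicit input-output pairs.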