2017
DOI: 10.48550/arxiv.1711.05139
Preprint
XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings

Cited by 25 publications (48 citation statements)
References 0 publications
“…Image-to-image translation (I2I) [30,34,43,49,51,53,57,59,60] aims to transfer images from source to target domain with the content information preserved. Earlier methods [51,23,5,41] apply an adversarial loss [14], along with a reconstruction loss to train their model based on the paired training data.…”
Section: Image-to-Image Translation (mentioning; confidence: 99%)
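The quoted statement describes the earlier paired-training recipe: an adversarial loss combined with a reconstruction loss. A minimal numpy sketch of that combined objective, with a hypothetical generator output, paired target, and discriminator scores standing in for a real model:

```python
import numpy as np

# Toy sketch of a paired I2I objective: an adversarial term plus an L1
# reconstruction term. All tensors below are random stand-ins for the
# outputs of a hypothetical generator and discriminator.

def l1_reconstruction(fake, target):
    """Mean absolute error between translated images and paired ground truth."""
    return np.mean(np.abs(fake - target))

def adversarial_loss(d_scores_on_fake):
    """Non-saturating generator loss -log D(G(x)); D outputs in (0, 1)."""
    return -np.mean(np.log(d_scores_on_fake + 1e-8))

rng = np.random.default_rng(0)
fake = rng.uniform(0, 1, size=(4, 8, 8, 3))    # generator outputs (toy)
target = rng.uniform(0, 1, size=(4, 8, 8, 3))  # paired ground-truth images
d_scores = rng.uniform(0.1, 0.9, size=(4,))    # discriminator scores (toy)

lam = 100.0  # weight balancing reconstruction against the adversarial term
total = adversarial_loss(d_scores) + lam * l1_reconstruction(fake, target)
print(total)
```

The reconstruction weight `lam` is illustrative; paired methods typically weight the reconstruction term heavily so the adversarial term only sharpens outputs.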
“…DiscoGAN [22] explores a few forms of cycle-consistency loss. XGAN [23] applies semantic consistency to embedded features. While CycleGAN achieves approximately deterministic mappings between domains, Augmented CycleGAN [24] extends the idea of CycleGAN by introducing stochastic many-to-many mappings.…”
Section: B. Two-Way Adversarial Network (mentioning; confidence: 99%)
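The cycle-consistency idea referenced for CycleGAN and DiscoGAN requires that translating X to Y and back recovers the input. A toy numpy sketch, using a hypothetical affine translator and its exact inverse so the cycle loss vanishes (real models only approximate this):

```python
import numpy as np

def G(x):
    """Toy translator X -> Y (an affine map, standing in for a generator)."""
    return 2.0 * x + 1.0

def F(y):
    """Toy translator Y -> X, the exact inverse of G."""
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x, y):
    """L1 penalty on both round trips: F(G(x)) ~ x and G(F(y)) ~ y."""
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

x = np.linspace(0.0, 1.0, 16)  # samples from domain X (toy)
y = np.linspace(1.0, 3.0, 16)  # samples from domain Y (toy)
loss = cycle_consistency_loss(x, y)
print(loss)
```

Because `F` exactly inverts `G` here, the loss is zero up to floating-point error; in training, this penalty is what pushes the two learned mappings toward being mutual inverses.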
“…This approach rests on the strong assumption that different domains share the same low-dimensional representation in the network. XGAN [30] shares a similar structure with UNIT [21] but introduces a semantic consistency component at the feature level, in contrast to previous work that used pixel-level consistency. Using a single auto-encoder to learn a common representation of different domains, DTN [33,36] transforms images across domains.…”
Section: Image-to-Image Translation (mentioning; confidence: 99%)
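The feature-level semantic consistency described above asks that a shared encoder produce the same embedding for an image before and after translation to the other domain. A minimal numpy sketch with hypothetical linear encoder and decoder weights standing in for the networks:

```python
import numpy as np

# Sketch of feature-level semantic consistency: the shared encoder's
# embedding of an image should be preserved after translating it into the
# other domain. Encoder and decoder here are hypothetical linear toys.

rng = np.random.default_rng(1)
W_enc = rng.standard_normal((16, 4))    # shared encoder weights (toy)
W_dec_B = rng.standard_normal((4, 16))  # decoder into domain B (toy)

def encode(x):
    """Shared encoder mapping images of either domain to a common embedding."""
    return x @ W_enc

def translate_A_to_B(x_a):
    """Encode a domain-A image and decode it into domain B."""
    return encode(x_a) @ W_dec_B

def semantic_consistency_loss(x_a):
    """MSE between embeddings before and after the A -> B translation."""
    z_before = encode(x_a)
    z_after = encode(translate_A_to_B(x_a))
    return np.mean((z_before - z_after) ** 2)

x_a = rng.standard_normal((8, 16))  # a batch of domain-A images (toy)
print(semantic_consistency_loss(x_a))
```

Unlike a pixel-level cycle loss, this penalty compares embeddings, so the translation is free to change low-level appearance as long as the encoded semantics survive the round trip through the other domain.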