2017 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2017.310

DualGAN: Unsupervised Dual Learning for Image-to-Image Translation

Abstract: Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently [7,8,21,12,4,18]. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation [23], we develop a novel dual-GAN mechanism, which enables image translators …
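For a concrete picture of the dual mechanism sketched in the abstract, below is a minimal generator-side objective with two translators and two critics, assuming PyTorch. The one-layer placeholder networks, the WGAN-style critic scores, and the weighting constant are illustrative assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Placeholder one-layer "networks"; the paper's generators and critics are far
# deeper. These stubs only make the sketch runnable.
G_AB = nn.Conv2d(3, 3, 3, padding=1)   # translates images from domain A to domain B
G_BA = nn.Conv2d(3, 3, 3, padding=1)   # translates images from domain B to domain A
D_A = nn.Conv2d(3, 1, 3, padding=1)    # critic scoring how "A-like" an image is
D_B = nn.Conv2d(3, 1, 3, padding=1)    # critic scoring how "B-like" an image is

l1 = nn.L1Loss()

def translator_loss(real_a, real_b, lambda_rec=10.0):
    """Dual objective sketch: each translator must fool the opposite critic, and
    the round trip A -> B -> A (and B -> A -> B) must reconstruct the input."""
    fake_b = G_AB(real_a)          # A -> B
    fake_a = G_BA(real_b)          # B -> A
    rec_a = G_BA(fake_b)           # closed loop back to domain A
    rec_b = G_AB(fake_a)           # closed loop back to domain B

    loss_rec = l1(rec_a, real_a) + l1(rec_b, real_b)      # reconstruction error
    loss_adv = -D_B(fake_b).mean() - D_A(fake_a).mean()   # critic scores (WGAN-style, assumed)
    return loss_adv + lambda_rec * loss_rec

# Example call on random tensors standing in for two unpaired image batches.
loss = translator_loss(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
```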

Cited by 1,954 publications (1,412 citation statements)
References 23 publications
“…The MMD term in our formulation only ensures that the two distributions agree globally in the latent space, whereas both JLMA and GUMA have a term that ensures, for each instance, that the local geometry is preserved between domains. This is the difference between manifold superimposing [18,17,8] and manifold alignment discussed in the MAGAN paper [1]. Furthermore, GUMA's assumption that individual cells can be matched 1-to-1 between the two input domains is not generally true, most obviously when n_1 ≠ n_2.…”
Section: Comparison Of These Three Algorithms With Our Algorithm
mentioning
confidence: 99%
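The excerpt above contrasts a global MMD term with per-instance local-geometry terms. As background, here is a minimal sketch of a kernel MMD estimate between two batches of latent embeddings, assuming PyTorch; the Gaussian bandwidth and function names are illustrative and are not taken from the cited formulation.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    """Gram matrix of a Gaussian RBF kernel between rows of x (n, d) and y (m, d)."""
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD: small only when the two sample sets agree
    as distributions, with no per-point correspondence required."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean())

# Example: embeddings from two domains mapped into a shared latent space.
z1, z2 = torch.randn(128, 16), torch.randn(200, 16)
print(mmd2(z1, z2))
```

Note that the two batches need not be the same size, which connects to the excerpt's point that a 1-to-1 matching assumption breaks down when the domains contain different numbers of instances.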
“…However, this method relied on paired data for supervised learning. To avoid this prerequisite, CycleGAN, DiscoGAN, and DualGAN were designed following cycle consistency to enable training with unpaired data. This series of methods has proven effective in various tasks, such as collection style transfer, object transfiguration, season transfer, and generating photographs from sketches.…”
Section: Related Work
mentioning
confidence: 99%
“…In particular, without paired training samples, the original GANs cannot guarantee that the output imitations contain the same semantic information as that of the input images. Cycle-GAN [14], DiscoGAN [15], and DualGAN [16] proposed cycle-consistent adversarial networks to address the unpaired image-to-image translation problem. They simultaneously trained two pairs of generative networks and discriminative networks, one to produce imitative paintings and the other to transform the imitation back to the original photograph and pursue cycle consistency.…”
mentioning
confidence: 99%
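To complement the generator-side loss sketched near the abstract, the following is a hedged sketch of the discriminator-side update implied by the excerpt above (PyTorch assumed; the binary cross-entropy objective and the photo/painting naming are illustrative assumptions, not taken from the cited implementations).

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def discriminator_loss(D_paint, D_photo, real_paint, real_photo, fake_paint, fake_photo):
    """Each discriminator is trained to score real samples from its own domain as 1
    and translations produced by the opposite generator as 0. Fakes are detached so
    this loss only updates the discriminators."""
    fake_paint = fake_paint.detach()
    fake_photo = fake_photo.detach()

    def real_fake_loss(D, real, fake):
        real_logits, fake_logits = D(real), D(fake)
        return (bce(real_logits, torch.ones_like(real_logits))
                + bce(fake_logits, torch.zeros_like(fake_logits)))

    return real_fake_loss(D_paint, real_paint, fake_paint) + \
           real_fake_loss(D_photo, real_photo, fake_photo)

# Example with placeholder one-layer discriminators and random image batches.
D_paint = nn.Conv2d(3, 1, 3, padding=1)
D_photo = nn.Conv2d(3, 1, 3, padding=1)
imgs = [torch.randn(2, 3, 32, 32) for _ in range(4)]
print(discriminator_loss(D_paint, D_photo, *imgs))
```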
“…Considering the wide application of style transfer on mobile devices, space-saving is an important algorithm design consideration. The methods of CycleGAN [14], DiscoGAN [15], and DualGAN [16] could only transfer one style per network. In this work, we propose a gated transformer module to achieve multi-collection style transfer in a single network.…”
mentioning
confidence: 99%