2022
DOI: 10.3390/s22218540
Unsupervised Image-to-Image Translation: A Review

Abstract: Supervised image-to-image translation has been proven to generate realistic images with sharp details and to have good quantitative performance. Such methods are trained on a paired dataset, where an image from the source domain already has a corresponding translated image in the target domain. However, this paired dataset requirement imposes a huge practical constraint, requires domain knowledge or is even impossible to obtain in certain cases. Due to these problems, unsupervised image-to-image translation ha…

Cited by 23 publications (5 citation statements)
References 108 publications
“…However, our experience indicates that training CycleGAN is not always easy to converge, especially when the image resolution is coarse with a complex background. Nevertheless, it is possible to replace CycleGAN with other SOTA domain adaptation algorithms [ 48 ].…”
Section: Discussion
confidence: 99%
“…Network Architecture: For the asymmetric image transformation, the concept of cycle-consistent loss takes precedence (e.g., CycleGAN [25], UNIT, MUNIT, and MuGAN [48][49][50]). These approaches involved learning bidirectional mapping between the input and target domains.…”
Section: Methods
confidence: 99%
“…CycleGAN introduces adversarial and cycle consistency losses to maintain image set characteristics across domains as visualized in Figure 2. Unlike Pix2Pix, it uses an autoencoder structure, lacks skip connections, and doesn't employ a conditional GAN [6], [9], [12].…”
Section: Relevant Work
confidence: 99%
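The cycle consistency idea referenced in the statements above can be sketched in a few lines. This is an illustrative toy, not the CycleGAN authors' implementation: the two generators G: X → Y and F: Y → X are hypothetical stand-in functions, and the loss is the L1 reconstruction term from the original CycleGAN formulation, L_cyc = E‖F(G(x)) − x‖₁ + E‖G(F(y)) − y‖₁.

```python
import numpy as np

def G(x):
    # Stand-in forward generator X -> Y (a real model would be a CNN).
    return x * 2.0

def F(y):
    # Stand-in backward generator Y -> X, the approximate inverse of G.
    return y / 2.0

def cycle_consistency_loss(x, y):
    """L1 cycle loss: penalizes failure to reconstruct the input
    after a round trip through both generators."""
    forward = np.abs(F(G(x)) - x).mean()   # X -> Y -> X reconstruction
    backward = np.abs(G(F(y)) - y).mean()  # Y -> X -> Y reconstruction
    return forward + backward

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
print(cycle_consistency_loss(x, y))  # 0.0, since F exactly inverts G here
```

In training, this term is added to the adversarial losses of both discriminators; because the stand-in generators here are exact inverses, the loss is zero, whereas real generators are only pushed toward invertibility by minimizing it.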
“…These GANs can be used for various goals. These goals entail domain adaptation, which transforms styles from an image to a different domain style, or style transfer, which keeps most of the input content while changing its style aspects [5], [8], [9]. However, current I2I GAN models still pose problems of mode collapse, instability, and lack of diversity [1].…”
Section: Introduction
confidence: 99%