2020
DOI: 10.1007/978-3-030-58574-7_22
GANHopper: Multi-hop GAN for Unsupervised Image-to-Image Translation

Cited by 30 publications (10 citation statements); references 15 publications.
“…Neural approach (joint generation w/ refinement): On the iterative sample refinement, GANHopper is the closest to ours in spirit [19]. The key difference is that our system produces a structured model (as opposed to a raster image) and the training process is non-sequential.…”
Section: Related Work
confidence: 99%
“…The main idea is that a source image is first translated into the target domain and then translated back to the source domain, and the distance between the source and reconstructed images should be consistent at the pixel level. This has become a popular strategy for solving the unpaired image translation problem [4,5,6,7,8,9]. However, cycle consistency maintains image content only globally, causing poor performance on content-rich images.…”
Section: Image Translation Based On Cycle Consistency
confidence: 99%
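The cycle-consistency idea described above can be sketched as a pixel-level L1 reconstruction loss. This is a minimal illustration, not the cited papers' implementation: `G` and `F` below are hypothetical stand-ins for trained source-to-target and target-to-source generators.

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """Pixel-level L1 cycle-consistency loss: mean |F(G(x)) - x|.

    G translates source -> target; F translates target -> source.
    A small loss means the round trip preserves the image content.
    """
    reconstructed = F(G(x))
    return np.abs(reconstructed - x).mean()

# Toy "translators" (hypothetical stand-ins for trained networks):
# G brightens the image, F darkens it back.
G = lambda img: np.clip(img + 0.1, 0.0, 1.0)
F = lambda img: np.clip(img - 0.1, 0.0, 1.0)

x = np.full((4, 4), 0.5)  # a flat gray toy "image" in [0, 1]
loss = cycle_consistency_loss(x, G, F)
print(loss)  # 0.0 here, since F exactly inverts G on this input
```

In actual unpaired-translation training this term is added to the adversarial losses of both generators, encouraging each translation to stay invertible rather than collapsing to an arbitrary target-domain image.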
“…Recent approaches [4,5,6,7,8,9,10] exploit cycle consistency [11] to loosen the requirement of paired training data, thus achieving better results. They assume that the translated image can be translated back to the source domain, and that the reconstructed image should retain the same content as the original image.…”
Section: Introduction
confidence: 99%
“…Alternatively, spatial attention was exploited to better drive the adversarial training on unrealistic regions [43]. Some methods focus instead on generating intermediate representations between source and target [22,44] or continuous translations [61,48]. In the recent [20], the authors exploit similarity with retrieved images to increase translation quality.…”
Section: Image-to-Image Translation
confidence: hi