2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00552
Nested Scale-Editing for Conditional Image Synthesis

Cited by 6 publications (3 citation statements)
References 37 publications
“…The extraction of semantic information also becomes richer. Zhang et al. [38] achieve semantic persistence across scales by sharing common latent codes, enforce scale independence through nested scale-disentanglement losses, and create scale-specific diversity by incorporating progressive diversity constraints.…”
Section: Vanilla Models
confidence: 99%
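The shared-code/diversity idea in the statement above can be sketched minimally. This is a toy numpy illustration under stated assumptions: the linear "generator" `W`, the function `diversity_loss`, and all names are hypothetical stand-ins, and the loss shown is a generic mode-seeking-style diversity term, not the paper's actual formulation.

```python
import numpy as np

def diversity_loss(out1, out2, z1, z2, eps=1e-8):
    """Mode-seeking-style diversity term: the loss decreases (grows more
    negative) as two outputs move apart relative to the distance between
    the scale-specific codes that produced them."""
    num = np.abs(out1 - out2).mean()
    den = np.abs(z1 - z2).mean() + eps
    return -num / den

# A shared latent code gives semantic persistence across scales;
# per-scale codes are the handle for scale-specific diversity.
z_shared = np.full(4, 0.5)                       # common to all scales
z_scale_a, z_scale_b = np.zeros(4), np.ones(4)   # scale-specific codes
code_a = np.concatenate([z_shared, z_scale_a])
code_b = np.concatenate([z_shared, z_scale_b])

# Toy linear "generator" with fixed weights (stand-in for a real network).
W = np.linspace(-1.0, 1.0, 8 * 16).reshape(8, 16)
out_a, out_b = code_a @ W, code_b @ W

loss = diversity_loss(out_a, out_b, z_scale_a, z_scale_b)  # negative when outputs differ
```

Minimizing such a term pushes the generator to map different scale-specific codes to visibly different outputs, while the shared code keeps the semantics aligned.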
“…Much of the I2I research focuses on filling in missing pixels, i.e., image inpainting [14], [15], [16], [17], [18] and image outpainting [189], but the two tasks address differently occluded images. Taking a human face image as an example, image inpainting produces visually realistic and semantically correct results from an input whose nose, mouth, and eyes are masked, whereas image outpainting translates a highly occluded face image that retains only the nose, mouth, and eyes.…”
Section: Application
confidence: 99%
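The inpainting/outpainting distinction drawn above can be sketched with complementary masks. This is a toy numpy illustration only; the array names and mask geometry are assumptions, not any model's actual preprocessing.

```python
import numpy as np

h = w = 8
face = np.arange(h * w, dtype=float).reshape(h, w)  # stand-in for a face image

# Inpainting: an interior region (nose, mouth, eyes) is masked out,
# so the model fills the center from the known border.
hole = np.zeros((h, w), dtype=bool)
hole[2:6, 2:6] = True

inpaint_input = np.where(hole, np.nan, face)    # border known, center missing
# Outpainting: only that interior region is observed,
# so the model hallucinates everything around it.
outpaint_input = np.where(~hole, np.nan, face)  # center known, border missing
```

The two inputs are exact complements: every pixel is known in exactly one of them, which is why the same architecture can behave very differently on the two tasks.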
“…Generating high-resolution images that are semantically consistent with various input types is a frontier yet challenging task. It has tremendous practical applications, such as intelligent image editing [38], game generation [12], and face representation interpretation [29]. Recently, thanks to generative adversarial networks (GANs), which have driven remarkable advances in image synthesis, GAN-based image generation methods [11,24,2] have greatly accelerated research progress in multi-modal feature learning and visual distribution modeling.…”
confidence: 99%