2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01911

SpaceEdit: Learning a Unified Editing Space for Open-Domain Image Color Editing

Cited by 15 publications (5 citation statements) · References 24 publications

“…where 1 represents the all-ones matrix with the same shape as M, and the mask update strategy is shown in Equation (3)…”
Section: Related Work (mentioning)
Confidence: 99%
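
A minimal sketch of what the quoted mask-update machinery typically looks like. Equation (3) itself is not reproduced in the snippet, so the partial-convolution-style rule below is an assumption rather than the citing paper's exact formula; the all-ones matrix 1 commonly appears in expressions such as 1 - M, which selects the hole pixels.

```python
import numpy as np
from scipy.signal import convolve2d

def update_mask(M, kernel_size=3):
    """Partial-convolution-style mask update (an assumed stand-in for the
    citing paper's Equation (3), which the snippet does not reproduce).

    M is a binary mask of the same shape as the feature map: 1 marks a
    valid pixel, 0 a hole. A pixel becomes valid after the update if any
    valid pixel fell inside its kernel window, so holes shrink layer by
    layer. The hole region itself can be selected as (1 - M), where 1 is
    the all-ones matrix of the same shape as M.
    """
    window = np.ones((kernel_size, kernel_size))
    coverage = convolve2d(M, window, mode="same")  # valid-pixel count per window
    return (coverage > 0).astype(M.dtype)

# Example: a central 3x3 hole in a 5x5 mask shrinks to its single center
# pixel after one update.
M = np.ones((5, 5))
M[1:4, 1:4] = 0
print(update_mask(M))
```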

“…The goal of image inpainting is to make the repaired result closer to the ground truth in terms of both evaluation metrics and visual quality. Image inpainting networks are widely used in many fields, such as image editing, unwanted object removal, image super-resolution, and image zooming, etc. [1–6]…”
Section: Introduction (mentioning)
Confidence: 99%
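
This statement compares repaired results to the ground truth via evaluation metrics. As an illustration, here is a sketch of PSNR, one standard inpainting metric; the citing paper's exact metric set is not specified in the snippet.

```python
import numpy as np

def psnr(repaired, ground_truth, max_val=255.0):
    """Peak signal-to-noise ratio between a repaired image and its ground
    truth (higher means closer). A standard inpainting metric, used here
    purely to illustrate the metric-based comparison the statement cites."""
    diff = repaired.astype(np.float64) - ground_truth.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```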

“…Pre-information GAN models: Shi et al. [56] proposed a unified model for open-domain image editing. By extracting and adjusting the color information in the image, the edit leaves the original structural information of the image intact…”
Section: Image Fusion GAN Models (mentioning)
Confidence: 99%
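
As a toy illustration of the principle this statement describes (adjusting color while preserving structure), the sketch below shifts only the chromatic channels in CIELAB space. It is not SpaceEdit's actual model, whose learned editing space the snippet does not detail.

```python
import numpy as np
from skimage import color

def shift_color_keep_structure(rgb, da=0.0, db=0.0):
    """Toy color edit, not SpaceEdit's architecture: in CIELAB, the L
    channel carries the luminance/structure and a/b carry the color, so
    shifting a/b recolors the image without touching edges or layout."""
    lab = color.rgb2lab(rgb)  # rgb expected as floats in [0, 1]
    lab[..., 1] += da         # a: green-red axis
    lab[..., 2] += db         # b: blue-yellow axis
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)
```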

“…Although photorealistic images were obtained, fine-grained problems (artifacts and substandard textures) became apparent at high resolution. Building on StyleGAN, different mapping methods [28,29,30,31] all yield high-fidelity face images rich in detail and structural integrity; [32,33,34,35] try to make GAN editing controllable by dissecting the latent variables…”
Section: Conditional Information (mentioning)
Confidence: 99%
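
The last statement refers to methods that make GAN editing controllable by dissecting latent variables. A common scheme in that line of work moves a latent code along a learned semantic direction; the sketch below shows the idea with placeholder inputs, not any specific paper's API.

```python
import numpy as np

def edit_latent(w, direction, alpha):
    """Latent-direction editing sketch: stepping a latent code `w` along a
    (learned) semantic `direction` changes one attribute of the generated
    image, with `alpha` controlling the edit strength. Both inputs are
    placeholders for whatever a given method learns or provides."""
    direction = direction / np.linalg.norm(direction)  # normalize to unit length
    return w + alpha * direction
```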