2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00297

L2M-GAN: Learning to Manipulate Latent Space Semantics for Facial Attribute Editing

Cited by 53 publications (23 citation statements) · References 29 publications
“…In contrast, the second stream adopts an encoder for efficient image projection [9,11,[30][31][32][33][34][35][36][37][38][39], but due to the low-dimensional bottleneck layer (i.e., the innermost feature map with minimal spatial size), the encoder-decoder structure often faces the problem of inaccurate reconstruction [9,35,36]. Recent works often treat attribute manipulation as an image-to-image translation task [15,30,33,[35][36][37][38][39] and learn to synthesize an output image directly according to user input. Despite promising results, most image-to-image translations [15,[35][36][37][38][39][40] can only edit the predefined attributes before training, limiting their flexibility in the inference stage.…”
Section: Face Attribute Manipulation (citation type: mentioning)
confidence: 99%
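The reconstruction problem the excerpt attributes to the low-dimensional bottleneck can be illustrated with a toy sketch. This is an assumed, purely illustrative example (not code from the paper or the citing works): a 1-D "encoder" that averages pixel windows and a "decoder" that repeats bottleneck values, showing that detail lost in the bottleneck cannot be recovered.

```python
# Toy sketch (illustrative only, not from the paper): an encoder-decoder
# whose low-dimensional bottleneck discards detail, which is why
# encoder-based image projection can reconstruct inaccurately.

def encode(pixels, factor=4):
    """Downsample by averaging non-overlapping windows -- the bottleneck."""
    return [sum(pixels[i:i + factor]) / factor
            for i in range(0, len(pixels), factor)]

def decode(code, factor=4):
    """Upsample by repeating each bottleneck value -- a crude decoder."""
    return [v for v in code for _ in range(factor)]

# A detailed 1-D stand-in for an image row; its fine structure cannot
# survive the 4x bottleneck, so reconstruction error is unavoidable.
signal = [float(i % 5) for i in range(32)]
recon = decode(encode(signal))

mse = sum((a - b) ** 2 for a, b in zip(signal, recon)) / len(signal)
```

The same argument carries over to convolutional encoders: once the innermost feature map is much smaller than the input, distinct inputs map to the same code, so some reconstructions must be inexact.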