2021 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip42928.2021.9506060

Learning Non-Linear Disentangled Editing for StyleGAN

Abstract: [Figure: sequential disentangled attribute manipulation; panels labeled 'Eyeglasses', 'Gray Hair', 'Age', 'Hairline', 'Original', 'Slender', 'Smiling', 'Wavy Hair', 'Makeup'.] We show in this example how to achieve realistic, controllable, disentangled face editing. From the original image (center), we propose two opposite editing directions where only one attribute is manipulated at a time. To the right: 'slender', 'smiling', 'wavy hair' and 'makeup'; to the left: 'receding hairline', 'age', 'gray hair' and 'eyeglasses'. All results are obtained at resolutio…

Cited by 4 publications (1 citation statement). References 18 publications.
“…Because a distortion-perception trade-off exists when inverting an image to w+ latent space [TAN*21], a method that avoids this trade-off can reduce the dependency on the quality of inversion. For example, estimating the deep features directly during inversion, as proposed in the Feature-Style Encoder [YNGH22], and fusing them with the generated features could be a possible solution. We leave this as an intriguing direction for future research.…”
Section: Limitations and Future Work
Citation type: mentioning (confidence: 99%)
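
The fusion idea raised in that statement can be made concrete. Below is a minimal PyTorch sketch, under stated assumptions: the `encoder`, `generator`, and its `features_up_to`/`synthesize_from` hooks are hypothetical placeholders, not the actual interface of the Feature-Style Encoder [YNGH22] or of the cited paper.

```python
import torch
import torch.nn as nn

# Hedged sketch of the quoted future-work idea: an encoder predicts deep
# features directly during inversion, and these are fused with the
# generator's own intermediate features, reducing reliance on a perfect
# w+ inversion. All module and method names below are hypothetical.

class FeatureFusionInversion(nn.Module):
    def __init__(self, encoder: nn.Module, generator: nn.Module,
                 feat_channels: int = 512, fuse_layer: int = 5):
        super().__init__()
        self.encoder = encoder        # image -> (w+ latents, feature map)
        self.generator = generator    # StyleGAN-like synthesis network
        self.fuse_layer = fuse_layer
        # 1x1 conv blends the encoder features with the generated features
        self.fuse = nn.Conv2d(2 * feat_channels, feat_channels, kernel_size=1)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        w_plus, enc_feat = self.encoder(image)
        # run synthesis up to the fusion layer to obtain generated features
        gen_feat = self.generator.features_up_to(w_plus, self.fuse_layer)
        fused = self.fuse(torch.cat([gen_feat, enc_feat], dim=1))
        # resume synthesis from the fused features for the remaining layers
        return self.generator.synthesize_from(fused, w_plus, self.fuse_layer)
```

Because the encoder's features carry image details that w+ alone cannot encode, fusing them at an intermediate layer is one way to sidestep the distortion-perception trade-off the quote describes.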