2020
DOI: 10.1145/3414685.3417803
PIE: Portrait Image Embedding for Semantic Control

Abstract: We present the first approach for embedding real portrait images in the latent space of StyleGAN, which allows for intuitive editing of the head pose, facial expression, and scene illumination in the image. Semantic editing in parameter space is achieved based on StyleRig, a pretrained neural network that maps the control space of a 3D morphable face model to the latent space of the GAN. We design a novel hierarchical non-linear optimization problem to obtain the embedding. An identity preservation energy term allows spatially coherent edits while maintaining…
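The embedding described in the abstract amounts to optimizing a latent code so that the generator reproduces the input image while an extra energy term keeps the identity intact. The following is only a toy sketch of that idea under loud assumptions: the matrices `G` and `F` are random linear stand-ins for StyleGAN and a face-identity feature network, and the energy is a simplified two-term objective, not the paper's actual hierarchical formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed linear stand-ins: G plays the generator (latent -> image),
# F plays an identity-feature extractor (image -> features).
G = rng.normal(size=(64, 8))   # 8-D latent -> 64-D "image"
F = rng.normal(size=(16, 64))  # "image" -> 16-D identity features

target = rng.normal(size=64)   # portrait to embed
id_feats = F @ target          # identity features to preserve

def energy(w, lam=0.1):
    """Reconstruction term plus an identity-preservation term."""
    img = G @ w
    return np.sum((img - target) ** 2) + lam * np.sum((F @ img - id_feats) ** 2)

def grad(w, lam=0.1):
    img = G @ w
    return 2 * G.T @ (img - target) + 2 * lam * G.T @ (F.T @ (F @ img - id_feats))

# Plain gradient descent on the latent code (the paper instead solves a
# hierarchical non-linear problem; this only illustrates the energy structure).
w = np.zeros(8)
for _ in range(500):
    w -= 5e-4 * grad(w)
```

After optimization, `energy(w)` is far below the starting value, mirroring how the real method drives both reconstruction and identity terms down jointly.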

Cited by 123 publications (7 citation statements) · References 40 publications
“…Inversion of 2D GANs. For StyleGAN, an important observation was made by the authors of [1] that operating in the extended W+ space is significantly more expressive than operating in the restrictive W generator input space. This idea has been strengthened and better adapted for face editing with the appearance of pSp [45] and e4e [53], as well as their cascaded variant ReStyle [4] and other works [2,62,52].…”
Section: Related Work
confidence: 99%
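The W vs. W+ distinction quoted above is structural: a W code is a single style vector shared by every generator layer, while a W+ code assigns each layer its own vector, so every W code is also a W+ code but not vice versa. A minimal sketch, assuming the common StyleGAN2 configuration of 18 style layers of dimension 512 (the shapes here are assumptions, not taken from the paper):

```python
import numpy as np

num_layers, dim = 18, 512  # assumed StyleGAN2 layout at 1024x1024

w = np.random.default_rng(1).normal(size=dim)

# W space: one latent vector broadcast to all style layers.
w_code = np.tile(w, (num_layers, 1))                  # shape (18, 512)

# W+ space: an independent latent per layer -- strictly more expressive,
# since the W codes are exactly the W+ codes with identical rows.
w_plus = np.random.default_rng(2).normal(size=(num_layers, dim))
```

Inversion in W+ therefore searches an 18x larger space, which is why it reconstructs real faces more faithfully, at some cost to editability, as e4e [53] later analyzed.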
“…applying 2D GAN inversion techniques. An existing branch of research studies 2D GAN inversion in high detail [1,45,4,2,62,52]; nevertheless, the problem remains underexplored in 3D.…”
Section: Introduction
confidence: 99%
“…Controllable Face Image Synthesis. Considerable work [9,10,14,27,41,52,53] has been devoted to incorporating 3D priors from statistical face models, such as 3D Morphable Models (3DMMs) [6,40], into controllable face synthesis and animation. Among them, DiscoFaceGAN [10] proposed imitative-contrastive learning to mimic the 3DMM rendering process within the generative model.…”
Section: Related Work
confidence: 99%
“…Lighting and shadow manipulation methods [ZHSJ19,SKCJ18,SBT*19, ZBT*20,HZS*21,RTD*21,PEL*21,WYL*20,RGB*20] that adjust skin color can be further used to generate a stable albedo texture map. Facial attribute editing methods [TEB*20,TER*20,GGU*20, YFD*21, LZG*21] are comprehensive tools that can change pose and lighting. In addition, these methods can restore the input facial expression to a neutral facial expression state.…”
Section: Related Work
confidence: 99%