2022
DOI: 10.48550/arxiv.2201.04873
Preprint

VoLux-GAN: A Generative Model for 3D Face Synthesis with HDRI Relighting

Abstract (Figure 1 caption): We propose VoLux-GAN, a 3D-aware generator that produces faces with full HDRI relighting capability. Here we show a comparison of images generated by VoLux-GAN and related work: pi-GAN [11] (which does not support relighting) and ShadeGAN [43].

Year Published: 2022
Cited by 3 publications (3 citation statements) | References 46 publications

“…However, their estimated geometry is still too smooth and lacks the precision needed for other stylization tasks such as relighting. Most recently, [Tan et al. 2022] imposes additional lighting constraints to achieve relighting and perspective changes simultaneously. Yet the final results still fall short of photographic quality due to the volumetric neural rendering, which also causes cross-view flickering.…”
Section: Related Work (mentioning)
confidence: 99%
“…Other Face Relighting Methods mostly use carefully collected supervisory data from light stages [41,57,28,36,30]. ShadeGAN [29] and VoLux-GAN [42] use a volumetric rendering approach to learn the underlying 3D structure of the face and the illumination encoding. VoLux-GAN also requires an image decomposition obtained from [30], which is trained using carefully curated light-stage data.…”
Section: Related Work (mentioning)
confidence: 99%
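
The volumetric relighting idea mentioned in the excerpt above can be pictured with a minimal sketch (purely illustrative, not code from ShadeGAN or VoLux-GAN): each 3D sample along a camera ray predicts a density plus an albedo and a shading term, and standard volume rendering composites them into a relit pixel. The function name, the Lambertian shading proxy, and the array shapes below are all assumptions for illustration.

```python
import numpy as np

def volume_render_relit(sigmas, albedos, normals, deltas, light_dir, ambient=0.1):
    """Composite the samples of one camera ray into a relit RGB value.

    sigmas:  (N,)   volume densities per sample
    albedos: (N, 3) per-sample albedo colors in [0, 1]
    normals: (N, 3) per-sample unit surface normals
    deltas:  (N,)   distances between consecutive samples
    light_dir: (3,) unit vector toward the light; a simple Lambertian proxy
                    standing in for a learned illumination encoding
    """
    # Standard NeRF-style alpha compositing weights along the ray.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    weights = alphas * transmittance

    # Toy diffuse shading per sample (an assumption, not the papers' model).
    diffuse = np.clip(normals @ light_dir, 0.0, None)[:, None]
    shaded = albedos * (ambient + (1.0 - ambient) * diffuse)

    # Weighted sum over samples gives the final pixel color.
    return (weights[:, None] * shaded).sum(axis=0)
```

Changing `light_dir` (or, in the actual methods, an HDRI-derived illumination code) while keeping the densities and albedos fixed is what allows relighting without re-learning the geometry.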
“…People address these scaling issues of NeRF-based GANs in different ways, but the dominant approach is to train a separate 2D decoder to produce a high-resolution image from a low-resolution image or feature grid rendered from a NeRF backbone [43]. During the past six months, more than a dozen methods following this paradigm have appeared (e.g., [6,15,71,47,79,35,75,23,72,78,64]). While using the upsampler allows the model to scale to high resolution, it comes with two severe limitations: 1) it breaks the multi-view consistency of a generated object, i.e.…”
Section: Introduction (mentioning)
confidence: 99%
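
As a rough illustration of the two-stage pattern this excerpt describes, here is a minimal sketch under assumed shapes and module names (nothing below comes from the cited methods): a NeRF-style backbone is rendered at low resolution into a feature grid, and a separate 2D convolutional decoder upsamples that grid into the final image.

```python
import torch
import torch.nn as nn

class UpsamplerDecoder(nn.Module):
    """2D decoder that turns a low-res rendered feature grid into a high-res image.

    Stand-in architecture; real upsamplers are heavier and style-conditioned.
    """
    def __init__(self, in_channels=32, hidden=64, upsample_steps=2):
        super().__init__()
        layers, ch = [], in_channels
        for _ in range(upsample_steps):  # each step doubles the spatial resolution
            layers += [
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(ch, hidden, kernel_size=3, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ]
            ch = hidden
        layers.append(nn.Conv2d(ch, 3, kernel_size=1))  # project to RGB
        self.net = nn.Sequential(*layers)

    def forward(self, feature_grid):
        return self.net(feature_grid)

# Stand-in for a feature grid volume-rendered from a NeRF backbone at 64x64.
low_res_features = torch.randn(1, 32, 64, 64)
image = UpsamplerDecoder()(low_res_features)  # -> (1, 3, 256, 256)
print(image.shape)
```

Because the decoder operates purely in 2D on each rendered view, nothing ties its outputs together across viewpoints, which is exactly the multi-view consistency limitation the excerpt goes on to describe.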