2022
DOI: 10.1007/978-3-031-19781-9_35

Relighting4D: Neural Relightable Human from Videos

Cited by 25 publications (6 citation statements)
References 57 publications
“…Modeling surface materials and environment maps is already well‐explored for static scenes [ZXY*23, JLX*23b, ZSD*21, SDZ*21, BBJ*21], albeit the use of data priors is minimal, and most of the methods rely on a sufficiently large number of views to perform a per‐scene reconstruction. For non‐rigid scenes, methods that do intrinsic decomposition are based on human‐specific templates [ICN*23, CL22, ZYW*23, BLS*21, LMM*22, TAL*07, WSVT13, LWS*13]. A few early approaches exist for general non‐rigid scenes [WZN*14,GLD*19] but investigation with the new neural implicit representation paradigms is still missing.…”
Section: Remaining Challenges and Discussion (mentioning)
confidence: 99%
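For context on the intrinsic decomposition the statement above refers to, the sketch below illustrates the usual multiplicative image-formation model (image = albedo × shading) in Python/NumPy. The variable names and the simple Lambertian shading term are illustrative assumptions only, not the pipeline of any cited method.

```python
import numpy as np

def lambertian_shading(normals, light_dir):
    """Per-pixel shading from unit surface normals (H, W, 3) and a unit light direction (3,)."""
    return np.clip(normals @ light_dir, 0.0, 1.0)

def compose_image(albedo, shading):
    """Intrinsic image model: image = albedo * shading (applied per channel)."""
    return albedo * shading[..., None]

# Illustrative usage with random data (shapes are assumptions).
H, W = 64, 64
albedo = np.random.rand(H, W, 3)                      # reflectance
normals = np.random.randn(H, W, 3)
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
light = np.array([0.0, 0.0, 1.0])                     # camera-facing directional light
image = compose_image(albedo, lambertian_shading(normals, light))
```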
“…We measure the MSE and SSIM metrics on the right, left, and two-hand sequences. The result shows that our method significantly outperforms the physically-based rendering baseline based on Relighting4D [6].…”
Section: Runtime Analysis (mentioning)
confidence: 92%
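As a generic reference for the MSE and SSIM metrics quoted above, the snippet below computes both between a rendered and a ground-truth image using NumPy and scikit-image (assuming a version that supports `channel_axis`); it is a sketch, not the evaluation code of the cited work.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(pred, target):
    """Mean squared error over all pixels and channels."""
    return float(np.mean((pred - target) ** 2))

def ssim(pred, target):
    """Structural similarity for float RGB images in [0, 1]."""
    return structural_similarity(pred, target, channel_axis=-1, data_range=1.0)

# Illustrative usage with random float images (assumed to lie in [0, 1]).
gt = np.random.rand(128, 128, 3)
render = np.clip(gt + 0.05 * np.random.randn(128, 128, 3), 0.0, 1.0)
print(f"MSE:  {mse(render, gt):.5f}")
print(f"SSIM: {ssim(render, gt):.4f}")
```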
“…Compared to eyes, hands exhibit significantly more diverse pose variations, making explicit visibility incorporation essential. Relighting4D [6] learns relightable materials of an articulated human under a single unknown illumination, but the fidelity of relighting is limited by the expressiveness of their parametric BRDF model. In contrast to these methods, our approach enables relighting of articulated hand models that can be animated with a wide range of poses.…”
Section: Model-based Human Relighting (mentioning)
confidence: 99%
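To make the "parametric BRDF" point above concrete, here is a minimal sketch of shading a surface point with a small analytic BRDF (a Lambertian diffuse lobe plus a Blinn-Phong-style specular lobe) under a single point light. The parameterization (albedo, specular weight, shininess) is an illustrative assumption and is far simpler than the models used in the cited papers; its small number of parameters is exactly what limits the range of appearances such a model can reproduce.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade_point(albedo, k_s, shininess, n, p, light_pos, cam_pos, light_rgb):
    """Shade one surface point with a simple parametric BRDF.

    albedo:    (3,) diffuse reflectance
    k_s:       scalar specular weight
    shininess: scalar Blinn-Phong exponent
    n, p:      (3,) surface normal and position
    """
    l = normalize(light_pos - p)                 # direction to light
    v = normalize(cam_pos - p)                   # direction to camera
    h = normalize(l + v)                         # half vector
    cos_theta = max(float(n @ l), 0.0)
    diffuse = albedo / np.pi                     # Lambertian lobe
    specular = k_s * max(float(n @ h), 0.0) ** shininess
    return (diffuse + specular) * light_rgb * cos_theta

# Illustrative usage with made-up material and light parameters.
rgb = shade_point(
    albedo=np.array([0.6, 0.4, 0.3]), k_s=0.2, shininess=64.0,
    n=np.array([0.0, 0.0, 1.0]), p=np.zeros(3),
    light_pos=np.array([1.0, 1.0, 2.0]), cam_pos=np.array([0.0, 0.0, 3.0]),
    light_rgb=np.ones(3),
)
print(rgb)
```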
“…Invertible Neural Networks (INNs) (Dinh, Krueger, and Bengio 2015; Dinh, Sohl-Dickstein, and Bengio 2017; Behrmann et al. 2019; Chen et al. 2018; Kingma and Dhariwal 2018) are capable of performing invertible transformations between the input and output space. They are widely used in generative models like Normalizing Flows (Kobyzev, Prince, and Brubaker 2020).…”
Section: Invertible Neural Network (mentioning)
confidence: 99%
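The statement above can be illustrated with an affine coupling layer in the style of RealNVP (Dinh, Sohl-Dickstein, and Bengio 2017): half of the input passes through unchanged, while the other half is scaled and shifted by functions of the untouched half, which makes the transformation exactly invertible. The sketch below uses NumPy with hypothetical tiny linear "networks" for the scale and shift; it is a didactic example, not code from any cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny "networks": fixed random linear maps for scale and shift.
D = 4                       # feature dimension, split into two halves of D // 2
W_s = 0.1 * rng.standard_normal((D // 2, D // 2))
W_t = 0.1 * rng.standard_normal((D // 2, D // 2))

def scale_and_shift(x1):
    """Predict log-scale s and shift t from the untouched half x1."""
    return np.tanh(x1 @ W_s), x1 @ W_t

def coupling_forward(x):
    """y1 = x1;  y2 = x2 * exp(s(x1)) + t(x1)."""
    x1, x2 = x[: D // 2], x[D // 2 :]
    s, t = scale_and_shift(x1)
    return np.concatenate([x1, x2 * np.exp(s) + t])

def coupling_inverse(y):
    """Exact inverse: x2 = (y2 - t(y1)) * exp(-s(y1))."""
    y1, y2 = y[: D // 2], y[D // 2 :]
    s, t = scale_and_shift(y1)
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

x = rng.standard_normal(D)
assert np.allclose(coupling_inverse(coupling_forward(x)), x)  # exact invertibility
```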