2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00810

Extracting Triangular 3D Models, Materials, and Lighting From Images

Cited by 210 publications (74 citation statements)
References 38 publications
“…One of the key differences between our tetrahedral grid representation and that of DefTet [8] and DMTet [34,44] is how we predict and utilize displacements. In this work, our tetrahedral grid contains a vector-valued displacement to the underlying surface at each tetrahedron's centroid, which we refer to as a deformation field.…”
Section: Related Work
mentioning confidence: 99%
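The excerpt above describes a concrete data layout: one vector-valued displacement stored at each tetrahedron's centroid, pointing toward the underlying surface. As a rough illustration only (the class and method names below are hypothetical, not taken from any of the cited papers), such a deformation field might look like this in PyTorch:

```python
import torch

class DeformationField(torch.nn.Module):
    """Per-tetrahedron deformation field over a fixed tetrahedral grid."""

    def __init__(self, verts, tets):
        # verts: (V, 3) float tensor of grid vertex positions
        # tets:  (T, 4) long tensor of vertex indices, one row per tetrahedron
        super().__init__()
        self.register_buffer("verts", verts)
        self.register_buffer("tets", tets)
        # one learnable 3D displacement vector per tetrahedron
        self.displacement = torch.nn.Parameter(torch.zeros(tets.shape[0], 3))

    def centroids(self):
        # centroid of each tetrahedron = mean of its four vertices, (T, 3)
        return self.verts[self.tets].mean(dim=1)

    def surface_points(self):
        # displaced centroids approximate points on the underlying surface
        return self.centroids() + self.displacement
```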
“…While prior works apply the tetrahedral grid representation to the tasks of tetrahedral reconstruction (DefTet) and mesh super-resolution (DMTet), our method provides generative capabilities, a fundamentally different objective. TetGAN enables sampling novel shapes from noise, latent space interpolations, and shape editing, none of which are provided by DefTet/DMTet (nor by inverse rendering [34]). TetGAN achieves this using a novel CNN for tetrahedral meshes, with convolution/pooling blocks that are distinct from DefTet/DMTet components and tailored for the task of generation (inspired by 2D CNNs [23,24,31]).…”
Section: Related Work
mentioning confidence: 99%
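The excerpt contrasts TetGAN's convolution/pooling blocks with DefTet/DMTet components but does not spell the operators out. Purely as a hedged sketch of what a convolution over a tetrahedral grid could look like, the face-adjacency aggregation below is an assumption of this note, not TetGAN's published operator:

```python
import torch

class TetConv(torch.nn.Module):
    """Convolution over per-tetrahedron features via face adjacency (a guess)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # separate weights for a tet's own features and its neighbours' mean
        self.w_self = torch.nn.Linear(in_ch, out_ch)
        self.w_nbr = torch.nn.Linear(in_ch, out_ch)

    def forward(self, feats, nbrs):
        # feats: (T, C) per-tetrahedron features
        # nbrs:  (T, 4) indices of the four face-adjacent tetrahedra
        #        (boundary tets assumed padded with their own index)
        nbr_mean = feats[nbrs].mean(dim=1)  # (T, C)
        return torch.relu(self.w_self(feats) + self.w_nbr(nbr_mean))
```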
“…This post-processing flexibility is missing from prior works. More recently, Munkberg et al. [38] combine neural and traditional representations within a differentiable rendering framework to recover a triangle mesh, an SVBRDF texture, and an environment map, but their approach has only been demonstrated on isolated objects.…”
Section: Learning-based Materials Capture
mentioning confidence: 99%
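The excerpt summarizes the paper's pipeline: a triangle mesh, an SVBRDF texture, and an environment map recovered jointly through differentiable rendering. The schematic loop below conveys the idea; the toy `render` merely stands in for a real differentiable rasterizer, and all names are illustrative rather than the authors' code:

```python
import torch

def render(geometry_params, svbrdf, env_map, camera):
    # Toy differentiable stand-in: a real pipeline would extract a triangle
    # mesh from `geometry_params`, rasterize it from `camera`, and shade it
    # with the SVBRDF under `env_map` (e.g. using a differentiable rasterizer
    # such as nvdiffrast). Here we only mix the tensors so gradients flow and
    # the sketch runs; the output is env_map-shaped.
    return env_map * (1.0 + geometry_params.mean() + svbrdf.mean())

def fit(ref_images, cameras, geometry_params, svbrdf, env_map, iters=100):
    # the three unknowns are plain tensors created with requires_grad=True;
    # ref_images are assumed env_map-shaped to match the toy renderer
    opt = torch.optim.Adam([geometry_params, svbrdf, env_map], lr=1e-2)
    for _ in range(iters):
        opt.zero_grad()
        loss = sum(
            torch.nn.functional.l1_loss(
                render(geometry_params, svbrdf, env_map, cam), img)
            for img, cam in zip(ref_images, cameras))
        loss.backward()  # gradients reach geometry, materials, and lighting
        opt.step()
    return geometry_params, svbrdf, env_map
```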
“…Light estimation is one of the sub-tasks of inverse rendering. Most previous works ignore spatially-varying effects and predict a single environment map for the whole scene [Gardner et al. 2017; Munkberg et al. 2022; Sengupta et al. 2019]. Indoor scenes, however, exhibit strong spatial variation, so recent work explores spatially-varying lighting estimation for indoor scenes.…”
Section: Related Work
mentioning confidence: 99%
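The "single environment map" model the excerpt mentions makes incoming radiance a function of direction only, not position. A minimal sketch of such a lookup, assuming a latitude-longitude parameterization with y as the up axis (the convention here is an assumption):

```python
import numpy as np

def sample_envmap(env, d):
    # env: (H, W, 3) latitude-longitude environment map
    # d:   unit-length direction, shape (3,)
    u = 0.5 + np.arctan2(d[0], -d[2]) / (2.0 * np.pi)  # azimuth -> [0, 1)
    v = np.arccos(np.clip(d[1], -1.0, 1.0)) / np.pi    # polar angle -> [0, 1]
    h, w = env.shape[:2]
    # nearest-neighbour lookup; a spatially-varying model would additionally
    # condition this query on the 3D position being shaded
    return env[min(int(v * h), h - 1), min(int(u * w), w - 1)]
```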