2021 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv53792.2021.00104

FiG-NeRF: Figure-Ground Neural Radiance Fields for 3D Object Category Modelling

Cited by 44 publications (19 citation statements)
References 31 publications
“…from one or more images, with controllable shape, appearance, and environmental conditions, and the quality of the sampled images varies. Other approaches learn pose, shape, and texture from images for certain categories of objects [7,21], or interpolation, view synthesis, and segmentation of sampled category instances [42]. However, none of these approaches allow for the level of structure and material decomposition suitable for high-fidelity rendering and relighting.…”
Section: Related Work
confidence: 99%
“…This physics and machine learning framework has been highly effective, and subsequent works have added further physics-based priors such as reflectance models [88], [89], normal estimation [90], and shadow models [91] to enable better novel-view synthesis and 3D reconstruction. Moreover, these physics-based priors are now also used when training for classical computer vision tasks, such as object classification and segmentation, and show improved performance over purely data-driven techniques [92], [93]. Neural de-rendering has also been used for unsupervised representation learning, and has led to improved downstream accuracy over purely data-driven methods [94].…”
Section: Neural Rendering
confidence: 99%
“…As pixelNeRF leverages local image features to synthesize novel views, it is (in contrast to ours) trained on at least two views of the same instance. In the recent work FiG-NeRF [40], Xie et al. introduce a 2-component, deformable neural radiance field for jointly modeling object categories and a foreground/background segmentation. In [30], Ost et al. learn a scene graph to represent automotive data, enabling novel view synthesis.…”
Section: Related Work
confidence: 99%
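The excerpt above summarizes the figure-ground idea behind FiG-NeRF: a foreground (object) radiance field and a shared background field are composited along each ray, so a soft segmentation falls out of the same volume rendering. The sketch below is a minimal illustration of that general two-component compositing step, not the paper's actual implementation; all function and variable names (composite_two_component, sigma_fg, rgb_bg, etc.) are illustrative assumptions, and the deformation model is omitted.

```python
import torch

def composite_two_component(sigma_fg, rgb_fg, sigma_bg, rgb_bg, deltas):
    """Volume-render one ray whose density/colour come from two components
    (foreground object and background), a generic figure-ground formulation.

    sigma_*: (N,) per-sample densities
    rgb_*:   (N, 3) per-sample colours
    deltas:  (N,) distances between consecutive samples along the ray
    """
    sigma = sigma_fg + sigma_bg                       # total density per sample
    alpha = 1.0 - torch.exp(-sigma * deltas)          # opacity of each ray segment
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )                                                 # transmittance up to each sample
    weights = trans * alpha                           # standard NeRF quadrature weights

    # Each sample's colour is the density-weighted mix of the two components.
    w_fg = sigma_fg / (sigma + 1e-10)
    rgb = w_fg[:, None] * rgb_fg + (1.0 - w_fg)[:, None] * rgb_bg

    pixel_rgb = (weights[:, None] * rgb).sum(dim=0)   # rendered pixel colour
    fg_mask = (weights * w_fg).sum(dim=0)             # soft foreground probability
    return pixel_rgb, fg_mask
```

Because the foreground mask is just the foreground share of the rendering weights, supervising only the rendered colour already yields a per-pixel figure-ground decomposition as a by-product, which is the property the citing work highlights.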