2020 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv50981.2020.00055
Learning Implicit Surface Light Fields

Abstract: Implicit representations of 3D objects have recently achieved impressive results on learning-based 3D reconstruction tasks. While existing works use simple texture models to represent object appearance, photo-realistic image synthesis requires reasoning about the complex interplay of light, geometry and surface properties. In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field. In contrast to existing representations, our im…
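As a rough illustration of the idea in the abstract, the following is a minimal PyTorch sketch of an implicit surface light field: an MLP that maps a 3D surface point and a viewing direction, conditioned on a per-object latent code, to an RGB color. The class name `SurfaceLightFieldMLP`, the latent conditioning scheme, and all hyperparameters are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch (not the authors' architecture): an MLP that maps a surface
# point and a viewing direction, conditioned on a per-object latent code,
# to an RGB color -- the basic form of an implicit surface light field.
import torch
import torch.nn as nn

class SurfaceLightFieldMLP(nn.Module):  # hypothetical name
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        in_dim = 3 + 3 + latent_dim  # point (x, y, z) + view direction + latent code
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),      # RGB output
        )

    def forward(self, points, view_dirs, latent):
        # points, view_dirs: (B, N, 3); latent: (B, latent_dim)
        latent = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
        x = torch.cat([points, view_dirs, latent], dim=-1)
        return torch.sigmoid(self.net(x))  # colors in [0, 1]

# Usage: query colors for sampled surface points seen from given directions.
model = SurfaceLightFieldMLP()
pts = torch.rand(2, 1024, 3)                                      # surface samples
dirs = torch.nn.functional.normalize(torch.randn(2, 1024, 3), dim=-1)
z = torch.randn(2, 128)                                           # per-object appearance code
rgb = model(pts, dirs, z)                                         # (2, 1024, 3)
```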

Cited by 38 publications (15 citation statements)
References 41 publications
“…Datasets. We evaluate our method on two public datasets: Photoshapes [28,36] with 150K chairs rendered at 128×128 following the rendering protocol of [25] and Carla with 10K cars rendered at 256×256 using the Driving simulator [6,36]. Each object is rendered in a random view without providing any camera pose parameters.…”
Section: Methods
confidence: 99%
“…Neural Implicit Representations: Recently, neural implicit functions have emerged as an effective representation of 3D geometry [3,8,12,32,33,37,41,43,47,57] and appearance [23,25,34,38,39,40,47,49,52] as they represent 3D content continuously and without discretization while simultaneously having a small memory footprint. Most of these methods require 3D supervision.…”
Section: Related Work
confidence: 99%
“…Many recent methods employ continuous implicit functions parameterized with neural networks as 3D representations of geometry [3,8,12,32,33,37,41,43,47,57] or appearance [34,38,39,40,47,52,61]. These neural 3D representations have shown impressive performance on geometry reconstruction and novel view synthesis from multi-view images.…”
Section: Introduction
confidence: 99%
“…Coordinate-based representations have emerged as a powerful tool for overcoming the limitations of traditional discrete representations (e.g., images, meshes, voxelized volumes). The core idea is to train an MLP to map an input coordinate to the desired target value such as pixel color [26,32,45,60], signed distance [5,33], occupancy [28], volume density [30], or semantic labels [63]. Like existing coordinate-based representation approaches, our method also learns the mapping from an input coordinate (ray) to a target scene property (color).…”
Section: Related Work
confidence: 99%
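To make the "core idea" quoted above concrete, here is a minimal PyTorch sketch of a coordinate-based MLP that maps a 3D coordinate to one scalar target (an occupancy probability in this case). The class name `CoordinateMLP` and all hyperparameters are illustrative assumptions and do not correspond to the architecture of any particular cited work.

```python
# Illustrative sketch of a coordinate-based representation: a plain MLP that
# maps a 3D coordinate to an occupancy probability. The same pattern applies
# to other targets (color, signed distance, volume density, semantic labels).
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):  # hypothetical name
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):                  # coords: (N, 3)
        return torch.sigmoid(self.net(coords))  # occupancy in (0, 1)

# Querying the field at arbitrary points gives a continuous, low-memory
# representation; supervision can come from 3D labels or, via a renderer,
# from images alone.
occ = CoordinateMLP()(torch.rand(4096, 3))      # (4096, 1)
```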