2022
DOI: 10.1145/3528223.3530122
ADOP

Abstract: In this paper we present ADOP, a novel point-based, differentiable neural rendering pipeline. Like other neural renderers, our system takes as input calibrated camera images and a proxy geometry of the scene, in our case a point cloud. To generate a novel view, the point cloud is rasterized with learned feature vectors as colors and a deep neural network fills the remaining holes and shades each output pixel. The rasterizer renders points as one-pixel splats, which makes it very fast and allows us to compute g…
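The one-pixel splat rasterization described in the abstract can be sketched as follows. This is a minimal illustrative version, not the paper's implementation: the function name `rasterize_one_pixel`, the camera convention, and the z-buffer loop are assumptions; ADOP's actual rasterizer is a fast GPU kernel.

```python
# Illustrative sketch: project each 3D point into a single pixel of a
# feature image using a z-buffer; pixels no point lands in remain as
# "holes" that a neural network would later inpaint and shade.
import numpy as np

def rasterize_one_pixel(points, features, K, R, t, H, W):
    """Splat each 3D point into one pixel of an H x W feature image.

    points:   (N, 3) world-space positions
    features: (N, C) learned per-point feature vectors
    K:        (3, 3) camera intrinsics
    R, t:     world-to-camera rotation (3, 3) and translation (3,)
    """
    C = features.shape[1]
    image = np.zeros((H, W, C))       # feature image; holes stay zero
    depth = np.full((H, W), np.inf)   # z-buffer

    cam = points @ R.T + t            # world -> camera space
    z = cam[:, 2]
    proj = cam @ K.T                  # perspective projection
    u = np.round(proj[:, 0] / z).astype(int)
    v = np.round(proj[:, 1] / z).astype(int)

    for i in range(len(points)):
        if z[i] <= 0:                 # behind the camera
            continue
        if 0 <= u[i] < W and 0 <= v[i] < H and z[i] < depth[v[i], u[i]]:
            depth[v[i], u[i]] = z[i]  # nearest point wins the pixel
            image[v[i], u[i]] = features[i]
    return image, depth
```

Because each point covers exactly one pixel, the splatting step is cheap and simple to differentiate; the cost of filling the resulting holes is shifted onto the neural network.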

Cited by 87 publications (28 citation statements)
References 55 publications
“…In Neural Point‐based Graphics [ASK*20a] and SynSin [WGSJ20], learnable features are attached to the points that can store rich information about the appearance and shape of the actual surface. In ADOP [RFS21a] these learnable features are interpreted by an MLP which can account for view‐dependent effects. Note that instead of storing explicitly features for specific points, one can also use an MLP to predict the features for the discrete positions.…”
Section: Fundamentals of Neural Rendering
confidence: 99%
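The idea of per-point learnable features decoded by an MLP for view-dependent effects, as quoted above, can be sketched as a tiny feed-forward network. The two-layer architecture, layer sizes, and function name `shade` are illustrative assumptions, not the network from ADOP or Neural Point-based Graphics.

```python
# Illustrative sketch: a small MLP maps a learned per-point feature
# vector plus the viewing direction to an RGB color, so the same point
# can be shaded differently depending on where it is viewed from.
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM, VIEW_DIM, HIDDEN = 8, 3, 16   # assumed sizes
W1 = rng.normal(size=(FEAT_DIM + VIEW_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, 3)) * 0.1
b2 = np.zeros(3)

def shade(feature, view_dir):
    """Map (learned feature, unit view direction) -> RGB in (0, 1)."""
    x = np.concatenate([feature, view_dir])
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid -> valid color

rgb = shade(rng.normal(size=FEAT_DIM), np.array([0.0, 0.0, 1.0]))
```

In training, the weights and the per-point features would be optimized jointly against the calibrated input images; here they are random only to keep the sketch self-contained.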
“…Thies et al [36] incorporate neural textures into traditional mesh rasterization pipeline and use a CNN-based neural renderer to enable high quality NVS. Another line of approaches [2,18,27,31,53] follow a similar pipeline but use point as the surface primitive and directly featurize the surface in 3D. Recently, 3D Gaussian Splatting [16] demonstrates great success in terms of rendering quality and efficiency with a highly flexible point-based representation.…”
Section: Related Work
confidence: 99%
“…Secondly, there lacks a suitable representation simultaneously possessing global robustness and local expressivity. Formulations that favor the former typically rely on an imperfect surface proxy reconstructed from multiview stereo (MVS) [10,15,32,46], and then featurize the proxy on either the parametrized 2D UV map [23,36,39] or the 3D surface [2,16,18,27,31,45,53]. While being global-structure-aware, such representations typically struggle to represent the local intricate details.…”
Section: Introduction
confidence: 99%
“…In contrast to implicit rendering, point cloud rendering [1,6,13,18,33,36,59] is a promising editable rendering model. On the one hand, explicit 3D representations are better for interactive editing.…”
Section: Introduction
confidence: 99%