ADOP: Approximate Differentiable One-Pixel Point Rendering
Preprint, 2021
DOI: 10.48550/arxiv.2110.06635

Cited by 20 publications (31 citation statements)
References 0 publications
“…Recent advances in image-based rendering include learning-based disparity estimation [9,16], blending [14,39], and image synthesis via 2D CNNs [39,40]. Other commonly used scene representations include voxels [25,47], 3D point clouds [1,18,41] or camera-centric layered 3D representations, such as multi-plane images [3,29,53,57,64] or layered depth images [13,19,43,54].…”
Section: Related Work
confidence: 99%
“…Multi-plane images [2,5,35,57,77] are good at modeling continuous shapes similar to volumetric representations, but are limited to a constrained set of viewing angles. Point cloud representations [1,20,34,50,66] can model various geometries with high fidelity. When used for appearance modeling, however, point-based representations might suffer from their innate sparseness which might result in holes.…”
Section: Related Work
confidence: 99%
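The sparseness issue raised in the excerpt above can be illustrated with a minimal toy sketch: when each 3D point is splatted to exactly one pixel, pixels that no point lands on stay empty. This is a hypothetical NumPy illustration of the hole problem, not ADOP's actual renderer; all names and parameters here are invented for the example.

```python
import numpy as np

# Toy one-pixel point splatting: project a sparse point cloud onto a
# small image with a z-buffer, then count uncovered pixels ("holes").
rng = np.random.default_rng(0)
H = W = 32
points = rng.uniform(-1, 1, size=(500, 3))   # random point cloud in [-1, 1]^3
colors = rng.uniform(0, 1, size=(500, 3))

# Orthographic projection: x, y map to pixel coordinates, z is depth.
px = ((points[:, 0] + 1) / 2 * (W - 1)).astype(int)
py = ((points[:, 1] + 1) / 2 * (H - 1)).astype(int)

image = np.zeros((H, W, 3))
depth = np.full((H, W), np.inf)
for i in range(len(points)):
    z = points[i, 2]
    if z < depth[py[i], px[i]]:              # z-buffer: keep the nearest point
        depth[py[i], px[i]] = z
        image[py[i], px[i]] = colors[i]

holes = int(np.sum(np.isinf(depth)))         # pixels no point projected to
print(f"{holes} of {H * W} pixels are holes")
```

With 500 points and 1024 pixels, a large fraction of pixels receive no point at all, which is why point-based pipelines typically follow splatting with hole-filling or a neural rendering network.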
“…They have achieved impressive results in 3D scene acquisition and rendering from multi-view photometric information. Compared to other geometric representations like multi-plane images [2,5,35,57,77] or point-based representations [1,20,34,50,66], volumetric representations support a larger range of camera motion for view extrapolation and do not suffer from holes when rendering dynamic geometry like point-based representations. Furthermore, they can be learned from multi-view RGB data using differentiable volumetric ray marching, without additional MVS methods.…”
Section: Introduction
confidence: 99%
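The differentiable volumetric ray marching mentioned in the excerpt above can be sketched with a standard front-to-back alpha-compositing rule along a single ray. This is a generic NeRF-style illustration under assumed names (`composite`, `sigmas`, `deltas`), not code from any of the cited papers.

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """Front-to-back alpha compositing of samples along one ray.

    sigmas: per-sample volume densities, shape (N,)
    colors: per-sample RGB values, shape (N, 3)
    deltas: distances between consecutive samples, shape (N,)
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)              # opacity of each sample
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                             # contribution per sample
    rgb = (weights[:, None] * colors).sum(axis=0)        # composited color
    return rgb, weights

# Four samples along one ray: mostly empty space with one dense sample.
sigmas = np.array([0.0, 0.5, 5.0, 0.1])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
deltas = np.full(4, 0.25)
rgb, weights = composite(sigmas, colors, deltas)
print(rgb, weights.sum())
```

Every operation here is smooth in `sigmas` and `colors`, which is what lets such representations be trained end to end from multi-view RGB images by gradient descent.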
“…Representing shape and appearance of scenes using neural networks has recently gained immense popularity. Explicit representation-based methods use differentiable rendering to learn 3D representation such as point clouds [1,41,58], meshes [51] or voxels [29,44] for the scene. Implicit representation-based methods represent scenes using continuous coordinatebased functions such as signed distance fields [2,8,17,24,60] or occupancy fields [30,35].…”
Section: Related Work
confidence: 99%