2019
DOI: 10.48550/arxiv.1906.08240

Neural Point-Based Graphics

Abstract: … as well as standard RGB cameras even in the presence of objects that are challenging for standard mesh-based modeling.

Cited by 35 publications (86 citation statements)
References 29 publications
“…The recent progress of differentiable neural rendering brings huge potential for 3D scene modeling and photorealistic novel view synthesis. Researchers explore various data representations to pursue better performance and characteristics, such as point-clouds [2,58,64], voxels [31], texture meshes [27,60] or implicit functions [7,33,34,36,43,63]. However, these methods…”
Section: Related Work
confidence: 99%
“…However, their work does not target real-time animation or dynamics, and the usage of a heavy U-Net for rendering the final result is not possible in our setting. Aliev et al [2] propose neural point-based graphics, in which the geometry is represented as a point cloud. Each point is associated with a deep feature, and a neural net computes pixel values based on splatted feature points.…”
Section: Neural Rendering
confidence: 99%
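The citation statement above summarizes the core idea: each 3D point carries a learned feature vector, those features are splatted into screen space, and a neural network then decodes them into pixel colors. A minimal illustrative sketch of the splatting stage is given below; this is an assumption-laden toy, not the paper's implementation — the function name `splat_features`, the pinhole intrinsics `K`, and the nearest-point z-buffer rule are all choices made here for illustration (the actual method uses multi-resolution splatting and a U-Net decoder).

```python
import numpy as np

def splat_features(points, features, K, image_size):
    """Toy splatting stage for point-based neural rendering (illustrative,
    not the paper's implementation).

    points   : (N, 3) array of 3D points in camera coordinates
    features : (N, C) array of per-point learned feature vectors
    K        : (3, 3) pinhole camera intrinsics (assumed layout)
    Returns an (H, W, C) screen-space feature image that a decoder
    network would subsequently map to RGB.
    """
    H, W = image_size
    C = features.shape[1]
    feat_img = np.zeros((H, W, C))
    depth = np.full((H, W), np.inf)  # z-buffer: nearest point wins
    for p, f in zip(points, features):
        x, y, z = p
        if z <= 0:  # behind the camera
            continue
        # Perspective projection to pixel coordinates.
        u = int(round(K[0, 0] * x / z + K[0, 2]))
        v = int(round(K[1, 1] * y / z + K[1, 2]))
        if 0 <= u < W and 0 <= v < H and z < depth[v, u]:
            depth[v, u] = z
            feat_img[v, u] = f  # closest point's feature occupies the pixel
    return feat_img
```

In the full method the resulting feature image is not displayed directly: it is fed to a rendering network that inpaints holes between sparse points and produces the final photorealistic image, which is why the per-pixel values here are features rather than colors.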
“…To solve this issue and scale the rendering to the number of persons in the VR telepresence, we should compute only the visible pixels, thus upper bounding the computation by the number of pixels of the display. Recent works in neural rendering such as deferred neural rendering [24], neural point-based graphics [2], and implicit differentiable rendering [27] use neural networks to compute pixel values in the screen space instead of the texture space, thus computing only visible pixels. However, in all these works, either a static scene is assumed, or the viewing distance and perspective are not expected to be entirely free in the 3D space.…”
Section: Introduction
confidence: 99%
“…The 3D representations are learned from 2D images via differentiable rendering networks. Convolutional neural networks are used to predict volumetric representations via 3D voxel-grid features [40,25,31,27,16,17], point clouds [1,49], textured meshes [20,23,44] and multi-plane images [11,55]. The learnt representations are projected by a 3D-to-2D operation to synthesize images.…”
Section: Related Work
confidence: 99%