2018
DOI: 10.1007/s00371-018-1550-6
Point-based rendering enhancement via deep learning

Cited by 30 publications (20 citation statements)
References 34 publications
“…At the point level (the lowest level), the geometry is sparse and defines the lowest possible geometric description. While this is convenient for point-based rendering [108], which can be enhanced through deep learning [109], leading to simpler and more efficient algorithms [110], geometric clustering covers the applications identified in Section 1. At the patch level, subsets of points are grouped to form small spatial conglomerates.…”
Section: Knowledge-base Structuration
confidence: 99%
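The point-level versus patch-level distinction in the quoted passage can be illustrated with a minimal sketch: bucketing a raw point cloud into patch-level "conglomerates" using a uniform voxel grid. The function name and cell size are illustrative assumptions, not taken from the cited work, which does not specify a grouping scheme.

```python
import numpy as np

def group_into_patches(points, cell_size=1.0):
    """Group an (N, 3) point cloud into patch-level conglomerates by
    bucketing points into a uniform voxel grid (one patch per
    occupied cell).  Returns a dict: cell index -> points in that cell."""
    cells = np.floor(points / cell_size).astype(np.int64)
    buckets = {}
    for row, cell in enumerate(map(tuple, cells)):
        buckets.setdefault(cell, []).append(row)
    return {cell: points[rows] for cell, rows in buckets.items()}

# Toy cloud: two tight clusters far apart -> two patches.
cloud = np.array([[0.0, 0.0, 0.0],
                  [0.1, 0.1, 0.0],
                  [5.0, 5.0, 5.0],
                  [5.1, 5.0, 5.0]])
patches = group_into_patches(cloud, cell_size=1.0)
print(len(patches))  # 2
```

A real pipeline would typically replace the uniform grid with k-nearest-neighbour or region-growing clustering, but the grid version already shows the point-to-patch aggregation step the quote describes.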
“…We would contend that it is relatively low, given the significant computing power required to achieve ultra-high frame rates. Despite this, rapid progression in graphics technology (e.g., deep-learning supported 3D graphics processing; Bui et al, 2018; Nalbach et al, 2017) means the capability for high-resolution, high frame rate rendering will soon be widely available, such that even small enhancements in vection will be easily attainable. However, the lack of improvement in subjective vection for high FPS displays suggests that values of around 15-45 FPS, around the economical frame rates of our study and others (Fujii et al, 2018), are optimal for inducing compelling self-motion, at least for the impoverished stimuli evaluated in these experiments.…”
Section: Discussion
confidence: 99%
“…A neural network solution for upsampling and filling a splatted point cloud was published in [106]. Instead of using a rendering network to learn a mapping from a point cloud to a final image, it used a GAN-based network to learn a post-processing step from an incomplete and low-frequency splatted rendering to a high-quality final frame.…”
Section: Point-based Neural Rendering
confidence: 99%
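The pipeline described in that last statement starts from a splatted rendering with holes, which a refinement network then completes. A minimal sketch of such a splatting pass (orthographic projection with a per-pixel z-buffer) is below; the resolution, coordinate convention, and function name are illustrative assumptions, and the GAN refinement stage itself is omitted.

```python
import numpy as np

def splat_points(points, colors, width=64, height=64):
    """Orthographically splat colored 3D points into an RGB image using
    a z-buffer.  Unhit pixels stay black: these are the holes a
    post-processing network would be trained to fill."""
    image = np.zeros((height, width, 3), dtype=np.float32)
    zbuf = np.full((height, width), np.inf, dtype=np.float32)
    # x, y in [0, 1] map to pixel coordinates; smaller z is closer.
    for (x, y, z), c in zip(points, colors):
        u = int(x * (width - 1))
        v = int(y * (height - 1))
        if 0 <= u < width and 0 <= v < height and z < zbuf[v, u]:
            zbuf[v, u] = z
            image[v, u] = c
    return image

pts = np.array([[0.5, 0.5, 1.0],   # far red point
                [0.5, 0.5, 0.2]])  # near green point, same pixel
cols = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
img = splat_points(pts, cols)
print(img[31, 31])  # the nearer green point wins the depth test
```

One-pixel splats like this leave most of the frame empty at low point densities, which is exactly why the cited approach treats the final image as a learned completion of the splatted input rather than rendering it directly.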