SIGGRAPH Asia 2019 Posters
DOI: 10.1145/3355056.3364554

Rendering Point Clouds with Compute Shaders

Figure 1: Point-based rendering via compute shaders. (a) Regular OpenGL z-fighting. (b) 40-bit integer depth buffer, providing higher depth precision. (c) Regular rasterization, up to ten times faster than the OpenGL rasterizer. (d) High-quality splatting, two to three times faster.

Abstract: We propose a compute-shader-based point cloud rasterizer with up to 10 times higher performance than classic point-based rendering with the GL_POINTS primitive. In addition to that, …
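The abstract suggests a rasterizer that replaces the fixed-function depth test with atomic operations on a wide integer buffer. The GLSL compute shader below is a minimal sketch of one such approach, not the authors' code: the buffer layout, the uniform names (uViewProj, uResolution, uNumPoints), and the reliance on GL_ARB_gpu_shader_int64 / GL_NV_shader_atomic_int64 for 64-bit buffer atomics are all assumptions made for illustration. Each invocation projects one point and performs the depth test and color write with a single 64-bit atomicMin on a packed depth+color word.

#version 450
#extension GL_ARB_gpu_shader_int64 : require   // 64-bit integer types (assumed available)
#extension GL_NV_shader_atomic_int64 : require // 64-bit buffer atomics (vendor-specific assumption)

layout(local_size_x = 128) in;

struct Point { float x, y, z; uint rgba; };

// Hypothetical layout: one 64-bit word per pixel, cleared to
// 0xFFFFFFFFFFFFFFFFUL before each frame; a separate resolve pass
// copies the low 32 color bits into a displayable texture.
layout(std430, binding = 0) readonly buffer Points      { Point    points[]; };
layout(std430, binding = 1) buffer          Framebuffer { uint64_t pixels[]; };

uniform mat4  uViewProj;    // combined view-projection matrix
uniform ivec2 uResolution;  // framebuffer size in pixels
uniform uint  uNumPoints;   // dispatch ceil(uNumPoints / 128.0) workgroups

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uNumPoints) return;

    Point p    = points[i];
    vec4  clip = uViewProj * vec4(p.x, p.y, p.z, 1.0);
    if (clip.w <= 0.0) return;                      // behind the camera

    vec3 ndc = clip.xyz / clip.w;
    if (ndc.x < -1.0 || ndc.x >= 1.0 ||
        ndc.y < -1.0 || ndc.y >= 1.0) return;       // outside the viewport

    ivec2 px  = ivec2((ndc.xy * 0.5 + 0.5) * vec2(uResolution));
    uint  idx = uint(px.y) * uint(uResolution.x) + uint(px.x);

    // Positive floats compare like unsigned integers, so placing the
    // view-space depth bits in the high word lets a single atomicMin
    // perform the depth test and the color write at once.
    uint64_t word = (uint64_t(floatBitsToUint(clip.w)) << 32) | uint64_t(p.rgba);
    atomicMin(pixels[idx], word);
}

The 40-bit integer depth buffer named in the abstract would instead quantize depth to 40 bits and leave 24 for color; the 32/32 split above is simply the easiest variant to write down.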

Cited by 24 publications (25 citation statements). References 2 publications.
“…In this framework, the comparison between two models is done by means of integrated links. Another interesting open-source solution for rendering complex 3D scenarios is the Potree framework, developed by the Institute of Computer Graphics and Algorithms [42]. This tool, based on WebGL technology, enables rendering point clouds through an octree visualization system [42].…”
Section: 3D Visualization Methodology (mentioning)
confidence: 99%
“…The georeferencing accuracy, evaluated for each hyperspectral image by collecting 10 GCPs per scene, extracting their real-world coordinates from the pseudo-orthophoto, and calculating a root mean square error (RMSE) in the X, Y, and Z directions, is between 4 and 28 cm (Table 7). The integrative 3D datasets can be viewed and evaluated under [109] in a browser (Chrome, Firefox, Safari on desktop PCs and mobile devices) online using the WebGL-based Potree viewer [110]. The viewer allows for rendering of the 3D datasets (SfM point cloud, terrestrial VNIR-SWIR MWL, terrestrial LWIR LSU, and UAV-based MWL hyperclouds), as well as measurements of distance, area, and height profiles.…”
Section: 3D Integration (mentioning)
confidence: 99%
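For reference, the per-axis RMSE named in this excerpt takes the standard form over the n = 10 ground control points per scene (shown for X; Y and Z are analogous), where $\hat{X}_i$ is the coordinate measured in the georeferenced image and $X_i$ the reference coordinate from the pseudo-orthophoto:

$$\mathrm{RMSE}_X = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \hat{X}_i - X_i \right)^2}, \qquad n = 10.$$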
“…External memory algorithms as a means to render arbitrarily large 3D point clouds were initially introduced by Rusinkiewicz and Levoy (2000) and have since been adopted by numerous authors (Martinez-Rubi et al., 2016; Richter et al., 2015). Visual optimization techniques for 3D point clouds aim to reduce overdraw and underdraw alike, either preventing such visual artifacts by rendering points with an appropriate size and orientation (Schütz and Wimmer, 2015; Preiner et al., 2012) or eliminating them via image-based post-processing (Dobrev et al., 2010; Rosenthal and Linsen, 2008). While those approaches usually focus on non-immersive applications, Discher et al. (2018a) and Schütz (2016) discuss specific challenges and solutions regarding the visualization of 3D point clouds in VR environments.…”
Section: Related Work (mentioning)
confidence: 99%
“…To ensure efficient subset retrieval, the data is hierarchically subdivided in a pre-processing step using spatial data structures such as octrees (Elseberg et al., 2013) or kd-trees (Goswami et al., 2013). In combination with web-based rendering concepts, these external memory algorithms allow the interactive inspection and visualization of arbitrarily large 3D point clouds on a multitude of devices featuring vastly different CPU and GPU capabilities (Discher et al., 2018b; Schütz and Wimmer, 2015; Butler et al., 2014).…”
Section: Introduction (mentioning)
confidence: 99%