Encoding normal vectors using optimized spherical coordinates
2012
DOI: 10.1016/j.cag.2012.03.017

Cited by 8 publications (8 citation statements)
References 12 publications
“…In this paper, we propose a compression scheme for triplets of single-precision numbers, which occur frequently in computational physics. This method extends the work of Smith et al. [20] to 3D vectors of arbitrary length. These techniques allow for unstructured data access and in-line compression and decompression, with approximate symmetry in compression/decompression times.…”
Section: Introduction (mentioning)
Confidence: 52%
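The excerpt above gives no implementation details, so the Python sketch below only shows the kind of magnitude-plus-direction split that extending a unit-vector encoder to vectors of arbitrary length might build on; the function names, the float32 choice, and the zero-vector handling are assumptions, not details taken from the citing paper or from Smith et al. [20].

```python
import numpy as np

def split_vec3(v):
    """Separate a float triplet of arbitrary length into a scalar magnitude
    and a unit direction. The direction can then be handed to any unit-vector
    encoder (e.g. the spherical-angle sketch after the next excerpt) while the
    magnitude is stored or quantized separately. Illustrative assumption only."""
    v = np.asarray(v, dtype=np.float32)
    mag = float(np.linalg.norm(v))
    direction = v / mag if mag > 0.0 else np.zeros(3, dtype=np.float32)
    return mag, direction

def merge_vec3(mag, direction):
    """Inverse of split_vec3: rescale the unit direction by the magnitude."""
    return mag * np.asarray(direction, dtype=np.float32)
```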
“…It has also been shown by Meyer et al. [19] that 51 bits are sufficient to losslessly represent unit vectors formed of three 32-bit floating-point numbers. More recent work by Smith et al. [20] applied lossy compression to this case with increased efficiency. The approach quantised 3D unit vectors by first transforming them into spherical coordinates and then discretising the two angles into bins.…”
Section: Introduction (mentioning)
Confidence: 99%
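As a concrete illustration of the approach described above, the sketch below quantises a unit vector by converting it to spherical coordinates and binning the two angles. Uniform bins and the 8-bit-per-angle widths are simplifying assumptions made here; the optimized angle distribution of Smith et al. [20] is not reproduced.

```python
import numpy as np

def encode_normal(n, theta_bits=8, phi_bits=8):
    """Quantize a unit vector to two uniformly binned spherical angles."""
    x, y, z = n / np.linalg.norm(n)            # guard against slightly non-unit input
    theta = np.arccos(np.clip(z, -1.0, 1.0))   # polar angle in [0, pi]
    phi = np.arctan2(y, x) % (2.0 * np.pi)     # azimuth in [0, 2*pi)
    i = min(int(theta / np.pi * 2**theta_bits), 2**theta_bits - 1)
    j = min(int(phi / (2.0 * np.pi) * 2**phi_bits), 2**phi_bits - 1)
    return i, j

def decode_normal(i, j, theta_bits=8, phi_bits=8):
    """Reconstruct a unit vector from the centres of the two angle bins."""
    theta = (i + 0.5) * np.pi / 2**theta_bits
    phi = (j + 0.5) * 2.0 * np.pi / 2**phi_bits
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# Round-trip example: angular error of one quantized normal, in degrees.
n = np.array([0.267, 0.534, 0.802])
i, j = encode_normal(n)
err = np.degrees(np.arccos(np.clip(np.dot(n / np.linalg.norm(n),
                                          decode_normal(i, j)), -1.0, 1.0)))
```

Because bins of equal angular width cover far less solid angle near the poles than near the equator, uniform binning spends codes unevenly over the sphere, which is presumably the inefficiency the "optimized spherical coordinates" of the cited paper address.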
“…For run-time access during rendering, we adopt a simple table-based quantization technique in which a preprocessed lookup table, indexed with a 16-bit unsigned integer, provides up to 65,536 (= 2^16) discrete unit normal samples. To fill the table, we use the normal encoding/decoding method of Smith et al. [24], which exhibits the lowest mean and maximum angular errors among a variety of 16-bit unit normal representation techniques [25]. Once a lookup table is prepared, the original 12-byte unit normal (n_x, n_y, n_z) of a vertex is transformed into a two-byte index whose corresponding direction is closest to that of the original vector.…”
Section: Quantization of Vertex Attributes (mentioning)
Confidence: 99%
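A minimal sketch of this table-based quantization is given below. The Fibonacci-sphere construction used to populate the table and the brute-force nearest-neighbour search are placeholders chosen here for self-containment; the cited work fills the table with the encoding of Smith et al. [24] instead.

```python
import numpy as np

def build_direction_table(bits=16):
    """Build a 2**bits-entry table of unit directions. A Fibonacci sphere is
    used purely as a self-contained placeholder; the cited work derives the
    entries from the encoding of Smith et al. [24]."""
    n = 2**bits
    k = np.arange(n)
    z = 1.0 - (2.0 * k + 1.0) / n              # evenly spaced heights in [-1, 1]
    phi = k * np.pi * (3.0 - np.sqrt(5.0))     # golden-angle azimuth increments
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

TABLE = build_direction_table()                # 65,536 rows of (x, y, z)

def quantize_normal(normal):
    """Map a 12-byte float normal to the 2-byte index of the closest entry."""
    n = np.asarray(normal, dtype=np.float64)
    n /= np.linalg.norm(n)
    return np.uint16(np.argmax(TABLE @ n))     # max dot product = min angle

def dequantize_normal(index):
    """Decoding is a single table lookup."""
    return TABLE[index]
```

The exhaustive dot-product search over 65,536 entries is acceptable as a preprocessing step, while decoding at render time reduces to one table lookup, matching the run-time access pattern described in the excerpt.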
“…A deep neural network, also based on Octree data, was then introduced in [12]. Notably, a fast compression algorithm was developed in [20] considering spherical voxels, while the Moving Picture Experts Group (MPEG) has released specifications for the video-based (V-PCC) and the geometry-based (G-PCC) point cloud compression standards [21].…”
Section: B. 3D Data Compression (mentioning)
Confidence: 99%
“…[Table I: List of Octree compression profiles according to the PCL [23].] Then, all non-zero bytes are saved in breadth-first order [20]. PCL offers 12 different resolution profiles (corresponding to 12 different levels of compression), which can be grouped into 3 categories, i.e., HIGH, MEDIUM, and LOW, as reported in Table I.…”
Section: B. 3D Data Compression (mentioning)
Confidence: 99%
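The breadth-first byte stream mentioned in the excerpt can be illustrated with the sketch below, which emits one occupancy byte per octree branch node; it is a simplified stand-in rather than PCL's actual octree compressor, and the small example tree at the end is invented.

```python
from collections import deque

class OctreeNode:
    """Minimal octree node: 'children' holds up to eight child nodes (or None)."""
    def __init__(self):
        self.children = [None] * 8

def serialize_breadth_first(root):
    """Walk the octree breadth-first and emit one occupancy byte per branch
    node (bit k set when child k exists). With a fixed maximum depth the
    leaves are implicit, so only the non-zero branch bytes are stored."""
    stream = bytearray()
    queue = deque([root])
    while queue:
        node = queue.popleft()
        byte = 0
        for k, child in enumerate(node.children):
            if child is not None:
                byte |= 1 << k
                queue.append(child)
        if byte:                               # leaf nodes contribute no byte
            stream.append(byte)
    return bytes(stream)

# Invented example: root occupied in octants 0 and 5, octant 0 subdivided in octant 3.
root = OctreeNode()
root.children[0] = OctreeNode()
root.children[5] = OctreeNode()
root.children[0].children[3] = OctreeNode()
encoded = serialize_breadth_first(root)        # two bytes: 0x21 (root), 0x08 (octant 0)
```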