2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
DOI: 10.1109/icmew46912.2020.9106022
Deep Learning-Based Point Cloud Geometry Coding: RD Control Through Implicit and Explicit Quantization

Cited by 26 publications (18 citation statements)
References 13 publications
“…Other approaches were presented by Guarda et al. in [108]. Initially, the authors proposed in [109] a simple convolutional network where the latent space is quantized in order to perform entropy coding.…”
Section: Compression
confidence: 99%
“…Initially, the authors proposed in [109] a simple convolutional network where the latent space is quantized in order to perform entropy coding. Later, in [108], the approach was improved by implementing the hyperpriors technique [95] and by proposing an implicit/explicit quantization framework (see Figure 18). In implicit quantization, a deep learning model is optimized to minimize a given rate–distortion tradeoff, while in explicit quantization the latent space is quantized with different step values depending on the required quality/rate tradeoff.…”
Section: Compression
confidence: 99%
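The implicit/explicit distinction described in the excerpt above can be sketched in a few lines: implicit quantization is baked into training via the rate–distortion objective, whereas explicit quantization rounds the latent to a uniform grid whose step size is chosen at coding time. A minimal illustration, assuming a NumPy latent vector (the function name and step values are illustrative, not from the paper):

```python
import numpy as np

def explicit_quantize(latent, step):
    """Explicit quantization: round the latent onto a uniform grid.

    The step size is selected per target quality/rate point, so a single
    trained model can serve several rates without retraining.
    """
    return np.round(latent / step) * step

# One latent vector, two rate points obtained by varying the step:
latent = np.array([0.27, -1.34, 2.05, 0.49])
fine = explicit_quantize(latent, 0.25)   # finer grid -> higher rate, lower distortion
coarse = explicit_quantize(latent, 1.0)  # coarser grid -> lower rate, higher distortion
```

Varying `step` at encoding time is what gives the explicit scheme its RD control; the implicit scheme instead fixes the tradeoff through the loss weight used during training.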
“…Most of the learning-based models search for an optimum nonlinear transformation in terms of the rate–distortion trade-off during training time by using an appropriate differentiable objective function. For instance, the estimated entropy of a quantized latent vector can be used along with the reconstruction loss in an objective function [32,33]. Within the image domain, convolutional neural networks (CNNs) are used to extract the translation-invariant features that encapsulate spatial correlations among neighboring pixels.…”
Section: Learning-based Compression
confidence: 99%
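A minimal sketch of the objective described above, combining reconstruction loss with an entropy-based rate term. Here the empirical entropy of the quantized latent stands in for a learned entropy model, and the function name and λ value are illustrative assumptions, not from the cited works:

```python
import numpy as np

def rd_loss(x, x_hat, y_q, lam):
    """Rate-distortion objective: MSE distortion plus lambda times an
    entropy estimate (bits per symbol) of the quantized latent y_q."""
    distortion = np.mean((x - x_hat) ** 2)
    _, counts = np.unique(y_q, return_counts=True)
    p = counts / counts.sum()
    rate = -np.sum(p * np.log2(p))  # Shannon entropy, bits/symbol
    return distortion + lam * rate

# Toy usage: perfect reconstruction, two equiprobable latent symbols.
x = np.zeros(4)
x_hat = np.zeros(4)
y_q = np.array([0, 1, 0, 1])
loss = rd_loss(x, x_hat, y_q, lam=0.1)  # distortion 0, rate 1 bit -> 0.1
```

In actual codecs the rate term comes from a differentiable entropy model (so gradients can flow), and λ selects the operating point on the RD curve, which is exactly the "implicit" quantization control discussed earlier.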
“…The impact of several parameters added to the initial version of the former network is evaluated in [13]. In [14], a rate-distortion performance analysis is conducted on the latent space of [12], which is enriched with a hyper-prior and the possibility of explicit quantization in [15]. In [16], a deeper auto-encoding architecture is proposed, based on 3D convolution layers stacked with Voxception-ResNet structures and a hyper-prior.…”
Section: Related Work
confidence: 99%