2021
DOI: 10.1109/tcsvt.2021.3072202
Quantization and Entropy Coding in the Versatile Video Coding (VVC) Standard

Abstract: The paper provides an overview of the quantization and entropy coding methods in the Versatile Video Coding (VVC) standard. Special focus is laid on techniques that improve coding efficiency relative to the methods included in the High Efficiency Video Coding (HEVC) standard: The inclusion of trellis-coded quantization, the advanced context modeling for entropy coding of transform coefficient levels, the arithmetic coding engine with multi-hypothesis probability estimation, and the joint coding of chroma resid…
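The trellis-coded (dependent) quantization mentioned in the abstract pairs two scalar quantizers with a small state machine: which quantizer applies to each coefficient depends on the parities of previously coded levels. The sketch below illustrates decoder-side reconstruction under this idea; the state-transition table and reconstruction formulas follow the commonly described design but are an illustrative simplification, not the normative text.

```python
# Illustrative sketch of dependent (trellis-coded) quantization
# reconstruction, assuming a 4-state machine and two interleaved scalar
# quantizers Q0/Q1. The transition table and reconstruction rules below
# are a simplification for exposition, not the normative VVC text.

# NEXT_STATE[state][parity_of_level] -> next state
NEXT_STATE = [(0, 2), (2, 0), (1, 3), (3, 1)]

def reconstruct(levels, delta):
    """Map transmitted integer levels back to reconstructed values,
    tracking the quantizer state along the coefficient scan."""
    state = 0
    out = []
    for k in levels:
        if state < 2:   # quantizer Q0: even integer multiples of delta
            x = 2 * k * delta
        else:           # quantizer Q1: odd multiples of delta (and zero)
            sign = 1 if k > 0 else (-1 if k < 0 else 0)
            x = (2 * k - sign) * delta
        out.append(x)
        state = NEXT_STATE[state][k & 1]
    return out

print(reconstruct([1, 0, -2, 3], 0.5))
```

Because the quantizer choice depends on the path of previously coded levels, an encoder can search the resulting trellis for the rate-distortion-best sequence of levels, which is where the coding gain comes from.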

Cited by 58 publications (28 citation statements)
References 25 publications
“…The computation-throughput limitation incurred by the sequential data dependency in entropy context modeling has been extensively investigated since the standardization of HEVC a decade ago. High throughput and high performance were then jointly evaluated during the development of the entropy coding engine [22], [40]. Well-known examples include symbol-parsing dependency unknitting and bin grouping, which more or less rely on the contextual correlation in a local neighborhood.…”
Section: Entropy Model
confidence: 99%
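The local-neighborhood contextual correlation this quote refers to can be sketched as template-based context selection: the context model for a coefficient's significance flag is chosen from already-coded flags in a small causal template. The template shape, clipping, and names below are illustrative, not the normative derivation.

```python
# Illustrative sketch of template-based context selection: the context
# index for a significance flag is derived from the number of already-
# decoded significant neighbors in a small causal template. The template
# shape and the index mapping are illustrative, not normative.

def context_index(sig, x, y):
    """sig: 2D map of significance flags, with the causal part (positions
    already visited by the reverse diagonal scan) filled in."""
    template = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # causal neighbors
    count = 0
    for dx, dy in template:
        nx, ny = x + dx, y + dy
        if ny < len(sig) and nx < len(sig[0]):
            count += sig[ny][nx]
    return min(count, 4)  # clip to a small set of context models

sig = [[0, 1, 0, 0],
       [1, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(context_index(sig, 0, 0))
```

The sequential dependency the quote describes is visible here: the context for position (0, 0) cannot be computed until the template positions have been decoded, which is what throughput-oriented designs try to unknit.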
“…Referring to (4), the DCT is often used with quantization Q(·) to generate quantized transform coefficients ĉ_m of prediction residues r_m = (I − I_p^m), yielding noise-augmented residues r̃_m at the decoder for reconstruction. Here we use E(·) as the generic entropy coding engine for the coding of the coefficients ĉ_m and side information m. In VVC [40], [58], the well-known CABAC is used. Note that HiPT enforces the use of causal neighbors from available upper and/or left blocks of the current I in Fig.…”
Section: Theoretical Motivation
confidence: 99%
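The pipeline this quote describes, residue → DCT → quantization → entropy coding of the levels, can be sketched end to end. The `dct2` and `quantize` helpers below are illustrative stand-ins: plain orthonormal DCT-II and uniform rounding quantization, whereas the real codec uses integer transforms and rate-distortion-optimized or dependent quantization.

```python
import numpy as np

# Illustrative sketch of the quoted pipeline: a prediction residue r is
# transformed (DCT), scalar-quantized Q(.), and the resulting integer
# levels would then be passed to the entropy coder E(.). Helper names
# and the plain uniform quantizer are stand-ins, not the codec's method.

def dct2(block):
    """Orthonormal 2D DCT-II via the separable matrix form C @ X @ C.T."""
    n = block.shape[0]
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)  # DC row uses the smaller normalization
    return c @ block @ c.T

def quantize(coeffs, step):
    """Plain uniform quantization to integer levels."""
    return np.round(coeffs / step).astype(int)

residue = np.array([[8.0, -2.0], [1.0, 3.0]])
levels = quantize(dct2(residue), step=1.0)
print(levels)
```

The decoder inverts only the coarse steps (dequantize, inverse DCT), so the reconstructed residue is the "noise-augmented" r̃ the quote mentions: the quantization error is never recovered.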
“…Each estimator is independently updated with a different adaptation rate, pre-trained based on the statistics of the associated bins. Both codecs exploit arithmetic coding at this step [34].…”
Section: E. Entropy Coding
confidence: 99%
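The two independently updated estimators can be sketched as fixed-point exponential averages with different window sizes, whose mean drives the arithmetic coder. The 15-bit scale and the shift values below are illustrative choices, not the standard's trained parameters.

```python
# Illustrative sketch of multi-hypothesis probability estimation: two
# fixed-point estimates of P(bin == 1) are updated with different
# adaptation rates (window sizes ~ 2**shift), and their mean is the
# probability handed to the arithmetic coder. The 15-bit scale and the
# shift values are illustrative, not the standard's trained parameters.

SCALE = 1 << 15  # probabilities stored as fixed-point fractions of SCALE

def update(p, binval, shift):
    """Exponential update of p toward 0 or SCALE after observing a bin."""
    return p + (((binval * SCALE) - p) >> shift)

def estimate(bins, shift_fast=4, shift_slow=7):
    p_fast = p_slow = SCALE // 2  # both hypotheses start at p = 0.5
    for b in bins:
        p_fast = update(p_fast, b, shift_fast)  # reacts quickly
        p_slow = update(p_slow, b, shift_slow)  # tracks long-term stats
    return (p_fast + p_slow) / (2 * SCALE)      # combined estimate in [0, 1]

print(estimate([1] * 32))
```

Averaging a fast and a slow hypothesis hedges between quick adaptation to local statistics and stability on stationary sources, which is why the per-context adaptation rates are worth pre-training.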
“…The low-frequency non-separable transform (LFNST) [17,18] is newly adopted in VVC and is applied to the top-left low-frequency region of interest (ROI) of the primary transform coefficients. When the LFNST is applied, all primary transform coefficients outside the ROI are zeroed out [19,20], and the output of the LFNST is further quantized and entropy-coded [21]. In this paper, we analyze the number of multiplications of the existing fast transform methods in the VVC standard, and we propose a new fast inverse transform that uses the number of non-zero coefficients, based on linearity, to reduce the number of multiplications.…”
Section: Introduction
confidence: 99%
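The ROI restriction and zero-out this quote describes can be sketched directly: a non-separable transform is applied to the flattened top-left region of the primary coefficients, and everything outside that region is forced to zero before quantization and entropy coding. The 4x4 ROI size and the stand-in identity kernel below are illustrative; a real LFNST kernel is a trained non-separable matrix.

```python
import numpy as np

# Illustrative sketch of LFNST-style processing: a non-separable
# secondary transform is applied only to the top-left low-frequency
# region (ROI) of the primary transform coefficients, and all
# coefficients outside the ROI are zeroed out. The identity kernel and
# the 4x4 ROI size are stand-ins, not a trained LFNST matrix.

def apply_lfnst(coeffs, roi=4, kernel=None):
    out = np.zeros_like(coeffs)                 # outside the ROI stays zero
    region = coeffs[:roi, :roi].reshape(-1)     # flatten ROI to a vector
    if kernel is None:
        kernel = np.eye(region.size)            # stand-in identity kernel
    out[:roi, :roi] = (kernel @ region).reshape(roi, roi)
    return out                                  # ready for quantization

primary = np.arange(64, dtype=float).reshape(8, 8)
secondary = apply_lfnst(primary)
print(int(np.count_nonzero(secondary[4:, :])))  # rows below the ROI
```

Treating the ROI as a single vector is what makes the transform non-separable, and the guaranteed zero region outside the ROI is exactly what fast inverse-transform methods like the one proposed in the citing paper exploit.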