2021
DOI: 10.48550/arxiv.2111.13850
Preprint

Temporal Context Mining for Learned Video Compression

Abstract: We address end-to-end learned video compression with a special focus on better learning and utilizing temporal contexts. For temporal context mining, we propose to store not only the previously reconstructed frames, but also the propagated features into the generalized decoded picture buffer. From the stored propagated features, we propose to learn multi-scale temporal contexts, and re-fill the learned temporal contexts into the modules of our compression scheme, including the contextual encoder-decoder, the f…
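
Since the abstract is truncated, a brief illustration of the mechanism it describes may help: the decoded picture buffer is generalized to hold a propagated feature alongside the reconstructed frame, and multi-scale temporal contexts are learned from that feature for the codec's modules to consume. Below is a minimal PyTorch sketch of this idea; all names (GeneralizedDPB, TemporalContextMiner) and layer choices are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeneralizedDPB:
    """Decoded picture buffer generalized to store the propagated
    feature together with the previously reconstructed frame.
    Illustrative sketch, not the authors' implementation."""

    def __init__(self):
        self.frame = None    # previously reconstructed frame, (B, 3, H, W)
        self.feature = None  # propagated feature, (B, C, H, W)

    def update(self, frame: torch.Tensor, feature: torch.Tensor) -> None:
        # Detach so the buffer does not retain the old computation graph.
        self.frame = frame.detach()
        self.feature = feature.detach()


class TemporalContextMiner(nn.Module):
    """Learns multi-scale temporal contexts from the stored propagated
    feature; the contexts would then be re-filled into the contextual
    encoder-decoder and the other modules of the scheme."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.down1 = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.down2 = nn.Conv2d(channels, channels, 3, stride=2, padding=1)

    def forward(self, propagated_feature: torch.Tensor):
        c1 = propagated_feature      # full-resolution context
        c2 = F.relu(self.down1(c1))  # 1/2-resolution context
        c3 = F.relu(self.down2(c2))  # 1/4-resolution context
        return c1, c2, c3            # multi-scale temporal contexts
```

The sketch stops at context extraction; in the scheme the abstract describes, these contexts condition the contextual encoder-decoder and related modules during coding of the current frame.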

Cited by 1 publication (7 citation statements)
References 13 publications

“…Li et al proposed learning feature domain contexts as condition. Its following works [29,50] adopt feature propagation to boost performance.…”
Section: Neural Video Compression (mentioning; confidence: 99%)

“…The supplementary materials show the results using BT.601. In addition, we follow [29,50] and also test HEVC RGB dataset [15] when testing RGB videos, and there is no format change as HEVC RGB dataset itself is in RGB format.…”
Section: Experimental Settings (mentioning; confidence: 99%)
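
For context on the color-space note in this excerpt, the following is a minimal sketch of a BT.601 full-range RGB-to-YCbCr conversion in Python/NumPy. The exact range convention (full vs. limited) used by the citing work is not stated in the excerpt, so full range is an assumption for illustration.

```python
import numpy as np

def rgb_to_ycbcr_bt601(rgb: np.ndarray) -> np.ndarray:
    """BT.601 full-range RGB -> YCbCr for 8-bit images.
    NOTE: assumption for illustration; the cited evaluation may use
    limited-range (studio) BT.601 instead."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)
```

As the excerpt notes, no such conversion is needed for the HEVC RGB dataset itself, since it is already stored in RGB format.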