2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00714

Non-Local ConvLSTM for Video Compression Artifact Reduction

Abstract: Video compression artifact reduction aims to recover high-quality videos from low-quality compressed videos. Most existing approaches use a single neighboring frame or a pair of neighboring frames (preceding and/or following the target frame) for this task. Furthermore, as frames of high quality overall may contain low-quality patches, and high-quality patches may exist in frames of low quality overall, current methods focusing on nearby peak-quality frames (PQFs) may miss high-quality details in low-quality f…
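The abstract's central observation, that visual quality varies patch by patch rather than frame by frame, can be checked directly on decoded frames. Below is a minimal, self-contained sketch (not from the paper) that computes a per-block PSNR map between an original frame and its compressed counterpart; the 64x64 block size, the synthetic arrays, and the helper name block_psnr are assumptions for illustration only.

```python
# Hypothetical helper (not from the paper): per-block PSNR map showing that
# some patches of a compressed frame can be much better preserved than others.
import numpy as np

def block_psnr(original, compressed, block=64, peak=255.0):
    """Return a 2-D grid with one PSNR value per non-overlapping block."""
    h, w = original.shape[:2]
    rows, cols = h // block, w // block
    grid = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            a = original[r*block:(r+1)*block, c*block:(c+1)*block].astype(np.float64)
            b = compressed[r*block:(r+1)*block, c*block:(c+1)*block].astype(np.float64)
            mse = np.mean((a - b) ** 2)
            grid[r, c] = np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)
    return grid

# Synthetic stand-ins for an uncompressed frame and its degraded version;
# one quadrant is corrupted more strongly to mimic patch-level quality variation.
rng = np.random.default_rng(0)
orig = rng.integers(0, 256, (256, 256), dtype=np.uint8)
comp = orig.astype(np.float64) + rng.normal(0, 3, orig.shape)
comp[:128, :128] += rng.normal(0, 12, (128, 128))
comp = np.clip(comp, 0, 255).astype(np.uint8)
print(block_psnr(orig, comp))  # quality map: values differ from block to block
```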

Cited by 75 publications (53 citation statements)
References: 41 publications
“These irreversible compression algorithms often introduce compression artifacts that degrade the quality of experience (QoE), especially for videos. Accordingly, video compression artifact removal, which aims to reduce the introduced artifacts and recover details for lossy compressed videos, has become a hot topic in the multimedia field [11,28,7].…”
Section: Introduction
Mentioning confidence: 99%
“…The main idea is to utilize the less noisy previously restored frames instead of directly decoded frames as temporal references. Xu et al. [25] introduced a non-local strategy in ConvLSTM to trace the spatiotemporal dependency in a video sequence, and achieved state-of-the-art performance. Zhang et al. [26], [27] proposed to restore talking-head videos using information from the audio stream and structural information given by the video encoder.…”
Section: Related Work
Mentioning confidence: 99%
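As a concrete reading of the cited mechanism, the sketch below (a simplification, not the authors' released implementation) shows a ConvLSTM cell whose previous hidden state is re-aggregated through a non-local, attention-style similarity between the current and the previous frame, so details from any location of the neighbouring frame can inform the update. The class name, channel sizes, and the single dot-product similarity are assumptions for illustration; a dense HW x HW similarity map is memory-hungry, and practical variants restrict or approximate it.

```python
# Simplified sketch of a non-local ConvLSTM cell (assumed names/sizes; PyTorch).
import torch
import torch.nn as nn

class NonLocalConvLSTMCell(nn.Module):
    def __init__(self, in_ch=3, hid_ch=32):
        super().__init__()
        self.hid_ch = hid_ch
        # Standard ConvLSTM gates: input, forget, output, candidate.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size=3, padding=1)
        # 1x1 embeddings for the non-local similarity (query/key).
        self.theta = nn.Conv2d(in_ch, hid_ch, kernel_size=1)  # current frame
        self.phi = nn.Conv2d(in_ch, hid_ch, kernel_size=1)    # previous frame

    def non_local_aggregate(self, x_t, x_prev, h_prev):
        # Every pixel of the current frame attends to all pixels of the previous
        # frame; the previous hidden state is re-assembled with those weights.
        b, _, h, w = x_t.shape
        q = self.theta(x_t).flatten(2).transpose(1, 2)             # (B, HW, C)
        k = self.phi(x_prev).flatten(2)                            # (B, C, HW)
        attn = torch.softmax(q @ k / self.hid_ch ** 0.5, dim=-1)   # (B, HW, HW)
        v = h_prev.flatten(2).transpose(1, 2)                      # (B, HW, C)
        return (attn @ v).transpose(1, 2).reshape(b, self.hid_ch, h, w)

    def forward(self, x_t, x_prev, h_prev, c_prev):
        h_warp = self.non_local_aggregate(x_t, x_prev, h_prev)
        i, f, o, g = torch.chunk(self.gates(torch.cat([x_t, h_warp], 1)), 4, 1)
        c = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Toy usage on random frames standing in for decoded video.
cell = NonLocalConvLSTMCell()
x_prev, x_t = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
h0, c0 = torch.zeros(1, 32, 32, 32), torch.zeros(1, 32, 32, 32)
h1, c1 = cell(x_t, x_prev, h0, c0)
```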
“…Thermal noise [11] (environment, electronics)
Salt and pepper noise [7] (electronics)
Random telegraph noise [4] (electronics)
Temporal contrast/brightness inconsistencies [12] (electronics, environment, software): homomorphic filtering [13], stabilization algorithms [14], temporal filtering [12], neural networks [15]
Line, stripe, wave and ring artifacts [16,17] (electronics, environment, optics): wavelet/Fourier filtering [10], spatial filtering [16], neural networks [18]
Compression artifacts [19] (software): bilateral filtering [8], fuzzy filtering [20], neural networks [19, 21-23]
Projective distortions [24] (optics): model-based calculations [25], neural networks [26,27]
Out-of-focus effects [28,29] (optics): morphological filtering [30], neural networks [31,32]
Fixed pattern noise [33,34] (electronics, environment, optics): reference imaging [33], neural networks [35]
Aliasing [36] (software): anti-aliasing algorithms [36], neural networks [37]
Rolling shutter effects [38] (electronics): neural networks [39]
Artifacts are visually recognizable in a variety of shapes and intensities. Table 1 shows common artifact types occurring in sensor images, their sources, and algorithmic example methods which can be used to…”
Section: Introduction
Mentioning confidence: 99%