2021
DOI: 10.1145/3453720

Recurrent Video Deblurring with Blur-Invariant Motion Estimation and Pixel Volumes

Abstract: For the success of video deblurring, it is essential to utilize information from neighboring frames. Most state-of-the-art video deblurring methods adopt motion compensation between video frames to aggregate information from multiple frames that can help deblur a target frame. However, the motion compensation methods adopted by previous deblurring methods are not blur-invariant, and consequently, their accuracy is limited for blurry frames with different blur amounts. To alleviate this problem, we propose two …


Cited by 57 publications (61 citation statements)
References 42 publications
“…These RNN-based models adopt fewer frames for temporal modeling, which limits the available sharp information. VDTR also obtains a 1.2 dB higher PSNR than EDVR, which utilizes deformable convolution for alignment [11], and than the state-of-the-art CNN-based video deblurring methods CDVDTSP [12] and PVDNet [44] (the two methods with the highest PSNRs and SSIMs) on the DVD [11] dataset. VDTR demonstrates strong competitiveness.…”
Section: B. Results on Synthesized Dataset
confidence: 99%
“…All experiments are conducted on eight NVIDIA Tesla V100 GPUs with 32 GB memory. c) Evaluation Metrics: We compared VDTR quantitatively and qualitatively with state-of-the-art convolution-based networks, including the single-image deblurring method SRN [5] and the video deblurring methods DBN [11], DBLRNet [30], IFIRNN [29], EDVR [13], STFAN [14], ESTRNN [22], CDVDTSP [12], and PVDNet [44]. We adopt publicly available source codes for evaluation.…”
Section: Loss Function
confidence: 99%
“…In this section, we take the model trained on GoPro [28] using three blurry frames for reconstruction to analyze the effects of multiple components in our framework.…”
(Figure panel labels spilled into the excerpt: … [46], (c) TSP [30], (d) PVDNet [35], (e) PFAN (Ours), (f) Ground Truth.)
Section: Model Analysis and Discussion
confidence: 99%
“…To evaluate the performance of the proposed method, we compare it against state-of-the-art methods [9, 24, 28, 30, 35-37, 40, 41, 46]. To evaluate the quality of each restored image on the datasets, we use PSNR and SSIM as the evaluation metrics.…”
Section: Comparisons with State-of-the-art Methods
confidence: 99%
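The citing works above consistently report PSNR (and SSIM) as their fidelity metrics. For reference, here is a minimal sketch of the standard PSNR computation; the function name and the toy images are illustrative and not taken from any of the cited papers:

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between two images with values in [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: infinite PSNR
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 0.1 on a [0, 1] image gives MSE = 0.01,
# so PSNR = 10 * log10(1 / 0.01) = 20 dB.
sharp = np.zeros((4, 4))
blurred = np.full((4, 4), 0.1)
print(psnr(sharp, blurred))  # 20.0
```

A difference of 1.2 dB, as quoted for VDTR versus EDVR above, therefore corresponds to roughly a 24% reduction in mean squared error. SSIM is structurally more involved (local means, variances, and covariances); library implementations such as those in scikit-image are typically used in practice.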