2020
DOI: 10.1109/access.2020.3028157
Semantic Information Supplementary Pyramid Network for Dynamic Scene Deblurring

Abstract: The algorithm in this paper is called the semantic information supplementary pyramid network (SIS-net). We choose the Generative Adversarial Network (GAN) as its fundamental model. SIS-net's generator imitates the feature pyramid network (FPN) structure, reusing features across multiple receptive scales to restore a sharp image. However, to solve the problem caused by semantic dilution in the FPN, we have designed a semantic information supplement (SIS) mechanism. SIS me…
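The FPN-style generator the abstract describes merges coarse, semantically rich features down into finer pyramid levels. The sketch below is a minimal NumPy illustration under our own naming (`fpn_top_down` and `upsample2x` are hypothetical, not from the paper), with nearest-neighbor upsampling and elementwise addition standing in for the learned lateral and fusion convolutions:

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fpn_top_down(laterals):
    """Top-down FPN fusion: the coarsest map is repeatedly upsampled
    and added into each finer lateral feature, finest level last."""
    fused = [laterals[-1]]
    for lat in reversed(laterals[:-1]):
        fused.append(lat + upsample2x(fused[-1]))
    return fused[::-1]  # return in finest-to-coarsest order

# Three toy pyramid levels, 4 channels each, halving resolution.
feats = [np.ones((4, 16, 16)), np.ones((4, 8, 8)), np.ones((4, 4, 4))]
fused = fpn_top_down(feats)  # finest level accumulates all coarser levels
```

The SIS mechanism in the paper adds an extra semantic path on top of this fusion; this sketch shows only the baseline FPN behavior it modifies.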

Cited by 6 publications (5 citation statements). References 31 publications (106 reference statements).
“…DeepDeblur exploited a multi-scale CNN to restore sharp images, which may cause heavy parameters. By introducing the strategies of parameter sharing [10, 11, 13, 20], GAN [15, 16], hierarchical multi-patch [12, 45], optical flow [21] and motion offsets [46], the number of parameters can be effectively reduced in the other comparison methods. Among them, Gao et al [11] achieved the smallest number of parameters because it employed a nested skip connection structure for the nonlinear transformation modules to replace stacked convolution layers or residual blocks.…”
Section: Methods
confidence: 99%
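The nested skip connection structure credited to Gao et al [11] can be illustrated loosely: every intermediate output is summed into later stages, so features are reused rather than stacking additional residual blocks. A toy NumPy sketch (our own simplification; `block` stands in for a convolutional layer and `weights` for its learned parameters):

```python
import numpy as np

def block(x, w):
    # Stand-in for a convolutional layer: a simple elementwise scaling.
    return w * x

def nested_skip(x, weights):
    """Nested skip connections: each stage consumes the sum of ALL
    previous outputs (not just the last one), and the final result
    aggregates every stage, reusing features without extra depth."""
    outputs = [x]
    for w in weights:
        outputs.append(block(sum(outputs), w))
    return sum(outputs)

y = nested_skip(np.ones(3), [0.5, 0.5])
```

This is only a schematic of the connectivity pattern, not the actual module from [11].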
“…With the rapid development of deep learning, many convolutional neural network (CNN) based methods [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] have been proposed for effective image deblurring [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 23] and deraining [19, 22, 24, 25, 26, 27, 28]. Compared with the traditional shallow models [1, 2, 3, 4, 5, 6], these deep CNN methods do not need to estimate a blur kernel or rain streaks, but directly predict clear images from the degraded ones.…”
Section: Introduction
confidence: 99%
“…PSNR, SSIM, model size and inference time are adopted to evaluate our method. In order to prove the effectiveness of our proposed method, we compare the performance of our approach with some state-of-the-art deblurring methods [4,5,6,7,8,10,11,27,28] on the GoPro dataset.…”
Section: Implementation Details
confidence: 99%
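PSNR, the fidelity metric used in the GoPro comparison above, follows directly from the mean squared error between the sharp ground truth and the restored image. A minimal NumPy version (assuming 8-bit images with peak value 255):

```python
import numpy as np

def psnr(clean, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((clean.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((4, 4))
b = np.full((4, 4), 16.0)
value = psnr(a, b)  # MSE = 256, so ~24.05 dB
```

Higher is better; SSIM, the other quality metric in the table, additionally compares local luminance, contrast, and structure rather than raw pixel error.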
“…
Methods            PSNR   SSIM    Model Size  Time
DeepDeblur [4]     29.08  0.9135  303.6M      15s
Tao et al [5]      30.10  0.9323  33.6M       1.6s
Gao et al [6]      30.92  0.9421  2.84M       1.6s
DMPHN [7]          30.21  0.9345  21.7M       0.03s
Zhang et al [8]    29.19  0.9306  37.1M       1.4s
DeblurGAN [10]     28.70  0.927   37.1M       0.85s
DeblurGANv2 [11]   29.55  0.934   15M         0.35s
SIS [27]           30.28  0.912   36.54M      0.303s
Yuan et al [28]    29.81  0.9368  3.1M        0.01s
LMFN (Ours)        31.54  0.923   1.25M       0.019s
…”
Section: Quantitative and Qualitative Evaluation
confidence: 99%