2020
DOI: 10.1609/aaai.v34i07.6862
Region-Adaptive Dense Network for Efficient Motion Deblurring

Abstract: In this paper, we address the problem of dynamic scene deblurring in the presence of motion blur. Restoration of images affected by severe blur necessitates a network design with a large receptive field, which existing networks attempt to achieve through simple increment in the number of generic convolution layers, kernel-size, or the scales at which the image is processed. However, these techniques ignore the non-uniform nature of blur, and they come at the expense of an increase in model size and inference t…

Cited by 113 publications (84 citation statements). References 26 publications.
“…The architecture of the self-attention unit used [17] is shown in Figure 1(c). The input features are transformed into G and H via 1D convolution, and then the attention weights W are generated from G and H by…”
Section: Temporal Convolutional Attention Network
Confidence: 99%
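The quoted statement describes a self-attention unit whose projections G and H come from 1D convolutions; the formula for the weights W is truncated in the excerpt. A minimal numpy sketch follows, under two assumptions: the 1D convolutions have kernel size 1 (so they reduce to per-timestep linear projections), and W is the row-wise softmax of the scaled dot product of G and H — a common choice, not necessarily the one in the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Input features: T timesteps, C channels.
T, C = 8, 16
X = rng.standard_normal((T, C))

# Kernel-size-1 "1D convolutions" are per-timestep linear projections
# (assumption: kernel size 1; larger kernels would mix neighbouring steps).
Wg = rng.standard_normal((C, C)) / np.sqrt(C)
Wh = rng.standard_normal((C, C)) / np.sqrt(C)
G = X @ Wg          # (T, C)
H = X @ Wh          # (T, C)

# Assumed form of the attention weights (the quote truncates the formula):
# scaled dot-product similarity between G and H, softmaxed over time.
W = softmax(G @ H.T / np.sqrt(C), axis=-1)   # (T, T), each row sums to 1

out = W @ X         # attention-weighted mixture of the input features
```

Each output timestep is a convex combination of all input timesteps, which is what lets such a unit aggregate context across the whole sequence.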
“…Liu et al [82] merged the self-attention mechanism with an RNN, improving the robustness of the network by exploiting the RNN's ability to broadcast information. Purohit et al [41] proposed a motion-deblurring network with fast execution and strong performance by combining deformable convolution with a self-attention mechanism. In this paper, we use a modified self-attention mechanism similar to that in [54], and based on it we design a branch-attention mechanism to broadcast information among all the restoration branches.…”
Section: Attention Mechanism
Confidence: 99%
“…Many popular restoration methods based on CNNs favour a U-Net structure [40], a standard component of which is the encoder-decoder module [41], [42]. The encoder first downsamples the input features through pooling operations, and the decoder then performs the corresponding upsampling operations to restore the features.…”
Section: Introduction
Confidence: 99%
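The encoder-decoder pattern described above (pooling to downsample, then upsampling to restore resolution, typically with a skip connection) can be sketched in a few lines of numpy. This is an illustrative toy, not the architecture of any cited paper: 2x average pooling stands in for the encoder, nearest-neighbour repetition for the decoder, and an additive skip connection for the U-Net link between matching scales.

```python
import numpy as np

def avg_pool2(x):
    # encoder step: 2x downsampling by average pooling along the spatial axis
    return x.reshape(x.shape[0] // 2, 2, x.shape[1]).mean(axis=1)

def upsample2(x):
    # decoder step: 2x nearest-neighbour upsampling back to the input resolution
    return np.repeat(x, 2, axis=0)

x = np.arange(8, dtype=float).reshape(8, 1)  # toy 1-D "image", 8 positions, 1 channel
enc = avg_pool2(x)            # (4, 1): coarse features with a larger receptive field
dec = upsample2(enc)          # (8, 1): restored to the original resolution
out = dec + x                 # U-Net-style skip connection reinjects fine detail
```

The skip connection matters because pooling discards high-frequency detail that the decoder alone cannot recover; adding the encoder input back restores it.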
“…Feature extraction has proved to be a key step in image-analysis tasks, including SR, deblurring [35], and image semantic segmentation [36]. With modern machine learning, the intrinsic features are learned automatically from labelled datasets, and residual learning approaches such as the residual network (ResNet) are reported to achieve state-of-the-art feature extraction in image recognition tasks [37].…”
Section: B Recurrent Learning Aided Feature Extraction
Confidence: 99%
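The residual learning idea mentioned in the last statement is that a block learns a correction F(x) added to an identity path, out = x + F(x), rather than a full transformation. A minimal numpy sketch, with an assumed two-matrix ReLU form for F (the specifics of the block are illustrative, not from the cited ResNet paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    # ResNet-style block: learn only a residual F(x) and add it to the
    # identity shortcut, so gradients and features pass through unchanged
    # when the residual is small.
    return x + relu(x @ W1) @ W2

C = 16
x = rng.standard_normal((4, C))
# small initial weights keep the block near the identity at the start of training
W1 = rng.standard_normal((C, C)) * 0.01
W2 = rng.standard_normal((C, C)) * 0.01
y = residual_block(x, W1, W2)
```

Because the shortcut carries x unchanged, stacking many such blocks does not degrade the signal, which is why very deep residual networks remain trainable.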