2022
DOI: 10.1155/2022/6394788
An Image Deblurring Method Using Improved U-Net Model

Abstract: Image deblurring in dynamic scenes is a challenging problem. Recently, significant progress has been made in image deblurring methods based on deep learning. However, these methods usually stack ordinary convolutional layers or increase the convolution kernel size, resulting in limited receptive fields, an unsatisfactory deblurring effect, and a heavy computational burden. Therefore, we propose an improved U-Net (U-shaped Convolutional Neural Network) model to restore blurred images. We first design the mode…

Cited by 3 publications (5 citation statements); references 36 publications.
“…Their approach, utilizing a residual module, a cascade cross-attention module, and a two-scale discriminator module, enhances detail processing. Lian et al [3] employ a U-Net-based [4] method incorporating attention mechanisms and depth-wise separable convolutions, focusing mainly on local details. Cui et al [5] propose a novel dual-domain attention mechanism, combining spatial and frequency attention modules, thus addressing both local and frequency-dependent aspects of images.…”
Section: Related Work
confidence: 99%
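To illustrate how spatial and frequency attention can be combined in principle, the NumPy sketch below gates features in both domains. It is a minimal, hypothetical example under assumed design choices (the `dual_domain_attention` name, the channel-mean spatial gate, and the log-magnitude frequency gate are all assumptions), not the actual modules of Cui et al [5]:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat):
    # feat: (C, H, W). Pool across channels, then gate each spatial position.
    pooled = feat.mean(axis=0, keepdims=True)      # (1, H, W)
    return feat * sigmoid(pooled)                  # gate broadcasts over channels

def frequency_attention(feat):
    # Gate Fourier magnitudes, then transform back to the spatial domain.
    spec = np.fft.fft2(feat, axes=(-2, -1))
    gate = sigmoid(np.log1p(np.abs(spec)))         # simple magnitude-based gate
    return np.fft.ifft2(spec * gate, axes=(-2, -1)).real

def dual_domain_attention(feat):
    # Sum the two branches so both local (spatial) and global
    # (frequency-dependent) structure can be emphasized.
    return spatial_attention(feat) + frequency_attention(feat)

x = np.random.rand(4, 8, 8)
y = dual_domain_attention(x)
```

The output keeps the input's shape, so such a block can be dropped between convolutional stages without changing tensor dimensions.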
“…Traditional deep CNNs often struggle with capturing global information due to their limited receptive fields, as highlighted by Chen et al [13]. To address this, some researchers, like Lian et al [3], recommend using convolution with a larger receptive field for better global information comprehension, thereby enhancing deblurring effectiveness. Additionally, attention mechanisms, as proposed by Cui et al [5], have been integrated to more precisely focus on critical image areas for detailed information capture.…”
Section: Approach
confidence: 99%
“…Lian et al [7] explained wavelet transforms, inverse wavelet transforms, residual depth-wise separable convolution, and a DMRFC (dense multi-receptive-field channel) module. A depth-wise separable convolution is constructed.…”
Section: Literature Review
confidence: 99%
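A depth-wise separable convolution factors a standard convolution into a per-channel (depth-wise) stage followed by a 1×1 (point-wise) stage that mixes channels, which cuts the parameter count from C_in·C_out·k² to C_in·k² + C_in·C_out. The NumPy sketch below is a minimal illustration of that construction, not the module from the cited paper; the function names and the naive loop implementation are assumptions:

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """x: (C_in, H, W); dw_kernels: (C_in, k, k); pw_weights: (C_out, C_in)."""
    c_in, h, w = x.shape
    k = dw_kernels.shape[-1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    # Depth-wise stage: one k x k filter per input channel, no cross-channel mixing.
    dw = np.zeros_like(x)
    for c in range(c_in):
        for i in range(h):
            for j in range(w):
                dw[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * dw_kernels[c])
    # Point-wise stage: a 1x1 convolution that mixes channels.
    return np.tensordot(pw_weights, dw, axes=([1], [0]))  # (C_out, H, W)

def param_counts(c_in, c_out, k):
    standard = c_in * c_out * k * k
    separable = c_in * k * k + c_in * c_out
    return standard, separable

out = depthwise_separable_conv(np.random.rand(4, 6, 6),
                               np.random.rand(4, 3, 3),
                               np.random.rand(8, 4))
std, sep = param_counts(64, 64, 3)  # 36864 vs 4672
```

For a 64-to-64-channel 3×3 layer the factorization drops the parameter count roughly eightfold, which is the computational saving the quoted works exploit.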
“…Chen et al [10] embedded smooth dilated convolution into the network, enlarging receptive fields while keeping the number of parameters constant to improve network performance. Zuozheng Lian et al [11] introduced an enhanced U-Net, incorporating depth-wise separable convolutions, residual depth-wise separable convolutions, and wavelet transform. This approach enables the extraction of finer image details while simultaneously reducing computational complexity.…”
Section: Related Work
confidence: 99%
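The receptive-field arithmetic behind dilated convolution is simple: a k×k kernel with dilation d spans k + (k − 1)(d − 1) input positions, so stacking dilated layers grows the receptive field without adding parameters. A small sketch of that calculation (the function names are illustrative, not from the cited works):

```python
def effective_kernel(k, d):
    # A k x k kernel with dilation d covers k + (k - 1) * (d - 1) input positions.
    return k + (k - 1) * (d - 1)

def stacked_receptive_field(kernels_dilations):
    # Receptive field of a stack of stride-1 convolutions:
    # each layer adds (effective kernel - 1) positions.
    rf = 1
    for k, d in kernels_dilations:
        rf += effective_kernel(k, d) - 1
    return rf

assert effective_kernel(3, 2) == 5
# Three 3x3 layers with dilations 1, 2, 4: rf = 1 + 2 + 4 + 8 = 15
assert stacked_receptive_field([(3, 1), (3, 2), (3, 4)]) == 15
```

Three dilated 3×3 layers thus match the 15-pixel span that would otherwise require a single 15×15 kernel with 25× the weights.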