2023
DOI: 10.3390/app13042709
Multi-Scale Aggregation Residual Channel Attention Fusion Network for Single Image Deraining

Abstract: Images captured on rainy days are disturbed by rain streaks of varying scales and intensities, resulting in degraded image quality. This study sought to eliminate rain streaks from images using a two-stage network architecture involving progressive multi-scale recovery and aggregation. The proposed multi-scale aggregation residual channel attention fusion network (MARCAFNet) uses kernels of various scales to recover details at various …
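The abstract describes parallel kernels of different sizes recovering detail at multiple scales before the results are aggregated. The PyTorch sketch below illustrates the general idea of such a multi-scale block; it is not the authors' MARCAFNet architecture, and the kernel sizes (3, 5, 7) and the 1x1 fusion convolution are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Sketch of multi-scale feature extraction: parallel convolutions with
    different kernel sizes, fused by a 1x1 convolution (assumed design)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            # padding = k // 2 keeps the spatial size unchanged for odd k
            nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)  # hypothetical scale choices
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]  # per-scale features
        return self.fuse(torch.cat(feats, dim=1))        # aggregate scales
```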

Cited by 5 publications (3 citation statements) · References 69 publications
“…In [54], [55], an end-to-end recurrent multi-level residual and global attention network and a multi-scale context information and attention network are proposed. In [56], the multi-scale aggregation residual channel attention fusion network (MARCAFNet) is proposed. A lightweight semi-supervised network (LSNet) for single image deraining is proposed in [57].…”
Section: B. Data-Driven Image Deraining Methods
confidence: 99%
“…For example, interactive attention can learn to find key features and salient parts of the input data to achieve better task performance [32], and multi-head attention [33] runs multiple attention mechanisms on the same input and merges the results. Channel attention [34,35,36,37], the main focus of this paper, can pick out relevant information in complex data, improving the accuracy and efficiency of the model by learning how to weight each channel of the input.…”
Section: B. Attention Mechanism
confidence: 99%
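The channel attention described above learns a weight for each channel of the input features. Below is a minimal squeeze-and-excitation style sketch in PyTorch; this is one common realization of channel attention, assumed here for illustration rather than taken from any of the cited papers [34–37].

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative sketch)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(             # excitation: learn per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # rescale each channel by its learned weight
```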
“…NASNetMobile is a pretrained neural network model [34] that can be reused for new tasks through transfer learning. Transfer learning is a method that transfers the knowledge learned by a pretrained model to a new task.…”
Section: Feature Extraction Using Proposed CNN
confidence: 99%
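As an illustration of this kind of transfer learning, the sketch below loads NASNetMobile with ImageNet weights via Keras, freezes the pretrained layers, and attaches a new classification head. The input size, head layers, and `num_classes` are placeholder assumptions, not the cited authors' setup.

```python
import tensorflow as tf

# Load NASNetMobile pretrained on ImageNet, without its classification head.
base = tf.keras.applications.NASNetMobile(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # freeze pretrained weights; reuse the learned features

# Attach a small head for the new task (num_classes is a placeholder).
num_classes = 10
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```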