2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00837

Multi-Scale Progressive Fusion Network for Single Image Deraining

Cited by 542 publications (323 citation statements)
References 35 publications
“…Jiang et al. [45] proposed to decompose rain streaks into multiple rain layers and estimate each layer individually along the network stages to address the deraining problem. Jiang et al. [46] applied multi-scale collaborative representation of rain streaks, spanning input image scales and hierarchical deep features, in a unified framework to remove rain streaks.…”
Section: B. Single-Image Based Methods (mentioning)
confidence: 99%
“…This motivates us to incorporate the raindrop removal method as a form of preprocessing for high-level applications. In this paper, following [45], we introduce pre-trained models of PSPNet [46] (for semantic segmentation) and Faster R-CNN [47] (for object detection) trained on the Cityscapes dataset to evaluate segmentation and detection precision, respectively. Table 5 tabulates segmentation accuracy under different deraining methods on the RaindropCityscapes dataset, in terms of the mean Intersection over Union (mIoU) and the mean Accuracy per class (mAcc).…”
Section: Methods (mentioning)
confidence: 99%
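The excerpt above reports segmentation accuracy as mIoU. As a point of reference, the standard mIoU computation can be sketched as follows; the function name and the toy label maps are illustrative, not taken from the paper:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union across classes.
    Classes absent from both prediction and ground truth are skipped."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

gt   = np.array([[0, 0, 1, 1]])
pred = np.array([[0, 1, 1, 1]])
# class 0: inter 1 / union 2 = 0.5; class 1: inter 2 / union 3 ≈ 0.6667
print(round(mean_iou(pred, gt, 2), 4))  # -> 0.5833
```

The per-class averaging is what distinguishes mIoU from plain pixel accuracy: a class occupying few pixels weighs as much as a dominant one.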
“…In particular, L_MAE is defined as the mean absolute error between the restored image and the ground truth. High-frequency detail information is easily destroyed in the process of image dehazing. To further improve the fidelity and authenticity of details, we propose an additional edge loss function [36] to constrain the high-frequency components, e.g., edges and textures. L_Edge can be written as…”
Section: Journal of Advanced Transportation (mentioning)
confidence: 99%
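The excerpt describes the edge loss only as a constraint on high-frequency components. One common realisation compares Laplacian edge maps of the prediction and the ground truth; the sketch below assumes a 4-neighbour Laplacian and an L1 distance, which may differ from the exact formulation in [36]:

```python
import numpy as np

def laplacian(img):
    # 4-neighbour Laplacian, evaluated only on interior pixels
    # where the full stencil fits (avoids padding choices)
    return (img[:-2, 1:-1] + img[2:, 1:-1]
            + img[1:-1, :-2] + img[1:-1, 2:]
            - 4.0 * img[1:-1, 1:-1])

def edge_loss(pred, target):
    """L1 distance between Laplacian edge maps: penalises lost or
    hallucinated high-frequency detail (edges, texture)."""
    return np.abs(laplacian(pred) - laplacian(target)).mean()

img = np.arange(16, dtype=float).reshape(4, 4)
print(edge_loss(img, img))  # identical images -> 0.0
```

Because the Laplacian of any linear intensity ramp is zero, this loss responds only to genuine edge structure, not to smooth global brightness differences.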
“…In addition, the Total Variation (TV) loss function [37] is exploited to suppress the pixel-jump problem; here ∇_h and ∇_v represent the operators of the horizontal and vertical gradients, respectively. We refer interested readers to [35]-[37] for more details on the calculation of MS-SSIM, the edge loss, and TV. To sum up, the total loss function can be written as follows:…”
Section: Journal of Advanced Transportation (mentioning)
confidence: 99%
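The TV loss described above can be sketched directly from the excerpt, taking ∇_h and ∇_v as forward finite differences. This is the anisotropic L1 variant; [37] may use another norm or normalisation:

```python
import numpy as np

def tv_loss(img):
    """Anisotropic total-variation loss: sum of absolute horizontal
    and vertical forward differences (∇_h, ∇_v in the excerpt)."""
    grad_h = np.abs(img[:, 1:] - img[:, :-1])  # horizontal gradient
    grad_v = np.abs(img[1:, :] - img[:-1, :])  # vertical gradient
    return grad_h.sum() + grad_v.sum()

flat = np.ones((4, 4))                       # constant image, no jumps
step = np.zeros((4, 4)); step[:, 2:] = 1.0   # sharp vertical edge
print(tv_loss(flat))  # -> 0.0
print(tv_loss(step))  # -> 4.0  (one unit jump per row, one column boundary)
```

Minimising this term discourages isolated pixel jumps (noise) while leaving large constant regions untouched, which is why it is paired with the fidelity and edge terms rather than used alone.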