2017
DOI: 10.1109/tip.2017.2691802
Clearing the Skies: A Deep Network Architecture for Single-Image Rain Removal

Abstract: We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase depth or breadth of the network, we use image processing domain knowledg…
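The abstract describes learning a CNN mapping between rainy and clean image detail layers rather than between full images. Below is a minimal, hedged sketch of that idea, not the authors' implementation: a Gaussian blur stands in for the low-pass base/detail split used in the paper, and the layer widths and kernel sizes of the small CNN are illustrative assumptions rather than the published DerainNet configuration.

```python
# Hedged sketch (not the authors' code): split a rainy image into a low-pass
# "base" layer and a high-frequency "detail" layer, then regress a clean
# detail layer with a small CNN. Filter choice and layer sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def split_base_detail(img, kernel_size=15, sigma=5.0):
    """Approximate low-pass decomposition with a Gaussian blur
    (a stand-in for the low-pass filtering used for this step)."""
    coords = torch.arange(kernel_size) - kernel_size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).to(img.dtype)
    kernel = (g[:, None] * g[None, :]).expand(
        img.shape[1], 1, kernel_size, kernel_size).contiguous()
    base = F.conv2d(img, kernel, padding=kernel_size // 2, groups=img.shape[1])
    return base, img - base  # the detail layer carries the rain streaks

class DetailCNN(nn.Module):
    """Small CNN mapping a rainy detail layer to a clean detail layer."""
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 9, padding=4), nn.ReLU(),
            nn.Conv2d(width, width // 2, 1), nn.ReLU(),
            nn.Conv2d(width // 2, channels, 5, padding=2),
        )
    def forward(self, detail):
        return self.net(detail)

# Usage: reconstruct the de-rained image as base + predicted clean detail.
rainy = torch.rand(1, 3, 128, 128)   # stand-in for a rainy input image
base, detail = split_base_detail(rainy)
derained = base + DetailCNN()(detail)
```

Training such a network would pair synthesized rainy detail layers with their clean counterparts, mirroring the synthetic-data strategy the abstract mentions.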

Citations: Cited by 786 publications (556 citation statements)
References: 32 publications
“…The rationale behind the four regularization terms is explained by Figure , taking a synthetic image as an example. The synthesis rendered rain‐streak‐like noises into images through graphics‐editing software (i.e., Adobe® Photoshop) following previous studies (Fu et al, ; Kang et al, ).…”
Section: Methods (mentioning)
confidence: 99%
“…The rationale behind the four regularization terms is explained by Figure 2, taking a synthetic image as an example. The synthesis rendered rain-streak-like noises into images through graphics-editing software (i.e., Adobe® Photoshop) following previous studies (Fu et al, 2017;Kang et al, 2012). Figure 2a shows the decomposition of the rain-contained image (Figure 2a-1) into a rain-free background layer (Figure 2a-2) and a rain-streak layer (Figure 2a-3), assuming vertical raindrops in a windless scene.…”
Section: Optimization Model For the Decomposition (mentioning)
confidence: 99%
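The passage above describes an additive decomposition of the rain-contained image into a rain-free background layer and a rain-streak layer, constrained by four regularization terms that are not reproduced in the quote. As a hedged sketch only, a generic form of such an optimization model can be written as follows, where O is the observed rainy image, B the background layer, S the rain-streak layer, and the priors R_1..R_4 with weights λ_1..λ_4 are placeholders rather than the citing work's actual terms:

```latex
% Generic sketch; R_1..R_4 are placeholder priors, not the citing paper's terms.
\mathbf{O} = \mathbf{B} + \mathbf{S}, \qquad
\min_{\mathbf{B},\,\mathbf{S}}\;
  \tfrac{1}{2}\,\lVert \mathbf{O}-\mathbf{B}-\mathbf{S}\rVert_F^2
  \;+\; \sum_{i=1}^{4}\lambda_i\, R_i(\mathbf{B},\mathbf{S})
```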
“…8, we show the de-raining results produced from two rain images. In the first row, the deep learning based method, i.e., Fu17 [14], performs well in removing the directional rain streaks and preserving the contours of window and balcony, whose MOS could reach 97.97. By contrast, the dictionary learning based method, such as, Kang12 [10], clearly over smoothes the original image structure, whose MOS is only 37.63.…”
Section: Ding16 (mentioning)
confidence: 99%
“…By contrast, the dictionary learning based method, such as, Kang12 [10], clearly over smoothes the original image structure, whose MOS is only 37.63. When we change the rain image to the second row, it is seen that Fu17 [14] almost does nothing for the dot-like raindrops, whose MOS drops to 40.95. While, Kang12 [10] could perfectly remove these small raindrops without obvious damage to the contour of the player, whose MOS rises to 99.68.…”
Section: Ding16 (mentioning)
confidence: 99%