Proceedings of the 26th ACM International Conference on Multimedia 2018
DOI: 10.1145/3240508.3240636
Non-locally Enhanced Encoder-Decoder Network for Single Image De-raining

Abstract: Traffic flow prediction is crucial for urban traffic management and public safety. Its key challenges lie in how to adaptively integrate the various factors that affect the flow changes. In this paper, we propose a unified neural network module to address this problem, called Attentive Crowd Flow Machine (ACFM), which is able to infer the evolution of the crowd flow by learning dynamic representations of temporally-varying data with an attention mechanism. Specifically, the ACFM is composed of two progressive …

Cited by 228 publications (147 citation statements)
References 77 publications
“…Table 8 shows the results. Those for the previous methods except RESCAN [25] are imported from [23]. It is seen that the proposed network achieves the best performance.…”
Section: Results
confidence: 99%
“…Li et al. [25] regard a heavy rainy image as a clear image overlaid with an accumulation of multiple rain-streak layers and proposed an RNN-based method to restore the clear image. Li et al. [23] proposed a non-locally enhanced version of the DenseBlock [16] for this task; their network outperforms previous approaches by a good margin. Figure 3: Four different implementations of the DuRB; c is a convolutional layer with 3×3 kernels; ct^l_1 and ct^l_2 are convolutional layers, each with kernels of a specified size and dilation rate; up is up-sampling (we implemented it using PixelShuffle [38]); se is the SE-ResNet module [15], which is in fact a channel-wise attention mechanism.…”
Section: Related Work
confidence: 99%
“…Specifically, we created a training set by randomly sampling the standard deviations of Gaussian noise and the quality of JPEG compression from [0, 20] and [60, 100], respectively, and sampling the max length of trajectories for motion blur from [10, 40]. We then applied the trained models to a test set with a different range of distortion parameters; the test set is created by sampling the standard deviations of Gaussian noise, the quality of JPEG compression, and the max trajectory length from [20, 40], [15, 60], and [40, 80], respectively. We used the same image set used in the ablation study for the base image set.…”
Section: Performance on Novel Strengths of Distortion
confidence: 99%
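The sampling scheme quoted above can be sketched as follows. This is a minimal illustration using the parameter ranges stated in the excerpt; the dictionary keys and function name are hypothetical, not from the cited paper.

```python
import random

# Parameter ranges taken from the quoted excerpt: (low, high) bounds for
# Gaussian-noise std, JPEG quality, and max motion-blur trajectory length.
TRAIN_RANGES = {"gauss_std": (0, 20), "jpeg_quality": (60, 100), "blur_len": (10, 40)}
TEST_RANGES  = {"gauss_std": (20, 40), "jpeg_quality": (15, 60), "blur_len": (40, 80)}

def sample_params(ranges, rng=random):
    """Draw one distortion configuration uniformly from the given ranges."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

train_cfg = sample_params(TRAIN_RANGES)  # e.g. one training-time distortion setting
test_cfg  = sample_params(TEST_RANGES)   # test-time ranges deliberately do not overlap much
```

Note how the train and test ranges are disjoint or only partially overlapping, which is what makes the experiment a test of generalization to novel distortion strengths.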
“…Then we added Gaussian noise to them, yielding a training set and a testing set consisting of 50,000 and 1,000 patches, respectively. The standard deviation of the Gaussian noise is randomly chosen from the range [10, 20]. Using these datasets, we compare our method with DnCNN [51], FFDNet [52], and E-CAE [37], which are the state-of-the-art dedicated models for this task.…”
Section: A1 Noise Removal
confidence: 99%
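The noisy-patch generation described in the excerpt above can be sketched like this. Only the [10, 20] noise range comes from the text; the patch size, 8-bit dtype handling, and function name are assumptions for illustration.

```python
import numpy as np

def add_gaussian_noise(patch, rng):
    """Add zero-mean Gaussian noise with a per-patch std drawn from [10, 20]."""
    sigma = rng.uniform(10, 20)  # noise level sampled per patch, as in the excerpt
    noisy = patch.astype(np.float32) + rng.normal(0.0, sigma, patch.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)  # keep values in 8-bit range

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # hypothetical patch
noisy = add_gaussian_noise(clean, rng)
```

Drawing sigma per patch (rather than fixing it) is what gives the denoiser exposure to a range of noise levels during training.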