2022
DOI: 10.1109/access.2022.3218456

STCN-Net: A Novel Multi-Feature Stream Fusion Visibility Estimation Approach

Abstract: Low visibility often leads to serious traffic accidents worldwide; although extensive work has been devoted to visibility estimation in meteorology, it remains a difficult problem. Deep learning-based visibility estimation methods suffer from low accuracy because foggy images lack "specific features." Meanwhile, physical model-based visibility estimation methods are only applicable to certain scenes because of their high requirements for extra auxiliary parameters. Therefore, this paper pr…

Cited by 7 publications (5 citation statements)
References 40 publications
“…To visualize the regions of the input image that FGS-Net focuses on when extracting features, we use Grad-CAM to visualize the regions the network attends to when performing visibility estimation [43]. We draw the results in Figure 14; in these images, a darker color represents a larger weight value, meaning the network pays more attention to that region.…”
Section: Ⅴ. Relevant Data and Results Display
confidence: 99%
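The Grad-CAM weighting the citing authors describe can be sketched numerically: gradients of the visibility score with respect to a convolutional layer's feature maps are global-average-pooled into channel weights, the maps are combined with those weights, and a ReLU keeps only positively contributing regions. This is a minimal illustrative sketch of the standard Grad-CAM computation, not the cited network's implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer's maps and their gradients.

    activations: (C, H, W) feature maps of the chosen layer
    gradients:   (C, H, W) gradients of the model's score w.r.t. those maps
    """
    weights = gradients.mean(axis=(1, 2))             # global-average-pool gradients -> (C,)
    cam = np.tensordot(weights, activations, axes=1)  # weighted channel sum -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU: keep positive contributions
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for display
    return cam
```

In a framework such as PyTorch, the activations and gradients would typically be captured with forward/backward hooks on the chosen layer before being passed to a routine like this.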
“…Finally, we fuse the statistical features with the Transformer features and estimate visibility through an FC (fully connected) layer. To verify the effectiveness and superiority of our method, we evaluate it on two visibility datasets: VID Ⅰ and VID Ⅱ [3,4,43]. Experimental results show that our method performs better than Giyenko [5], Fatma [6], and various classical deep learning-based methods [7,8,9,10].…”
confidence: 97%
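The fusion step the citing authors describe — concatenating hand-crafted statistical features with a Transformer branch's embedding and regressing visibility through an FC layer — can be sketched as below. All dimensions, the random weights, and the variable names are illustrative assumptions; the paper's actual feature sizes and trained weights are not given here.

```python
import numpy as np

rng = np.random.default_rng(0)

stat_feats = rng.normal(size=(1, 16))   # hand-crafted statistical features (assumed dim 16)
trans_feats = rng.normal(size=(1, 64))  # Transformer branch embedding (assumed dim 64)

# Fuse by concatenation along the feature axis -> (1, 80)
fused = np.concatenate([stat_feats, trans_feats], axis=1)

# FC layer: fused features -> scalar visibility estimate (untrained, random weights)
W = rng.normal(scale=0.01, size=(80, 1))
b = np.zeros(1)
visibility = fused @ W + b               # shape (1, 1)
```

In practice the FC layer's weights would be learned end-to-end with the rest of the network; the sketch only shows the shape bookkeeping of the fusion.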
“…We compared the proposed method with several deep-learning-based methods, including two image classification methods (AlexNet [21] and VGG16 [22]) and two atmospheric visibility estimation methods (relative CNN-RNN [14] and STCN-Net [23]). We re-trained these deep-learning-based methods on our dataset, with parameters set according to the recommendations in their papers.…”
Section: Methods
confidence: 99%
“…Moreover, the research in [40] concentrates on daytime and nighttime conditions. There has also been research focused on estimating visibility [41].…”
confidence: 99%