2017 9th IEEE-GCC Conference and Exhibition (GCCCE)
DOI: 10.1109/ieeegcc.2017.8448075
Accelerated Fog Removal from Real Images for Car Detection

Cited by 11 publications (8 citation statements). References 6 publications.
“…According to visibility, we classified foggy weather into one of four conditions, as shown in Table 1 . We determined the visibility for each individual traffic scenario in the experiment based on [ 2 , 9 , 10 ]. In clear weather, the visibility distance is greater than 1000 m, while that in light fog is 500–800 m, in medium fog is 300–500 m, and in heavy fog is 50–200 m. The total number of images is 25,000, with 80% (20,000) belonging to the training set and the remaining 5000 for testing and verification.…”
Section: Results
confidence: 99%
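The visibility thresholds quoted above can be expressed as a simple classifier. This is an illustrative sketch only, not code from the cited paper; the function name, return labels, and handling of the gaps between the published ranges (200–300 m and 800–1000 m) are assumptions.

```python
def classify_fog(visibility_m: float) -> str:
    """Map visibility distance (metres) to the four fog conditions
    described in the quoted citation statement. Boundary handling
    for values between the published ranges is an assumption."""
    if visibility_m > 1000:
        return "clear"
    if 500 <= visibility_m <= 800:
        return "light fog"
    if 300 <= visibility_m < 500:
        return "medium fog"
    if 50 <= visibility_m <= 200:
        return "heavy fog"
    return "unclassified"  # falls in a gap between the quoted ranges
```

For example, `classify_fog(400)` returns `"medium fog"`, while `classify_fog(900)` falls between the quoted light-fog and clear ranges and returns `"unclassified"`.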
“…Machine vision range in fog can fall as low as 1000 m in moderate fog and as low as 50 m in heavy fog [ 9 , 10 ]. Camera sensors are among the most significant sensors used for object detection because of their low cost and the large number of features they provide [ 11 ].…”
Section: Introduction
confidence: 99%
“…In order to demonstrate these benefits, we apply our knowledge on two automotive case studies used in modern vehicles' environment perception: a model-based generated safety-critical automotive task, implementing a Sobel filter for edge detection, and a pedestrian detection task [21]. The former, edge detection, is very common in both ADAS (Advanced Driving Assistance Systems) and autonomous driving for numerous tasks such as lane departure [17], sign [22] and car detection [27]. Pedestrian detection is also used for ADAS, e.g.…”
Section: Exploiting the Knowledge of GPU Allocators in Automotive Cas
confidence: 99%
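The Sobel edge-detection filter mentioned in the quote above can be sketched with a minimal NumPy implementation. This is an illustrative sketch of the standard 3×3 Sobel operator, not the cited papers' implementation; zero-padding at the borders is an assumption.

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude via the standard 3x3 Sobel kernels,
    with zero-padded borders (a common but arbitrary choice)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                  # vertical gradient
    h, w = img.shape
    padded = np.pad(img.astype(float), 1)
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * kx)
            gy[i, j] = np.sum(window * ky)
    return np.hypot(gx, gy)  # per-pixel gradient magnitude
```

Applied to an image with a sharp vertical intensity step, the returned magnitude peaks along the step and is zero in the flat regions, which is why the operator is a common building block in the lane-departure, sign, and car-detection tasks the quote lists.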
“…In order to demonstrate these benefits, we apply our knowledge on two automotive case studies used in modern vehicles' environment perception: a model-based generated safety-critical automotive task, implementing a sobel filter for edge detection and a pedestrian detection task [12]. The former, edge detection, is very common in both ADAS (Advanced Driving Assistance Systems) and autonomous driving for numerous tasks such as lane departure [13], sign [14] and car detection [15]. Pedestrian detection is also used for ADAS, eg.…”
Section: B Exploiting the Knowledge of CUDA Allocators in Automotive Case Studies' Resource Provisioning
confidence: 99%