2023
DOI: 10.20944/preprints202305.2180.v1
Preprint
Deep Camera-Radar Fusion with Attention Framework for Autonomous Vehicle Vision in Foggy Weather Conditions

Abstract: AVs suffer reduced maneuverability and performance due to the degradation of sensor performance in fog. Such degradation causes significant object detection errors in AVs' safety-critical conditions. For instance, YOLOv5 performs well under favorable weather but suffers missed detections and false positives due to atmospheric scattering caused by fog particles. Existing deep object detection techniques often exhibit a high degree of accuracy; the drawback is being sluggish at o…

Cited by 6 publications (6 citation statements)
References 67 publications
“…In 2023, Ogunrinde, I.O. et al. [16] designed CR-YOLOnet, a multi-sensor fusion network based on YOLOv5 that combines radar object detections with camera image bounding boxes. The fusion network was trained and tested on clear- and multi-fog-weather datasets generated with the CARLA simulator.…”
Section: Literature Survey
Mentioning, confidence: 99%
“…AlexNet, suggested by Krizhevsky et al. [37], was the first convolutional network used for image feature extraction, ushering in the current era of deep feature extraction. In our previous work [31], we conducted a comprehensive review of camera-only as well as camera-radar fusion-based object detection methods. Some of the camera-only approaches include SSD, proposed by Liu et al. [38]; YOLO, proposed by Redmon et al. [39], and its derivatives [40][41][42][43][44]; and RCNN, proposed by Girshick et al. [45], and its derivatives [46][47][48].…”
Section: Object Detection
Mentioning, confidence: 99%
“…Because of the tradeoff between detection speed and accuracy, existing methods have a very limited range of use in foggy weather conditions. Recently, we proposed a deep learning-based radar and camera fusion network (CR-YOLOnet) [31], based on YOLOv5 [44], for object detection in foggy weather conditions. In [31], we gave a comprehensive overview of YOLOv5.…”
Section: Object Detection
Mentioning, confidence: 99%
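The citation statements above describe combining radar object detections with camera bounding boxes. A minimal sketch of one common late-fusion idea — associating radar returns (already projected into image coordinates) with detector boxes and boosting the confidence of matched detections — is shown below. The function name, the containment-based association, and the fixed confidence boost are illustrative assumptions, not the actual CR-YOLOnet method, which fuses features inside the network.

```python
# Hypothetical late camera-radar fusion sketch (NOT the paper's method):
# radar returns projected to image coordinates are matched to camera
# bounding boxes by containment, and matched boxes get a confidence boost.

def fuse_radar_camera(boxes, radar_points, boost=0.1):
    """boxes: list of (x1, y1, x2, y2, score) camera detections.
    radar_points: list of (u, v) radar returns in image coordinates.
    Returns boxes with scores boosted where a radar return falls inside."""
    fused = []
    for (x1, y1, x2, y2, score) in boxes:
        hit = any(x1 <= u <= x2 and y1 <= v <= y2 for (u, v) in radar_points)
        new_score = min(1.0, score + boost) if hit else score
        fused.append((x1, y1, x2, y2, new_score))
    return fused

# One radar return falls inside the first (low-confidence) box.
boxes = [(10, 10, 50, 50, 0.4), (60, 60, 90, 90, 0.8)]
radar = [(30, 30)]
print(fuse_radar_camera(boxes, radar))
# → [(10, 10, 50, 50, 0.5), (60, 60, 90, 90, 0.8)]
```

In fog, this kind of radar corroboration can rescue low-confidence camera detections that atmospheric scattering would otherwise suppress, which is the intuition behind fusing the two modalities.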