2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv56688.2023.00068
Domain Adaptive Object Detection for Autonomous Driving under Foggy Weather

Abstract: Object detection methods for autonomous driving that rely on supervised learning typically assume a consistent feature distribution between the training and testing data; however, this assumption may fail under different weather conditions. Due to the domain gap, a detection model trained in clear weather may not perform well in foggy and rainy conditions. Overcoming detection bottlenecks in foggy and rainy weather is a real challenge for autonomous vehicles deployed in the wild. To bridge the …

Cited by 64 publications (20 citation statements)
References 81 publications
“…Clearly, because none of the methods saw data with the V2V implementation gap during the training phase, their detection performance is even worse than the single-agent perception baseline No Fusion for AP@0.5/0.7. With GRL [28] and AdvGRL [14], all methods improve. For example, V2X-ViT improved by 12.1%/5.3% for AP@0.5/0.7 with GRL, and by 12.7%/5.8% with AdvGRL.…”
Section: Quantitative Evaluation
confidence: 90%
“…To demonstrate the significant effect of the implementation gap and the domain gap, we first train these methods on the Perfect Setting of the OPV2V training set. Then, these methods are evaluated on the Noisy Setting of the OPV2V testing set and the V2V4Real testing set to assess their performance. In addition, to show the effectiveness of our proposed domain adaptation modules in the Feature-Gap scenario, two domain adaptation methods, i.e., the gradient reverse layer (GRL) [28] and the adversarial gradient reverse layer (AdvGRL) [14], are used to backpropagate the gradient and help the model generate domain-invariant features via two domain classifiers: a feature-level and an object-level classifier.…”
Section: B. Experiments Setup
confidence: 99%
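The statement above describes the gradient reversal mechanism: the GRL is an identity map in the forward pass, but it flips (and scales) the gradient flowing back from the domain classifier, so the feature extractor is pushed to produce domain-invariant features. A minimal numeric sketch, using a hypothetical scalar feature extractor `f = w*x`, a scalar domain classifier `d = v*f`, and a squared-error domain loss (all names and values are illustrative, not from the cited papers):

```python
def grl_gradients(x, w, v, target, lambd):
    """Gradients for loss L = 0.5 * (v*f - target)**2, where the gradient
    to the feature-extractor weight w passes through a gradient reversal
    layer (GRL): identity forward, multiplied by -lambd backward."""
    f = w * x                      # feature extractor (forward pass)
    d = v * f                      # domain classifier (forward pass)
    dL_dd = d - target             # dL/dd for the squared-error loss
    grad_v = dL_dd * f             # classifier learns to tell domains apart
    dL_df = dL_dd * v              # gradient arriving at the GRL
    grad_w = -lambd * dL_df * x    # GRL reverses it: extractor un-learns domain cues
    return grad_v, grad_w

gv, gw = grl_gradients(x=2.0, w=1.0, v=3.0, target=0.0, lambd=0.5)
print(gv, gw)  # 12.0 -18.0
```

Note the opposite signs: a plain backward pass would give `w` the gradient `+18.0`, so the classifier descends its loss while the extractor ascends it, which is the adversarial objective the quoted setup relies on.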
“…Furthermore, there are domain adaptation methods that aim to address this problem. For example, Li et al. [38] designed a domain adaptive object detection framework for autonomous driving under hazy weather to minimize the domain gap between sunny and foggy images at both the image and object levels. However, these methods still face challenges in detecting small objects, such as vehicles at long distances on mountain highways, due to low-resolution features, resulting in a high missed detection rate.…”
Section: Object Detection in Hazy Motorway Environments
confidence: 99%
“…Furthermore, there are domain adaptation methods that aim to address this problem. For example, Li et al. [38] designed a domain adaptive object detection framework for autonomous driving under hazy weather to minimize the domain gap between sunny and foggy images at both the image and object levels.…”
Section: Related Work
confidence: 99%
“…However, fog images of different concentrations have different characteristics, and existing studies tend to ignore this diversity. Previous domain adaptive methods only consider transfer learning between the source and target domains while ignoring joint optimization across image defogging, image enhancement, and object detection [15].…”
Section: Introduction
confidence: 99%