2021
DOI: 10.1007/978-3-030-69535-4_24

Low-Level Sensor Fusion for 3D Vehicle Detection Using Radar Range-Azimuth Heatmap and Monocular Image

Cited by 13 publications (20 citation statements) | References 20 publications

“…Therefore, feature map fusion utilises two encoders to map radar and images into the same latent space with high-level semantics. The detection frameworks are flexible, including one-stage methods [179,180] and two-stage methods [33,181,182]. The one-stage methods leverage two branches of neural networks to extract feature maps from radar and images, respectively, and then concatenate the feature maps together.…”
Section: Feature Map Fusion
confidence: 99%
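The one-stage concatenation scheme described in the excerpt above is simple to sketch. Below is a minimal, illustrative PyTorch example of a two-branch design; the class name, channel sizes, and detection head are assumptions made for illustration, not the architecture of any cited paper.

```python
# Minimal sketch of one-stage feature-map fusion (assumes PyTorch).
# All module names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TwoBranchFusionDetector(nn.Module):
    """Two encoders map radar and image into a shared latent space,
    then the feature maps are concatenated for a one-stage head."""

    def __init__(self, feat_ch: int = 64):
        super().__init__()
        # Radar branch: e.g. a single-channel range-azimuth heatmap.
        self.radar_encoder = nn.Sequential(
            nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # Image branch: RGB monocular image.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # One-stage head predicting, e.g., 7 box parameters per cell.
        self.head = nn.Conv2d(2 * feat_ch, 7, 1)

    def forward(self, radar_heatmap, image):
        f_radar = self.radar_encoder(radar_heatmap)   # (B, C, H, W)
        f_image = self.image_encoder(image)           # (B, C, H, W), spatially aligned
        fused = torch.cat([f_radar, f_image], dim=1)  # channel-wise concatenation
        return self.head(fused)
```

In practice the two feature maps must first be brought into a common spatial frame (e.g. by projecting radar features into the image plane) before they can be concatenated.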
“…For 3D object detection, the authors of [18] propose GRIF-Net to fuse radar and camera data. After each sensor is processed individually, feature fusion is performed by a gated region-of-interest fusion (GRIF) module.…”
Section: Camera Radar Fusion
confidence: 99%
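The gated RoI fusion mentioned in this excerpt can be illustrated with a small gating module. The sketch below, assuming PyTorch, shows the general idea of sigmoid gates weighting each sensor's pooled RoI feature; the `GatedRoIFusion` layout and dimensions are assumptions for illustration, not GRIF-Net's exact design.

```python
# Illustrative gated RoI fusion (assumes PyTorch); not GRIF-Net's exact layout.
import torch
import torch.nn as nn

class GatedRoIFusion(nn.Module):
    """Blend per-RoI radar and camera features with learned sigmoid gates."""

    def __init__(self, dim: int = 256):
        super().__init__()
        # Each gate sees both features and outputs per-channel weights in (0, 1).
        self.gate_radar = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.gate_camera = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, roi_radar, roi_camera):
        # roi_radar, roi_camera: (N_roi, dim) pooled features per proposal.
        both = torch.cat([roi_radar, roi_camera], dim=-1)
        g_r = self.gate_radar(both)
        g_c = self.gate_camera(both)
        # The gates let the network down-weight a sensor per region,
        # e.g. the camera branch for a poorly lit proposal.
        return g_r * roi_radar + g_c * roi_camera
```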
“…The primary reason autonomous cars are not more commonplace is their dependence on lidars, cameras, or their fusion, which cannot perform robustly under occlusion and adverse weather conditions [2,11]. This shortcoming of cameras and lidar has sparked major interest in automotive radar-based sensing, particularly in camera-radar fusion systems [17,26,28].…”
Section: Introduction
confidence: 99%
“…More advanced, state-of-the-art approaches perform fusion at the feature level. For example, AVOD-fusion [17] first extracts features simultaneously from the camera perspective view and the radar bird's eye view (BEV), then fuses them on a per-object basis to take advantage of both sensors in their respective views. However, we find that this simultaneous feature-extraction-and-fusion approach does not account for cases where the camera becomes unreliable.…”
Section: Introduction
confidence: 99%
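The per-object fusion described in this excerpt crops RoI features from the camera perspective view and the radar BEV, then combines them per proposal. A minimal sketch assuming PyTorch/torchvision follows; the projection of 3D proposals into each view is omitted, and the element-wise mean is one illustrative fusion choice.

```python
# Illustrative per-object (RoI-level) fusion of camera and radar BEV features.
# Assumes PyTorch/torchvision; proposal projection into each view is omitted.
import torch
from torchvision.ops import roi_align

def fuse_per_object(cam_feat, radar_bev_feat, cam_boxes, bev_boxes):
    """Crop equally sized RoI features from each view and fuse them per object.

    cam_feat:        (B, C, H, W)   perspective-view feature map
    radar_bev_feat:  (B, C, Hb, Wb) bird's-eye-view feature map
    cam_boxes:       list of (N_i, 4) boxes in image coordinates per batch item
    bev_boxes:       list of (N_i, 4) boxes in BEV coordinates per batch item
    """
    cam_roi = roi_align(cam_feat, cam_boxes, output_size=(7, 7))
    bev_roi = roi_align(radar_bev_feat, bev_boxes, output_size=(7, 7))
    # Element-wise mean keeps the channel count fixed; channel-wise
    # concatenation is a common alternative.
    return 0.5 * (cam_roi + bev_roi)
```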