2023
DOI: 10.1609/aaai.v37i2.25270
RADIANT: Radar-Image Association Network for 3D Object Detection

Abstract: As a direct depth sensor, radar holds promise as a tool to improve monocular 3D object detection, which suffers from depth errors, due in part to the depth-scale ambiguity. On the other hand, leveraging radar depths is hampered by difficulties in precisely associating radar returns with 3D estimates from monocular methods, effectively erasing its benefits. This paper proposes a fusion network that addresses this radar-camera association challenge. We train our network to predict the 3D offsets between radar re…

Cited by 17 publications (5 citation statements)
References 43 publications
“…Cameras can provide appearance information about targets, whereas radar can provide parameters such as the distance and velocity of targets, and it is less affected by extreme weather conditions. RADIANT [2] is a 3D object detection method based on the FCOS3D framework that addresses the problem of radar-camera association. It predicts the 3D offset between radar data and the center point of the real object.…”
Section: Methods Based On the Fusion Of Radar And Camera
confidence: 99%
“…Moreover, 2D object detection does not involve orientation-related parameters [1]. Although 2D object detection is well-established, the literature [2,3] has demonstrated that methods achieving excellent results in 2D object detection tasks can experience a drastic drop in performance in 3D object detection. The real-world scenarios for autonomous driving technology involve a 3D environment; thus, 3D object detection is a topic worth exploring in depth.…”
Section: Introduction
confidence: 99%
“…The camera-radar early fusion models in the literature primarily use either a perspective view [89], [91] or a BEV [92], [93], depending on their application. Some models, however, use non-standard joint views that offer a balanced compromise in view disparity for both sensors, such as CramNet [28].…”
Section: Camera-radar Early Fusion
confidence: 99%
“…RADIANT [91] addresses the issue of imprecise association of radar points with camera object detections, which can lead to sub-optimal depth estimations. Instead of associating radar points to objects as other models do, RADIANT learns to predict 3D offsets between radar points and object centers, followed by a feedforward depth weighting network that refines the monocular depth estimate.…”
Section: Camera-radar Early Fusion
confidence: 99%
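The association-by-offset idea described in the citation statements above can be sketched in a few lines. This is an illustrative toy, not RADIANT's actual architecture: the offsets are assumed to come from some trained network, and `weight_fn` is a hypothetical stand-in for the paper's learned depth weighting; here a simple distance-based kernel blends radar depth with the monocular estimate.

```python
import numpy as np

def refine_depth_with_radar(obj_center, mono_depth, radar_points,
                            radar_depths, pred_offsets, weight_fn):
    """Toy sketch of offset-based radar-camera depth refinement.

    obj_center:   (2,) predicted 2D object center in the image
    mono_depth:   monocular depth estimate for the object
    radar_points: (N, 2) radar returns projected onto the image
    radar_depths: (N,) depths measured by the radar
    pred_offsets: (N, 2) hypothetical network-predicted offsets from
                  each radar point toward its object's center
    weight_fn:    maps residual distance -> confidence weight
    """
    # Shift each radar point by its predicted offset; returns whose
    # shifted position lands near the object center are treated as
    # associated with this object.
    shifted = radar_points + pred_offsets
    residuals = np.linalg.norm(shifted - obj_center, axis=1)

    # Confidence weights: well-associated returns count more.
    w = weight_fn(residuals)
    if w.sum() < 1e-6:
        return mono_depth          # no usable radar evidence
    radar_depth = np.average(radar_depths, weights=w)

    # Blend the radar-derived depth with the monocular estimate,
    # trusting radar in proportion to its best association weight.
    alpha = w.max()
    return alpha * radar_depth + (1 - alpha) * mono_depth
```

With a Gaussian kernel such as `lambda r: np.exp(-r**2 / 10.0)`, a radar return whose predicted offset lands it on the object center dominates the blend, while distant, unassociated returns contribute almost nothing.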