2024
DOI: 10.3390/fi16040114
NeXtFusion: Attention-Based Camera-Radar Fusion Network for Improved Three-Dimensional Object Detection and Tracking

Priyank Kalgaonkar,
Mohamed El-Sharkawy

Abstract: Accurate perception is crucial for autonomous vehicles (AVs) to navigate safely, especially in adverse weather and lighting conditions where single-sensor networks (e.g., cameras or radar) struggle with reduced maneuverability and unrecognizable targets. Deep camera–radar fusion neural networks offer a promising solution for reliable AV perception under any weather and lighting conditions. Cameras provide rich semantic information, while radar acts like X-ray vision, piercing through fog and darkness. This …

Cited by 2 publications (1 citation statement)
References 42 publications
“…While a camera as a sensor input provides detailed texture and semantic information, its performance is degraded by small objects at long range, occlusion, and poor lighting conditions; radar as a sensor input has the ability to provide reliable performance in all weather and lighting conditions, detect small objects at long range, and operate unhindered by occlusion problems [5,6,7]. This framework appropriately rethinks the generation of an image not as the creative effort of the artist in front of the canvas, but as a construction of data points compiled into lines of code that accurately reflect the inner workings of these neural networks and artificial intelligence [6].…”
Section: Introduction
confidence: 99%