2020 4th International Conference on Robotics and Automation Sciences (ICRAS) 2020
DOI: 10.1109/icras49812.2020.9135065
Multi-modality Cascaded Fusion Technology for Autonomous Driving

Abstract: Multi-modality fusion is the guarantee of the stability of autonomous driving systems. In this paper, we propose a general multi-modality cascaded fusion framework, exploiting the advantages of decision-level and feature-level fusion, utilizing target position, size, velocity, appearance and confidence to achieve accurate fusion results. In the fusion process, dynamic coordinate alignment (DCA) is conducted to reduce the error between sensors from different modalities. In addition, the calculation of affinity m…
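The abstract describes fusing targets across modalities by combining position, size, velocity, appearance and confidence cues into an affinity score and then matching detections. A minimal sketch of that idea is shown below; the field names, weights, and greedy matching strategy are illustrative assumptions, not the paper's actual method (the paper truncates before detailing its affinity calculation).

```python
import math

def affinity(det_a, det_b, w=(0.4, 0.2, 0.2, 0.2)):
    """Illustrative affinity combining position, size, velocity and
    confidence cues. The weights `w` are assumptions, not from the paper."""
    # Position term: inverse of the centre distance (closer -> higher).
    dist = math.hypot(det_a["x"] - det_b["x"], det_a["y"] - det_b["y"])
    pos = 1.0 / (1.0 + dist)
    # Size term: ratio of the smaller area to the larger one.
    size = min(det_a["area"], det_b["area"]) / max(det_a["area"], det_b["area"])
    # Velocity term: inverse of the speed difference.
    vel = 1.0 / (1.0 + abs(det_a["v"] - det_b["v"]))
    # Confidence term: product of the two detector confidences.
    conf = det_a["score"] * det_b["score"]
    return w[0] * pos + w[1] * size + w[2] * vel + w[3] * conf

def greedy_match(dets_a, dets_b, threshold=0.3):
    """Greedily pair detections across two modalities by descending affinity,
    skipping pairs below `threshold` or whose detections are already used."""
    pairs = sorted(
        ((affinity(a, b), i, j)
         for i, a in enumerate(dets_a)
         for j, b in enumerate(dets_b)),
        reverse=True)
    used_a, used_b, matches = set(), set(), []
    for score, i, j in pairs:
        if score < threshold or i in used_a or j in used_b:
            continue
        used_a.add(i)
        used_b.add(j)
        matches.append((i, j, score))
    return matches
```

In practice the matching step would typically use an optimal assignment solver (e.g. the Hungarian algorithm) rather than this greedy pass, and the affinity terms would be tuned per sensor pair after coordinate alignment.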

Cited by 9 publications (3 citation statements); references 27 publications.
“…However, VoxelFusion is better than PointFusion in memory usage. Kuang et al [158] put forth a multi-modality cascaded fusion model for autonomous driving. The fusion model is composed of two parts: intra-frame and inter-frame fusion.…”
Section: Other Fusion Techniques
confidence: 99%
“…It bolsters functionalities like adaptive cruise control, collision prevention, and lane maintenance assistance, thereby elevating driving safety and efficiency. The amalgamated data facilitates enhanced predictive accuracy and adaptability to evolving road scenarios [18]. However, the implementation of sensor fusion is not without its challenges.…”
Section: Multimodal Fusion
confidence: 99%
“…Recent works also look at fusion between Radar and cameras within the perception system. Different Radar representations are proposed to facilitate fusion: spectrogram images [22], sparse locations in image space [29], pseudo-image by projecting to image space [30,4], BEV representation [28] and object detections [16]. However, these methods do not have high accuracy in terms of 3D perception.…”
Section: Related Work
confidence: 99%