2022
DOI: 10.1109/jsen.2022.3154980
Camera, LiDAR, and Radar Sensor Fusion Based on Bayesian Neural Network (CLR-BNN)

Cited by 31 publications (6 citation statements)
References 39 publications
“…Radar-LiDAR fusion [46], integrating radar and LiDAR data, can enhance object detection and tracking, especially in scenarios where accurate distance estimation is critical, while vision-LiDAR fusion [47], combining camera and LiDAR data, can provide detailed information about the shape and characteristics of objects, improving overall scene understanding. Implementing sensor fusion therefore involves sophisticated data processing and fusion techniques, such as the Bayesian Neural Network that combines camera, LiDAR, and radar sensors [48], or other deep learning approaches. These techniques aim to merge information from different sensors while accounting for their individual strengths and weaknesses, resulting in a more reliable and robust perception system for ACC and other ADAS applications.…”
Section: Vision-based ACC
confidence: 99%
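The BNN-based fusion that [48] points to is the CLR-BNN approach of the cited paper. As a rough illustration of the general idea only, and not the authors' actual architecture, the sketch below fuses per-object camera, LiDAR, and radar feature vectors and approximates Bayesian inference with Monte Carlo dropout; all module names, feature dimensions, and the classification head are illustrative assumptions.

```python
# Minimal sketch of BNN-style camera/LiDAR/radar fusion via Monte Carlo
# dropout. Hypothetical feature sizes and head; not the paper's CLR-BNN.
import torch
import torch.nn as nn

class McDropoutFusion(nn.Module):
    def __init__(self, cam_dim=256, lidar_dim=128, radar_dim=64,
                 hidden=128, n_classes=3):
        super().__init__()
        fused = cam_dim + lidar_dim + radar_dim
        self.head = nn.Sequential(
            nn.Linear(fused, hidden),
            nn.ReLU(),
            nn.Dropout(p=0.2),  # kept stochastic at test time for MC sampling
            nn.Linear(hidden, n_classes),
        )

    def forward(self, cam, lidar, radar):
        # Late fusion: concatenate per-modality feature vectors.
        return self.head(torch.cat([cam, lidar, radar], dim=-1))

@torch.no_grad()
def predict_with_uncertainty(model, cam, lidar, radar, n_samples=20):
    """Average class probabilities over stochastic forward passes; the
    spread across samples is a rough epistemic-uncertainty estimate."""
    model.train()  # keep dropout active (MC dropout approximation)
    probs = torch.stack([
        torch.softmax(model(cam, lidar, radar), dim=-1)
        for _ in range(n_samples)
    ])
    return probs.mean(dim=0), probs.std(dim=0)

# Usage with random stand-in features for a batch of 4 detections:
model = McDropoutFusion()
cam, lidar, radar = torch.randn(4, 256), torch.randn(4, 128), torch.randn(4, 64)
mean_p, std_p = predict_with_uncertainty(model, cam, lidar, radar)
```

The per-class standard deviation lets a downstream ADAS module down-weight detections the fused network is uncertain about, which is the practical benefit of a Bayesian treatment over a point-estimate network.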
“…External perception covers information on static and dynamic objects on the street, traffic, and street signs, and is obtained using sensors such as cameras, radars, and LiDARs. External perception provides a real-time picture of the dynamic environment around the vehicle, either by advanced processing of data from a sensor modality [3], [4], [5] or by fusing data from multiple modalities [6], [7]. Another aspect of external perception is localization to determine the location of the vehicle.…”
Section: Generic ADAS/ADS Architecture
confidence: 99%
“…LiDAR is a representative vision sensor used in conjunction with cameras and radars. These three sensors have different advantages and disadvantages depending on their operating principles and characteristics [18]-[21]. They are used together so that each compensates for the others' shortcomings, while each continues to be developed to maximize its strengths and address its weaknesses.…”
Section: Introduction
confidence: 99%