2023
DOI: 10.3390/rs15071839
FusionRCNN: LiDAR-Camera Fusion for Two-Stage 3D Object Detection

Abstract: Accurate and reliable perception systems are essential for autonomous driving and robotics. To achieve this, 3D object detection with multi-sensors is necessary. Existing 3D detectors have significantly improved accuracy by adopting a two-stage paradigm that relies solely on LiDAR point clouds for 3D proposal refinement. However, the sparsity of point clouds, particularly for faraway points, makes it difficult for the LiDAR-only refinement module to recognize and locate objects accurately. To address this issu…

Cited by 25 publications (11 citation statements)
References 69 publications
“…The development of object detection algorithms can be divided into two stages: traditional object detection algorithms and deep-learning-based object detection algorithms. Deep-learning-based object detection algorithms are further divided into two main technical routes: one-stage and two-stage algorithms [49]. Figure 1 shows the development of object detection from 2001 to 2023.…”
Section: Object Detection Development Process
confidence: 99%
“…Zhong et al. [17] briefly reviewed methods of fusion and enhancement for LiDAR and camera sensors in the fields of depth completion, semantic segmentation, object detection, and object tracking. Xu et al. [18] proposed a novel two-stage approach named FusionRCNN. It fuses sparse geometry information from LiDAR with dense texture information from the camera in the Regions of Interest (RoI).…”
Section: Related Work
confidence: 99%
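The RoI-level fusion described in the statement above — point features attending to image features inside a region of interest — can be illustrated with a toy cross-attention sketch. This is a minimal, hypothetical example with made-up dimensions, not FusionRCNN's actual architecture; the function name `cross_attention_fuse` and all shapes are assumptions for illustration only.

```python
# Toy sketch of RoI-level LiDAR-camera fusion via cross-attention (assumed
# setup, not the paper's implementation): point features act as queries and
# attend to image features within one region of interest.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(point_feats, image_feats):
    """Fuse sparse LiDAR point features with dense camera features.

    point_feats: (N, d) features of N points inside an RoI (queries)
    image_feats: (M, d) features of M image patches in the RoI (keys/values)
    Returns (N, d) image-enhanced point features.
    """
    d = point_feats.shape[1]
    # Scaled dot-product attention weights, one row per point.
    attn = softmax(point_feats @ image_feats.T / np.sqrt(d), axis=-1)  # (N, M)
    # Each point gathers a weighted sum of image features.
    return attn @ image_feats  # (N, d)

rng = np.random.default_rng(0)
points = rng.normal(size=(5, 16))   # 5 LiDAR points, 16-dim features
pixels = rng.normal(size=(20, 16))  # 20 image patches, 16-dim features
fused = cross_attention_fuse(points, pixels)
print(fused.shape)  # (5, 16)
```

In a real two-stage detector this fusion would run per proposal after RoI pooling, with learned query/key/value projections rather than raw features.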
“…CLOCs [35] exploits the geometric and semantic consistency of objects so that the two modalities reinforce each other, reducing both false and missed detections. FusionRCNN [36] designs a second-stage transformer mechanism to fuse image features with 3D point features.…”
Section: LiDAR-Camera-Based 3D Detector
confidence: 99%
“…To validate the performance of our DASANet, fifteen networks are used for comparison on the KITTI validation set, namely SECOND [12], PointPillars [13], PointRCNN [19], SA-SSD [14], PV-RCNN [24], Voxel-RCNN [16], Pyramid-RCNN [17], MV3D [34], PointPainting [32], F-PointNet [29], Focals Conv [40], CLOCs [29], VFF [41], FusionRCNN [36], and SFD [42].…”
Section: Comparison Experiments
confidence: 99%