2020
DOI: 10.1007/978-3-030-58583-9_43

3D-CVF: Generating Joint Camera and LiDAR Features Using Cross-view Spatial Feature Fusion for 3D Object Detection

Cited by 335 publications (150 citation statements)
References 25 publications

“…Therefore, all information is retained and can potentially improve the obstacle detection accuracy. Reference [181] proposed a two-stage 3D obstacle detection architecture, named 3D cross-view fusion (3D-CVF). In the second stage, they utilized the LLF approach to fuse the joint camera-LiDAR feature map obtained from the first stage with the low-level camera and LiDAR features using a 3D region-of-interest (RoI)-based pooling method.…”
Section: Sensor Calibration and Sensor Fusion for Object Detection
confidence: 99%
“…In the second stage, they utilized the LLF approach to fuse the joint camera-LiDAR feature map obtained from the first stage with the low-level camera and LiDAR features using a 3D region-of-interest (RoI)-based pooling method. They evaluated the proposed method on the KITTI and nuScenes datasets and reported that it outperformed the state-of-the-art 3D object detectors on the KITTI leaderboard (see reference [181] for a more comprehensive summary). In practice, the LLF approach comes with a multitude of challenges, not least in its implementation.…”
Section: Sensor Calibration and Sensor Fusion for Object Detection
confidence: 99%
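The RoI-based fusion step quoted above is straightforward to prototype. Below is a minimal PyTorch sketch that pools a joint camera-LiDAR BEV feature map together with low-level camera and LiDAR maps over each RoI and fuses them with a shared MLP. All names (RoIFeatureFusion), tensor shapes, channel counts, and the grid-sampling scheme are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of RoI-based camera-LiDAR feature fusion in the spirit of the
# second stage described above. Shapes and the sampling grid are
# illustrative assumptions, not the 3D-CVF reference implementation.
import torch
import torch.nn as nn


class RoIFeatureFusion(nn.Module):
    def __init__(self, joint_channels=128, cam_channels=64,
                 lidar_channels=64, grid_size=6, hidden=256):
        super().__init__()
        in_dim = (joint_channels + cam_channels + lidar_channels) * grid_size ** 2
        # Shared MLP that turns pooled, concatenated features into a
        # per-RoI refinement embedding.
        self.fuse = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
        )
        self.grid_size = grid_size

    def pool(self, feat_map, rois):
        # feat_map: (C, H, W) BEV feature map; rois: (N, 4) boxes in
        # normalized [-1, 1] BEV coordinates (x1, y1, x2, y2).
        # Sample a grid_size x grid_size grid inside each RoI with
        # bilinear interpolation.
        n, g = rois.shape[0], self.grid_size
        steps = torch.linspace(0.0, 1.0, g, device=rois.device)
        xs = rois[:, 0, None] + (rois[:, 2] - rois[:, 0])[:, None] * steps
        ys = rois[:, 1, None] + (rois[:, 3] - rois[:, 1])[:, None] * steps
        grid = torch.stack(torch.broadcast_tensors(
            xs[:, None, :], ys[:, :, None]), dim=-1)         # (N, g, g, 2)
        pooled = torch.nn.functional.grid_sample(
            feat_map.expand(n, -1, -1, -1), grid, align_corners=False)
        return pooled.flatten(1)                              # (N, C * g * g)

    def forward(self, joint_bev, cam_bev, lidar_bev, rois):
        # Pool the joint map and the low-level camera/LiDAR maps per RoI,
        # concatenate, and fuse with the shared MLP.
        parts = [self.pool(m, rois) for m in (joint_bev, cam_bev, lidar_bev)]
        return self.fuse(torch.cat(parts, dim=1))


# Toy usage: 2 RoIs over 128/64/64-channel BEV maps.
rois = torch.tensor([[-0.5, -0.5, 0.0, 0.0], [0.1, 0.1, 0.6, 0.6]])
fusion = RoIFeatureFusion()
out = fusion(torch.randn(128, 100, 100), torch.randn(64, 100, 100),
             torch.randn(64, 100, 100), rois)
print(out.shape)  # torch.Size([2, 256])
```

The key design point the excerpt highlights is that both the fused (joint) features and the raw low-level features of each modality are pooled per proposal, so no single-modality information is discarded before the second-stage refinement.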
“…Therefore, all information is retained and can potentially improve the obstacle detection accuracy. Reference [178] proposed a two-stage 3D obstacle detection architecture, named 3D cross-view fusion (3D-CVF). In the second stage, they utilized the LLF approach to fuse the joint camera-LiDAR feature map obtained from the first stage with the low-level camera and LiDAR features using a 3D region-of-interest (RoI)-based pooling method.…”
Section: Sensor Fusion Approaches
confidence: 99%
“…LiDAR-RGB 3D fusion object detection algorithms mainly include MV3D [7], AVOD [3], 3D-CVF [9], MMF [10], etc., which are more robust in practical applications. MV3D was the first to fuse LiDAR point cloud data with RGB image information.…”
Section: Related Work
confidence: 99%
“…However, fusion at the decision-making layer has little effect on fusing the raw data, and the confidence scores of the proposals generated by the two modules are unrelated. Among the deep-fusion-based methods, 3D-CVF [9] and MMF [10] adopt separate feature extractors for LiDAR and image data and fuse the two modalities hierarchically and semantically, realizing semantic fusion of multi-scale information.…”
Section: Introduction
confidence: 99%
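To make the deep-fusion pattern described in this excerpt concrete, here is a minimal PyTorch sketch with separate image and LiDAR-BEV backbones fused at two scales. The channel counts, the add-style fusion, and the assumption of pre-aligned feature grids are illustrative choices, not the exact 3D-CVF or MMF design (3D-CVF in particular projects camera features into the BEV via cross-view spatial feature fusion before combining them).

```python
# Sketch of hierarchical deep fusion: independent per-modality feature
# extractors, fused at more than one scale. Architecture details are
# illustrative assumptions, not the 3D-CVF or MMF reference design.
import torch
import torch.nn as nn


def conv_block(cin, cout, stride=1):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=stride, padding=1),
        nn.BatchNorm2d(cout), nn.ReLU(inplace=True))


class DeepFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Independent backbones: 3-channel RGB image and (here) a
        # 32-channel LiDAR BEV pseudo-image.
        self.img_s1, self.img_s2 = conv_block(3, 64), conv_block(64, 128, 2)
        self.pts_s1, self.pts_s2 = conv_block(32, 64), conv_block(64, 128, 2)
        # 1x1 convs project image features into the LiDAR branch at each
        # scale; element-wise addition realizes the hierarchical fusion.
        self.proj1 = nn.Conv2d(64, 64, 1)
        self.proj2 = nn.Conv2d(128, 128, 1)

    def forward(self, image, bev):
        # Assumes image and BEV feature maps share a spatial grid; in
        # 3D-CVF this alignment comes from projecting camera features
        # into the BEV using the known sensor calibration.
        i1, p1 = self.img_s1(image), self.pts_s1(bev)
        p1 = p1 + self.proj1(i1)          # scale-1 fusion
        i2, p2 = self.img_s2(i1), self.pts_s2(p1)
        return p2 + self.proj2(i2)        # scale-2 fusion


net = DeepFusionNet()
fused = net(torch.randn(1, 3, 64, 64), torch.randn(1, 32, 64, 64))
print(fused.shape)  # torch.Size([1, 128, 32, 32])
```

Fusing at multiple scales, rather than only at the decision level, is what lets the confidence and appearance evidence from both modalities interact throughout the network, which is the contrast the excerpt draws against decision-layer fusion.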