2020
DOI: 10.3390/s20102870
Large-Scale Place Recognition Based on Camera-LiDAR Fused Descriptor

Abstract: In the field of autonomous driving, vehicles are equipped with a variety of sensors, including cameras and LiDARs. However, cameras suffer from illumination changes and occlusion, while LiDARs encounter motion distortion, degenerate environments, and limited ranging distance. Fusing the information from these two sensors therefore deserves exploration. In this paper, we propose a fusion network that robustly captures both image and point cloud descriptors to solve the place recognition problem…
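The abstract above is truncated, so the following is only a rough, hypothetical sketch of the general camera-LiDAR fusion idea it describes, not the paper's actual network: per-modality global descriptors are L2-normalized, concatenated into one fused place descriptor, and matched by nearest neighbor in Euclidean space. The random vectors stand in for learned image and point-cloud encoder outputs, and all names are illustrative.

import numpy as np

def l2_normalize(v):
    # Unit-length scaling so both modalities contribute comparably.
    return v / (np.linalg.norm(v) + 1e-12)

def fuse_descriptors(img_desc, pc_desc):
    # Concatenate per-modality global descriptors into one place descriptor.
    return l2_normalize(np.concatenate([l2_normalize(img_desc),
                                        l2_normalize(pc_desc)]))

def match_place(query, database):
    # Index of the database descriptor nearest to the query (Euclidean).
    return int(np.argmin(np.linalg.norm(database - query, axis=1)))

# Toy usage: random stand-ins for the image and point-cloud encoder outputs.
rng = np.random.default_rng(0)
database = np.stack([fuse_descriptors(rng.normal(size=256), rng.normal(size=256))
                     for _ in range(100)])
query = fuse_descriptors(rng.normal(size=256), rng.normal(size=256))
print("nearest place index:", match_place(query, database))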

Cited by 27 publications (17 citation statements) · References 52 publications

Citation statements:
“…Cattaneo et al (2020) built a shared embedding space for visual and LiDAR data, thus achieving global visual localization on LiDAR maps via place recognition. Some researchers proposed to fuse images and LiDAR points for place recognition (Xie et al, 2020). Similarly, in Pan et al (2020), the authors first built local dense LiDAR maps from raw LiDAR scans, and then proposed a compound network to align the feature embeddings of image and LiDAR…”
Section: Multi-modal Measurements for Robotic Perception (mentioning, confidence: 99%)
“…Xie et al [88] presented a camera-LiDAR sensor fusion method, which robustly captures data from both sensors to solve the 3D place recognition problem. It introduced a trimmed clustering approach on the 3D point cloud to reduce unrepresentative information for better recognition…”
Section: LiDAR-Camera Fusion-Based 3DPR (mentioning, confidence: 99%)
“…The KAIST dataset [178] was proposed by [179] to provide LiDAR and stereo images of complex urban scenes. One of the reviewed studies [88] used the KAIST dataset to perform 3DPR tasks. NYUD2 is a Kinect dataset [180] that was used by one 3DPR study [78] in this survey…”
Section: Datasets (mentioning, confidence: 99%)
“…Deep distance learning is of great significance in learning visual similarity. Recently, a specially designed triplet loss combined with CNN feature extraction has achieved good performance in face recognition [33], person re-identification [34,35], camera-LiDAR place recognition [36] and radar place recognition [37–39] tasks. The main concept behind the triplet loss is to minimize the distances between descriptors of the same category and maximize those between different categories in Euclidean space…”
Section: Introduction (mentioning, confidence: 99%)
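For readers unfamiliar with the triplet loss described in the statement above, here is a minimal sketch of the standard hinge-form triplet loss over Euclidean distances. It is a generic NumPy illustration, not the specially designed loss of any cited paper, and the descriptor values are toy stand-ins.

import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    # Hinge-form triplet loss: require the positive (same place) to be at
    # least `margin` closer to the anchor than the negative (different place).
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy descriptors: the loss is zero once the same-place descriptor is
# sufficiently closer to the anchor than the different-place one.
a = np.array([0.0, 0.0])  # anchor descriptor
p = np.array([0.1, 0.0])  # positive: same place
n = np.array([2.0, 0.0])  # negative: different place
print(triplet_loss(a, p, n))  # 0.0 -> margin already satisfied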