2020 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra40945.2020.9196859
Global visual localization in LiDAR-maps through shared 2D-3D embedding space

Cited by 41 publications (15 citation statements) | References 33 publications
“…For place recognition, however, few methods operate on heterogeneous measurements. Cattaneo et al. (2020) built a shared embedding space for visual and LiDAR data, thus achieving global visual localization on LiDAR maps via place recognition. Other researchers proposed fusing images and LiDAR points for place recognition (Xie et al., 2020).…”
Section: Multi-modal Measurements For Robotic Perception
confidence: 99%
“…The central insight, besides the teacher/student paradigm, is to learn a shared embedding space between the 2D images created with the intersection model and those generated by the aforementioned transformation pipelines. This approach is inspired by the work of Cattaneo et al. [55], which performs visual localization using 2D and 3D inputs in a bi-directional mode, teaching two networks to create a shared embedding space and thus enabling two-way localization, starting either from 2D or 3D inputs. Recalling the metric technique described in the first part of Section 4.2, the teacher/student paradigm introduced some minor changes, in particular to Equation (1).…”
Section: Technical Approach
confidence: 99%
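The teacher/student idea described in this excerpt can be illustrated with a minimal numpy sketch. All names and dimensions below are illustrative assumptions, not taken from either paper: a frozen linear "teacher" defines the shared embedding space for one modality, and a linear "student" for the paired modality is fit (here in closed form, by least squares, rather than by gradient descent) so that its embeddings land in the same space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (dimensions and names are assumptions):
# a frozen linear "teacher" maps features of modality A into a shared
# embedding space; a linear "student" for paired modality-B features is
# fit so its embeddings align with the teacher's.
n, d_in, d_emb = 200, 16, 8
W_teacher = rng.normal(size=(d_in, d_emb))

x_a = rng.normal(size=(n, d_in))                # teacher-modality features
x_b = x_a + 0.01 * rng.normal(size=(n, d_in))   # paired other-modality features

z_teacher = x_a @ W_teacher                     # targets (teacher stays frozen)

# Fit the student in closed form so g(x_b) ~= f(x_a) in the shared space.
W_student, *_ = np.linalg.lstsq(x_b, z_teacher, rcond=None)
z_student = x_b @ W_student

align_err = np.mean(np.sum((z_student - z_teacher) ** 2, axis=1))
print(f"mean alignment error: {align_err:.4f}")
```

In the actual papers both encoders are deep networks trained with gradient descent; the closed-form linear fit here only demonstrates the objective: paired inputs from the two domains should map to nearby points in one shared embedding space.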
“…In order to compare the images generated from the intersection model and those transformed from the RGB images, we propose a teacher/student paradigm aimed at learning a shared embedding space between the two domains. The approach proposed in this work is inspired by the work of Cattaneo et al. [22], which performs visual localization using 2D and 3D inputs in a bi-directional mode, teaching two networks to create a shared embedding space. In a similar way, we conceive the classification problem as a metric-learning task where, given two instances of the same intersection class but in different domains, e.g., Class 0 in Domains D1 and D2 (D1^{C=0} and D2^{C=0}), and two different non-linear functions f(·) and g(·) represented as DNNs, the distance between the embeddings is lower than for any other negative intersection instance, e.g., D2^{c=2}.…”
Section: Teacher/Student Training
confidence: 99%
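The metric-learning constraint in this excerpt (same class across domains closer than any negative instance) is the standard triplet hinge. The sketch below is an assumption-laden toy, not the papers' loss: the embedding vectors are hand-picked 2D points, and the margin value is arbitrary.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Cross-domain metric objective: the anchor (D1, class 0) must be
    closer to the positive (D2, class 0) than to the negative (D2, class 2)
    by at least `margin`; otherwise a hinge penalty is paid."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings (illustrative values): the same class in the other
# domain is nearby, a different class is far away.
emb_d1_c0 = np.array([0.0, 0.0])   # anchor:   domain D1, class 0
emb_d2_c0 = np.array([0.1, 0.0])   # positive: domain D2, class 0
emb_d2_c2 = np.array([3.0, 0.0])   # negative: domain D2, class 2

loss = triplet_margin_loss(emb_d1_c0, emb_d2_c0, emb_d2_c2)
print(loss)  # 0.0: d_pos (0.1) + margin (1.0) is already below d_neg (3.0)
```

When the constraint is violated (positive farther than negative minus the margin), the loss is positive and its gradient pulls same-class embeddings together across domains while pushing different classes apart, which is exactly the behavior the quoted passage describes.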