Robotics: Science and Systems XVI 2020
DOI: 10.15607/rss.2020.xvi.009

OverlapNet: Loop Closing for LiDAR-based SLAM

Abstract: Simultaneous localization and mapping (SLAM) is a fundamental capability required by most autonomous systems. In this paper, we address the problem of loop closing for SLAM based on 3D laser scans recorded by autonomous cars. Our approach utilizes a deep neural network exploiting different cues generated from LiDAR data for finding loop closures. It estimates an image overlap generalized to range images and provides a relative yaw angle estimate between pairs of scans. Based on such predictions, we tackle loop…
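
The abstract describes estimating an overlap between range images generated from LiDAR scans, together with a relative yaw angle between scan pairs. Below is a minimal sketch of the two ingredients a reader might prototype from that description: a spherical projection of a point cloud into a range image, and a simple pixel-wise overlap score. The function names, the 64 by 900 image size, the vertical field of view, and the 1 m validity threshold are illustrative assumptions rather than the authors' implementation, and the published overlap definition additionally reprojects one scan into the other's frame using the relative pose.

```python
import numpy as np

def range_projection(points, fov_up_deg=3.0, fov_down_deg=-25.0, height=64, width=900):
    """Project an (N, 3) LiDAR point cloud onto a height x width range image."""
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = abs(fov_up) + abs(fov_down)

    depth = np.linalg.norm(points, axis=1)
    keep = depth > 0                       # drop degenerate points at the origin
    points, depth = points[keep], depth[keep]
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    yaw = np.arctan2(y, x)                            # azimuth angle in [-pi, pi]
    pitch = np.arcsin(np.clip(z / depth, -1.0, 1.0))  # elevation angle

    # Map azimuth to image columns and elevation to image rows.
    u = 0.5 * (1.0 - yaw / np.pi) * width
    v = (1.0 - (pitch + abs(fov_down)) / fov) * height

    u = np.clip(np.floor(u), 0, width - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, height - 1).astype(np.int32)

    image = np.full((height, width), -1.0, dtype=np.float32)  # -1 marks empty pixels
    image[v, u] = depth
    return image


def overlap_estimate(range_a, range_b, threshold=1.0):
    """Fraction of jointly valid pixels whose absolute range difference is below threshold."""
    valid = (range_a > 0) & (range_b > 0)
    if not valid.any():
        return 0.0
    return float(np.mean(np.abs(range_a[valid] - range_b[valid]) < threshold))
```

Under these assumptions, overlap_estimate(range_projection(scan_a), range_projection(scan_b)) returns a rough overlap score in [0, 1] for two scans already expressed in a common frame.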

Cited by 188 publications (127 citation statements)
References 35 publications

“…Closed-loop detection is an important part of the SLAM system and plays a very important role in eliminating accumulated errors. There are many references to the closed-loop detection, for example [27, 28]. Considering the detection speed, this paper adopted the fast closed-loop detection method mentioned in the literature [19].…”
Section: Methods (mentioning)
Confidence: 99%
“…The work in Reference [34] is related to ours in the sense that it can also be seen as a hierarchical approach to localisation, in the particular case of LiDAR. We see two major distinctions between Reference [34] and our work. Firstly, to achieve rotational invariance we account for the cylindrical nature of the scan formation process in our architecture, while Reference [34] maintains the rotation information and exploits it to extract the yaw displacement from the comparison of the embeddings of two scans.…”
Section: Hierarchical Localisation (mentioning)
Confidence: 99%
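
The distinction drawn in the quoted passage hinges on how a relative rotation between two scans can be recovered: because a LiDAR scan sweeps 360 degrees in yaw, a rotation appears as a circular shift of the columns of the corresponding range images or feature maps, and comparing the two representations over all shifts exposes the yaw displacement. The brute-force circular cross-correlation below is a hedged illustration of that principle only; the function name, the masking of empty pixels, and the scoring are assumptions, not the correlation head of the cited work.

```python
import numpy as np

def estimate_yaw(feat_a, feat_b):
    """Estimate the yaw offset (in degrees) that best aligns feat_b to feat_a.

    feat_a, feat_b: (H, W) range images or feature maps over the same azimuth
    discretisation, with the W columns spanning 360 degrees of yaw.
    """
    # Treat empty pixels (encoded as -1 by the projection sketch above) as zero
    # so they do not contribute to the correlation score.
    a = np.where(feat_a > 0, feat_a, 0.0)
    b = np.where(feat_b > 0, feat_b, 0.0)

    width = a.shape[1]
    scores = np.empty(width)
    for shift in range(width):
        shifted = np.roll(b, shift, axis=1)   # circular shift along the yaw axis
        scores[shift] = np.sum(a * shifted)   # unnormalised correlation score
    best = int(np.argmax(scores))
    return best * 360.0 / width, scores
```

A normalised correlation, or an FFT along the column axis, would make the same comparison cheaper; the explicit loop is kept here only for readability.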
“…Most semantic SLAM methods fuse semantic labels obtained from semantic segmentation and maps generated by the SLAM algorithm to generate 3D maps with semantic information. According to the type of sensor used, semantic SLAM algorithms can be classified as monocular camera-based [17]-[19], stereo camera-based [20], [21], LiDAR-based [22]-[24], multiple sensor-based [25], [26], RGB-D camera-based approaches, and so on. This paper mainly considers semantic SLAM algorithms based on the RGB-D camera.…”
Section: Semantic SLAM (mentioning)
Confidence: 99%