2022
DOI: 10.1177/17298806221078669
Self-supervised learning of LiDAR odometry based on spherical projection

Abstract: Recently, learning-based LiDAR odometry has achieved robust estimation results in the field of mobile robot localization, but most existing methods are built on supervised learning. During network training, these supervised methods rely heavily on ground-truth pose labels, which limits their practical applicability. In contrast, this article proposes a novel self-supervised LiDAR odometry, named SSLO. The proposed SSLO uses only unlabeled point cloud d…
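The title refers to a spherical projection of the LiDAR scan. The paper's exact projection parameters are not given in this excerpt, but a common way to form the range image used by projection-based odometry networks can be sketched as follows (the image size and vertical field of view below are assumptions, matching a typical 64-beam sensor, not values from the paper):

```python
import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an h x w range image.

    fov_up / fov_down: vertical field of view in degrees (sensor-specific
    assumption here, not taken from the paper).
    """
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.maximum(np.linalg.norm(points, axis=1), 1e-8)  # range per point
    yaw = np.arctan2(y, x)                                # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))          # elevation angle

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w                 # column index
    v = (1.0 - (pitch - fov_down_rad) / fov) * h      # row index
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    image = np.zeros((h, w), dtype=np.float32)
    image[v, u] = r   # last point written to a cell wins; real systems keep the nearest
    return image
```

A network can then consume consecutive range images as 2D inputs, which is the usual motivation for this projection: it turns an unordered point cloud into a dense image amenable to standard convolutions.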

Cited by 7 publications (2 citation statements)
References 23 publications
“…PWCLO-Net [25] and SVDLO [46] attempt to process the raw 3D point clouds with PointNet-based models [47]. For the training loss under self-supervision, ICP-like losses [48-50], the point-to-point matching loss [51], the point-to-plane matching loss [18, 20], and the plane-to-plane matching loss [20] are extensively used.…”
Section: LiDAR Odometry
Confidence: 99%
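The matching losses named in the citation statement above differ in what residual they penalize. A minimal sketch of the two most common ones, assuming corresponding point pairs have already been found (correspondence search and the network itself are omitted):

```python
import numpy as np

def point_to_point_loss(src, tgt):
    """Mean squared Euclidean distance between corresponding points (N, 3)."""
    return np.mean(np.sum((src - tgt) ** 2, axis=1))

def point_to_plane_loss(src, tgt, tgt_normals):
    """Mean squared distance measured along the target surface normal.

    Penalizes only the component of the error perpendicular to the local
    surface, so sliding along a plane costs nothing.
    """
    residual = np.sum((src - tgt) * tgt_normals, axis=1)
    return np.mean(residual ** 2)
```

For self-supervised odometry, such a loss is evaluated between the current scan, transformed by the predicted pose, and the previous scan, so no ground-truth pose labels are needed.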
“…The odometry system requires a rotary encoder sensor to detect the number of wheel rotations [28, 29]. Area mapping using the odometry method estimates changes in the robot's position over time in a Cartesian frame [30, 31]. The result is data on the robot's coordinates and heading [32-34].…”
Section: Introduction
Confidence: 99%