2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)
DOI: 10.1109/itsc45102.2020.9294279

DeepCLR: Correspondence-Less Architecture for Deep End-to-End Point Cloud Registration

Abstract: This work addresses the problem of point cloud registration using deep neural networks. We propose an approach to predict the alignment between two point clouds with overlapping data content, but displaced origins. Such point clouds originate, for example, from consecutive measurements of a LiDAR mounted on a moving platform. The main difficulty in deep registration of raw point clouds is the fusion of template and source point cloud. Our proposed architecture applies flow embedding to tackle this problem, whi…
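The abstract sketches the key idea: instead of matching explicit correspondences, the template and source point clouds are fused by a flow-embedding layer. Below is a minimal, illustrative PyTorch sketch of such a layer, assuming FlowNet3D-style k-nearest-neighbor grouping, a shared MLP, and max pooling; the class name FlowEmbedding, the tensor shapes, and the hyperparameters are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn


class FlowEmbedding(nn.Module):
    """Fuses a source point cloud with a template point cloud without explicit correspondences."""

    def __init__(self, feat_dim: int, out_dim: int, k: int = 16):
        super().__init__()
        self.k = k
        # Shared MLP applied to [relative position, source feature, template feature].
        self.mlp = nn.Sequential(
            nn.Linear(3 + 2 * feat_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, src_xyz, src_feat, tmpl_xyz, tmpl_feat):
        # src_xyz: (B, N, 3), src_feat: (B, N, C), tmpl_xyz: (B, M, 3), tmpl_feat: (B, M, C)
        dist = torch.cdist(src_xyz, tmpl_xyz)                        # (B, N, M)
        knn_idx = dist.topk(self.k, dim=-1, largest=False).indices   # (B, N, k)

        # Gather the k nearest template points and their features for every source point.
        batch_idx = torch.arange(src_xyz.shape[0], device=src_xyz.device)[:, None, None]
        nbr_xyz = tmpl_xyz[batch_idx, knn_idx]                       # (B, N, k, 3)
        nbr_feat = tmpl_feat[batch_idx, knn_idx]                     # (B, N, k, C)

        rel_pos = nbr_xyz - src_xyz.unsqueeze(2)                     # (B, N, k, 3)
        src_rep = src_feat.unsqueeze(2).expand(-1, -1, self.k, -1)   # (B, N, k, C)
        fused = torch.cat([rel_pos, src_rep, nbr_feat], dim=-1)

        # Point-wise MLP followed by max pooling over each neighborhood.
        return self.mlp(fused).max(dim=2).values                     # (B, N, out_dim)

A pose regression head (not shown) would then pool these fused features and predict the relative transform, e.g. as a translation vector plus a rotation parametrization.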

Cited by 21 publications (15 citation statements). References 20 publications.
“…The evaluation resulted in Et = 5.4% and Er = 0.0154 deg/m and is currently ranked 121/134. This is remarkable as our model consists of only 3.56% of the number of parameters of DeepCLR [1] which is ranked 118/134 with Et = 4.19% and Er = 0.0087 deg/m (numbers are as of January 25, 2021, the date of submission of this work).…”
Section: Results (mentioning)
confidence: 98%
“…Based on the findings of previous work, which we will summarize in section 2, the main part of this work is dedicated to the introduction of our proposed model in section 3. It is similar to [1] and performs comparably while using only about 3.56% the number of trainable parameters thereof. In sections 4 and 5 the training and evaluation of the model are introduced.…”
Section: Introduction (mentioning)
confidence: 87%
“…Unfortunately, in pure LO no initial guess of the target transformation is given. This can lead to large inaccuracies when applying ICP at high velocities of the moving platform due to the large relative displacement of subsequent point clouds [1]. In recent years LOAM [13] has been considered as a state-of-the-art approach for LO and is also ranked as the best LO-method on the KITTI Vision Benchmark Suite.…”
Section: Related Work (mentioning)
confidence: 99%
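The quoted passage argues that pure lidar odometry has no initial guess, which hurts ICP when consecutive scans are far apart. The following minimal sketch illustrates that effect with Open3D's point-to-point ICP on synthetic data; the displacement values, correspondence threshold, and the constant-velocity prior are illustrative assumptions, not part of the cited work.

import numpy as np
import open3d as o3d

# Two "consecutive scans": the second is the first shifted by a large displacement,
# mimicking a fast-moving platform (values are illustrative).
src_pts = np.random.rand(2000, 3) * 10.0
tgt_pts = src_pts + np.array([1.5, 0.0, 0.0])

source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_pts))
target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(tgt_pts))
estimation = o3d.pipelines.registration.TransformationEstimationPointToPoint()

# Without a prior, ICP starts at the identity; under a large relative displacement
# and a tight correspondence threshold it can settle in a poor local minimum.
no_prior = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.5, init=np.eye(4),
    estimation_method=estimation)

# With a motion prior (e.g. a constant-velocity prediction), the initial guess is
# already close to the true transform and ICP only refines it.
prior = np.eye(4)
prior[0, 3] = 1.4
with_prior = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.5, init=prior,
    estimation_method=estimation)

print("fitness without prior:", no_prior.fitness)
print("fitness with prior:   ", with_prior.fitness)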
“…This makes them suitable for many sensor types such as cameras, lidars, radars, or IMUs. Although per-sensor ego-motion is not directly available from most sensors, algorithms used for registration and mapping are effective tools to estimate it [9]- [11]. Since we want to offer a universal calibration tool that can be deployed on many different sensor setups, our work is focused on motion-based extrinsic calibration.…”
Section: Global (mentioning)
confidence: 99%
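For motion-based extrinsic calibration, the per-sensor motions A_i and B_i over the same time interval constrain the extrinsic X through the classic hand-eye relation A_i X = X B_i. The sketch below solves this relation in a simplified closed form (rotation via Kabsch alignment of rotation vectors, translation via stacked least squares); the function hand_eye and this two-step solver are illustrative assumptions, not the method of the cited papers.

import numpy as np
from scipy.spatial.transform import Rotation as R


def hand_eye(motions_a, motions_b):
    """Solve A_i @ X = X @ B_i for the 4x4 extrinsic X from paired per-sensor motions."""
    # Rotation part: rotvec(A_i) = R_x @ rotvec(B_i), solved by Kabsch alignment.
    a_vecs = np.stack([R.from_matrix(A[:3, :3]).as_rotvec() for A in motions_a])
    b_vecs = np.stack([R.from_matrix(B[:3, :3]).as_rotvec() for B in motions_b])
    H = b_vecs.T @ a_vecs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R_x = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    # Translation part: (R_Ai - I) @ t_x = R_x @ t_Bi - t_Ai, stacked least squares.
    lhs = np.vstack([A[:3, :3] - np.eye(3) for A in motions_a])
    rhs = np.concatenate([R_x @ B[:3, 3] - A[:3, 3] for A, B in zip(motions_a, motions_b)])
    t_x, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)

    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_x, t_x
    return X

At least two motion pairs with non-parallel rotation axes are required for a unique rotation estimate; purely planar motion leaves some degrees of freedom unobservable.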
“…For estimating the required per-sensor ego-motion, registration algorithms [4], [11] or online capable SLAM algorithms like LOAM for lidars [9] or OpenVSLAM for cameras [10] can be used. Additionally, SLAM algorithms are often directly used to estimate the sensor calibration by registering the individual maps of multiple per-sensor SLAM algorithms [14]- [16].…”
Section: Related Work (mentioning)
confidence: 99%