2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00362

Deep Closest Point: Learning Representations for Point Cloud Registration

Abstract: Point cloud registration is a key problem for computer vision applied to robotics, medical imaging, and other applications. This problem involves finding a rigid transformation from one point cloud into another so that they align. Iterative Closest Point (ICP) and its variants provide simple and easily-implemented iterative methods for this task, but these algorithms can converge to spurious local optima. To address local optima and other difficulties in the ICP pipeline, we propose a learning-based method, ti…
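The abstract frames registration as estimating a rigid transform that aligns one point cloud with another, and notes that classical ICP can converge to spurious local optima. For orientation only, here is a minimal sketch of one classical ICP loop (nearest-neighbour matching followed by a closed-form Kabsch/Procrustes update); it is not the paper's learned method, and the array shapes, iteration count, and function names are assumptions made for illustration.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form (Kabsch/Procrustes) rigid transform mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding points.
    Returns R (3x3) and t (3,) minimizing ||R @ src_i + t - dst_i||^2.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, n_iters=20):
    """Vanilla ICP: alternate nearest-neighbour matching and Procrustes updates.

    As the abstract notes, this can converge to a spurious local optimum when
    the initial alignment is poor, which is what the learned method targets.
    """
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(n_iters):
        # Brute-force nearest neighbours (fine for small illustrative clouds).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```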

Citations: Cited by 832 publications (750 citation statements)
References: 39 publications
“…The energies we minimize are very local and cannot establish correspondences between points far apart. Fortunately, it appears that modern methods for rigid alignment [24,32] make this assumption reasonably easy to justify. Our second assumption, in case of the GLP based energy, is that we have triangle mesh connectivities for both the combined point cloud and for each sub-scan.…”
Section: Discussion (mentioning)
Confidence: 99%
“…By embedding a differentiable point-based pose estimator, [9] learns to predict keypoint locations for the task of rotation prediction; however, the formulation predicts only object-category-specific keypoints, which cannot generalise to new scenes. Conversely, [10] registers two point clouds by predicting point-wise descriptors for matching, followed by the same pose estimation formulation.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Inspired by [9], [10], we learn to predict keypoints specialised for localisation by embedding a pose estimator in our architecture and use only pose information as supervision. This avoids imposing any assumptions on what makes suitable keypoints and enables pose prediction at any angle, a limitation of the current state-of-the-art [13].…”
Section: Related Work (mentioning)
Confidence: 99%
“…The pipeline is decomposed into three main components: 'Learning Shape Descriptor', a Multi-Layer Perceptron (MLP) that learns descriptors from the input source and target point sets; 'Coherent PointMorph', a block of three MLPs fed with the two descriptors concatenated with the source data points; and 'Point Set Alignment', where the loss function is defined to determine the quality of the alignment. Deep Closest Point [70] registers two point clouds by first embedding them into a high-dimensional space using DGCNN [101] to extract features. After that, contextual information is estimated using an attention-based module that provides a dependency term between the feature sets, i.e., one set is modified in a way that is knowledgeable about the structure of the other.…”
Section: Transformation Level (mentioning)
Confidence: 99%
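The excerpt above summarises the Deep Closest Point pipeline: per-point features from DGCNN, an attention-based module that makes each feature set aware of the other, and a differentiable alignment step. A rough sketch of the final soft-matching plus SVD stage, assuming the per-point embeddings have already been computed, might look like the following; the temperature value, shapes, and function name are illustrative assumptions, not the authors' code.

```python
import numpy as np

def soft_match_and_align(feat_src, feat_dst, pts_src, pts_dst, temperature=0.01):
    """DCP-style soft matching followed by a closed-form rigid fit (illustrative).

    feat_src: (N, d) source embeddings (e.g. DGCNN + attention); feat_dst: (M, d).
    pts_src:  (N, 3) source coordinates; pts_dst: (M, 3) target coordinates.

    1. Score every source/target pair by feature similarity.
    2. A softmax over the target points gives a soft "pointer" per source point.
    3. Each source point is matched to a convex combination of target points.
    4. Orthogonal Procrustes (SVD) recovers R, t from these soft matches.
    """
    scores = feat_src @ feat_dst.T / temperature       # (N, M) similarities
    scores -= scores.max(axis=1, keepdims=True)         # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)            # soft correspondence weights
    virtual_dst = probs @ pts_dst                         # (N, 3) soft-matched targets

    # Procrustes fit between pts_src and the soft-matched targets.
    src_c, dst_c = pts_src.mean(axis=0), virtual_dst.mean(axis=0)
    H = (pts_src - src_c).T @ (virtual_dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In the paper this stage is differentiable, so the feature networks can be trained end-to-end with a loss on the recovered transform; the NumPy version here drops that training machinery and only shows the geometry of the soft-pointer-plus-SVD idea.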