2019
DOI: 10.1007/978-3-030-36711-4_20
Direct Image to Point Cloud Descriptors Matching for 6-DOF Camera Localization in Dense 3D Point Clouds

Abstract: We propose a novel concept to directly match feature descriptors extracted from RGB images with feature descriptors extracted from 3D point clouds. We use this concept to localize the position and orientation (pose) of the camera of a query image in dense point clouds. We generate a dataset of matching 2D and 3D descriptors, and use it to train a proposed Descriptor-Matcher algorithm. To localize a query image in a point cloud, we extract 2D keypoints and descriptors from the query image. Then the Descriptor-…
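The core of the pipeline the abstract describes is establishing 2D-3D correspondences between image descriptors and point-cloud descriptors. The paper's learned Descriptor-Matcher is not reproduced here; as an illustrative stand-in only, a minimal nearest-neighbour matcher with a Lowe-style ratio test, assuming both descriptor sets already live in a comparable space, might look like:

```python
import numpy as np

def match_2d_3d(desc2d, desc3d, ratio=0.8):
    """Match each 2D image descriptor to its nearest 3D point-cloud
    descriptor, keeping a match only when it passes a ratio test
    against the second-nearest neighbour.

    This is a simplified stand-in for the paper's learned
    Descriptor-Matcher: it assumes the 2D and 3D descriptors are
    directly comparable under Euclidean distance, which in the paper
    is what the trained matcher provides.
    """
    matches = []
    for i, d in enumerate(desc2d):
        dists = np.linalg.norm(desc3d - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))  # (2D index, 3D index)
    return matches

# Toy usage: two query descriptors that are noisy copies of
# point-cloud descriptors 2 and 0 respectively.
desc3d = np.array([[1., 0.], [0., 1.], [1., 1.]])
desc2d = desc3d[[2, 0]] + 0.01
print(match_2d_3d(desc2d, desc3d))  # -> [(0, 2), (1, 0)]
```

The ratio test discards ambiguous matches, which matters here because descriptors bridging two modalities are typically noisier than same-modality matches.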

Cited by 6 publications (9 citation statements) | References 29 publications
“…Our preliminary work [Nadeem et al., 2019] was the first technique to estimate the 6-DOF pose of query cameras by directly matching features extracted from 2D images and 3D point clouds. Feng et al. [2019] trained a deep convolutional network with a triplet loss to estimate descriptors for patches extracted from images and point clouds.…”
Section: Direct 2D-3D Descriptor Matching Based Methods
confidence: 99%
“…A preliminary version of this work appeared in Nadeem et al. [2019]. To the best of our knowledge, Nadeem et al. [2019] was the first work: (i) to directly match 3D descriptors extracted from dense point clouds with 2D descriptors from RGB images, and (ii) to use direct matching of 2D and 3D descriptors to localize the camera pose with 6-DOF in dense 3D point clouds. This work extends Nadeem et al. [2019] by improving all the elements of the proposed technique, including dataset generation, the Descriptor-Matcher, and pose estimation.…”
Section: Introduction
confidence: 99%
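Once 2D-3D correspondences are available, the 6-DOF camera pose follows from a Perspective-n-Point solve. The solver used in the cited work is not reproduced here; as a hedged sketch, a plain Direct Linear Transform over n ≥ 6 noise-free correspondences in normalized image coordinates (no RANSAC, no intrinsics, no refinement) can recover the 3×4 projection matrix:

```python
import numpy as np

def pnp_dlt(pts3d, pts2d):
    """Direct Linear Transform estimate of the 3x4 projection matrix
    from n >= 6 non-degenerate 2D-3D correspondences, where pts2d are
    normalized image coordinates. Each correspondence contributes two
    linear constraints; the solution is the SVD null vector.

    A deliberately simplified, linear stand-in for the robust pose
    solver that would run on top of 2D-3D descriptor matches.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xh = [X, Y, Z, 1.0]
        A.append([*Xh, 0, 0, 0, 0, *(-u * x for x in Xh)])
        A.append([0, 0, 0, 0, *Xh, *(-v * x for x in Xh)])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    # Fix the overall scale (the rotation part of the last row has
    # unit norm); the sign stays ambiguous but projection is invariant.
    return P / np.linalg.norm(P[2, :3])

def project(P, pts3d):
    """Project 3D points through P and dehomogenize."""
    Xh = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]
```

A quick sanity check: place a camera at translation (0, 0, 5) with identity rotation, project a handful of non-coplanar points, and the DLT estimate reprojects them with near-zero error.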
“…Since it is hard to find more recent works on direct alignment of omnidirectional images with a 3D model of an environment, [14], [15] are taken as baselines. Despite the variety of contributions in the field of neural networks, only conventional or rectified images are considered as input to pose detection approaches [16]. One could generate conventional images from omnidirectional ones [17] to feed the latter methods but, in this paper, we focus on using the acquired images directly, without geometric pre-transformation.…”
Section: Introduction
confidence: 99%