Divide and Conquer: Efficient Density-Based Tracking of 3D Sensors in Manhattan Worlds
2017
DOI: 10.1007/978-3-319-54193-8_1

Cited by 29 publications (39 citation statements) · References 30 publications
“…Vanishing point [16,21] and planar structure [18,19,32,44] are two kinds of frequently used visual cues. [18,32,44] decouple rotation and translation to estimate orientation by tracking Manhattan frames. [19] extends this to compute translational motion in a VO system by minimizing the de-rotated reprojection error given the rotation.…”
Section: Visual Odometry Based on Decoupled Pose Estimation
confidence: 99%
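As a rough illustration of the translation-only step described in the excerpt above, the sketch below minimizes a reprojection error over the translation while the rotation is held fixed. It is a hypothetical NumPy/SciPy example, not the implementation of [19]; the pinhole model, the `least_squares` solver, and all function names are assumptions made for illustration.

```python
# Minimal sketch, assuming a pinhole camera and 3D-2D correspondences; the rotation
# R (e.g. from Manhattan-frame tracking) is held fixed and only the translation t is
# optimized, mirroring the de-rotated reprojection-error idea attributed to [19].
# All names (reproj_residuals, estimate_translation) are hypothetical.
import numpy as np
from scipy.optimize import least_squares

def reproj_residuals(t, R, K, pts3d, pts2d):
    """Reprojection residuals of R @ X + t against the observed pixels."""
    cam = (R @ pts3d.T).T + t             # rotate with the known R, then translate
    proj = (K @ cam.T).T                  # apply camera intrinsics
    proj = proj[:, :2] / proj[:, 2:3]     # perspective division
    return (proj - pts2d).ravel()

def estimate_translation(R, K, pts3d, pts2d, t0=np.zeros(3)):
    """Least-squares translation with the rotation held fixed."""
    return least_squares(reproj_residuals, t0, args=(R, K, pts3d, pts2d)).x

# Toy usage: synthesize a consistent problem and recover the translation.
rng = np.random.default_rng(0)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)                             # identity rotation for the toy example
t_true = np.array([0.1, -0.05, 0.2])
pts3d = rng.uniform([-1, -1, 2], [1, 1, 4], size=(50, 3))
cam = (R @ pts3d.T).T + t_true
pts2d = (K @ cam.T).T
pts2d = pts2d[:, :2] / pts2d[:, 2:3]
print(estimate_translation(R, K, pts3d, pts2d))   # ~ t_true
```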
“…Images in this dataset meet the Manhattan World assumption. The dataset is widely used for VO/SLAM [18,19,44] and 3D reconstruction [5]. The ICL-NUIM dataset is synthesized with a full 6-DoF handheld camera and is thus challenging for monocular VO methods due to its complicated motion patterns.…”
Section: Dataset
confidence: 99%
“…From MW surface normal vectors, [25] estimates rotational motion based on maximum a posteriori (MAP) inference of the local Manhattan frame in real time on a GPU. [30] decouples rotation and translation to estimate absolute orientation by tracking the Manhattan frame (MF) with a mean shift algorithm. However, this method suffers from a translation error that grows rapidly over time, as the translational motion is computed by aligning 1D density distributions of the point cloud.…”
Section: Related Work
confidence: 99%
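The 1D density-alignment step attributed to [30] in this excerpt can be pictured with a small sketch. The snippet below is a hypothetical histogram-based variant: it is not the cited algorithm, and the binning, cost function, and names are assumptions chosen for illustration.

```python
# Minimal sketch, assuming a histogram-based variant of the 1D density alignment
# described above: after de-rotation into the Manhattan frame, points are projected
# onto each canonical axis, and the shift that best aligns the current density with
# a reference density gives that axis's translation component.
import numpy as np

def density_1d(coords, bins, extent):
    """Normalized 1D histogram of point coordinates along one Manhattan axis."""
    hist, _ = np.histogram(coords, bins=bins, range=extent, density=True)
    return hist

def align_density(ref, cur, bin_width, max_shift_bins=20):
    """Integer-bin shift of `cur` that best matches `ref` (sum of squared differences)."""
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift_bins, max_shift_bins + 1):
        cost = np.sum((ref - np.roll(cur, s)) ** 2)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift * bin_width

# Toy usage: the same cloud translated by 0.3 along one axis.
rng = np.random.default_rng(1)
pts = rng.normal(0.0, 1.0, size=(5000, 3))
bins, extent = 200, (-5.0, 5.0)
width = (extent[1] - extent[0]) / bins
ref = density_1d(pts[:, 0], bins, extent)
cur = density_1d(pts[:, 0] + 0.3, bins, extent)
print(align_density(ref, cur, width))   # ~ -0.3, the shift mapping cur back onto ref
```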
“…where λ is a weighting factor reflecting how certain the observation of a direction is [30]. The above procedure (lines 2 to 7 of Algorithm 1) is repeated until the change in the estimated rotation of the MF is very small.…”
Section: Tracking Manhattan Frame
confidence: 99%
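The iterative Manhattan-frame update and its convergence test described in this excerpt can be sketched as follows. The snippet is hypothetical: it swaps the mean-shift kernel for a simple weighted tangent-style update, uses `lam` loosely in the role of the weighting factor λ, and stops when successive rotation estimates differ by a tiny angle.

```python
# Minimal sketch, assuming unit surface normals as input and a simple weighted
# update in place of the paper's mean-shift kernel; `lam` stands in for the
# weighting factor λ mentioned above, and the loop stops once the change in the
# estimated Manhattan-frame rotation is very small. Names and thresholds are
# illustrative assumptions.
import numpy as np

def refine_manhattan_frame(normals, R0, lam=1.0, cone_deg=30.0, tol_deg=0.01, max_iter=50):
    """Iteratively refine the MF rotation so its columns align with normal clusters."""
    R = R0.copy()
    cos_cone = np.cos(np.deg2rad(cone_deg))
    for _ in range(max_iter):
        new_axes = []
        for k in range(3):
            axis = R[:, k]
            # Fold normals onto the positive half-space of this axis, keep the cone.
            folded = normals * np.sign(normals @ axis)[:, None]
            sel = folded[folded @ axis > cos_cone]
            if len(sel) == 0:
                new_axes.append(axis)
                continue
            # Weighted update toward the mean of the selected normals.
            mean_dir = axis + lam * (sel.mean(axis=0) - axis)
            new_axes.append(mean_dir / np.linalg.norm(mean_dir))
        # Re-orthonormalize the updated axes (nearest rotation via SVD).
        U, _, Vt = np.linalg.svd(np.stack(new_axes, axis=1))
        R_new = U @ Vt
        # Convergence test: rotation angle between successive estimates.
        angle = np.arccos(np.clip((np.trace(R.T @ R_new) - 1.0) / 2.0, -1.0, 1.0))
        R = R_new
        if np.rad2deg(angle) < tol_deg:
            break
    return R
```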
“…Besides that, unlike [7,8], we also derive a closed form for the translation and analyze the limitations and expected performance of the approach in a set of scene configurations. Some other interesting works assume further hypotheses about the scene geometry, such as the Manhattan World assumption in [14] for scene reconstruction and in [15] for depth registration using principal component analysis of the normal vectors.…”
Section: Main Related Work
confidence: 99%
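The PCA-of-normals idea mentioned for [15] can be illustrated with a short, hypothetical sketch: an eigen-decomposition of the normals' second-moment matrix proposes a scene frame under the Manhattan World assumption. It is a rough stand-in, not the cited registration method.

```python
# Minimal sketch, assuming unit surface normals and a plain eigen-decomposition of
# their second-moment matrix as a stand-in for the PCA-based idea mentioned for
# [15]: in a Manhattan World, normals concentrate around three orthogonal
# directions, so the eigenvectors of sum(n_i n_i^T) suggest a candidate scene frame.
import numpy as np

def manhattan_frame_from_normals(normals):
    """Estimate three orthogonal scene axes from unit surface normals via PCA."""
    M = normals.T @ normals / len(normals)    # 3x3 second-moment matrix
    _, eigvecs = np.linalg.eigh(M)            # ascending eigenvalues, orthonormal columns
    R = eigvecs[:, ::-1].copy()               # dominant axis first
    if np.linalg.det(R) < 0:                  # enforce a right-handed frame
        R[:, -1] *= -1
    return R

# Toy usage: noisy normals clustered around the canonical axes (unequal proportions).
rng = np.random.default_rng(2)
axes = np.eye(3)[rng.choice(3, size=2000, p=[0.5, 0.3, 0.2])]
noisy = axes + 0.05 * rng.normal(size=axes.shape)
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
print(manhattan_frame_from_normals(noisy))    # columns ≈ signed/permuted unit axes
```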