2014
DOI: 10.1007/978-3-319-11752-2_5
Submap-Based Bundle Adjustment for 3D Reconstruction from RGB-D Data

Abstract: The key contribution of this paper is a novel submapping technique for RGB-D-based bundle adjustment. Our approach significantly speeds up 3D object reconstruction with respect to full bundle adjustment while generating visually compelling 3D models of high metric accuracy. While submapping has been explored previously for mono and stereo cameras, we are the first to transfer and adapt this concept to RGB-D sensors and to provide a detailed analysis of the resulting gain. In our approach, we partitio…
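The partitioning step mentioned in the abstract can be sketched as follows. This is a minimal illustration of splitting a frame sequence into overlapping submaps, not the paper's actual algorithm; the submap size and overlap are hypothetical parameters.

```python
def partition_into_submaps(num_frames, submap_size, overlap=1):
    """Split a frame sequence into overlapping submaps.

    Each submap shares `overlap` boundary frames with its neighbour,
    so the locally optimized submaps can later be aligned into one
    global model. Parameter values are illustrative assumptions.
    """
    submaps = []
    start = 0
    while start < num_frames - overlap:
        end = min(start + submap_size, num_frames)
        submaps.append(list(range(start, end)))
        if end == num_frames:
            break
        start = end - overlap  # step back so neighbours overlap
    return submaps
```

Each submap can then be bundle-adjusted independently, which is where the speed-up over full bundle adjustment comes from.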

Cited by 38 publications (20 citation statements)
References 28 publications
“…During continuous motion, the position and attitude of each frame are calculated from the corresponding image sequence; however, due to measurement error, tracking error [7][8][9][10], and other factors, errors inevitably arise in the calculation, and as time and distance accumulate, the prediction error grows rapidly. To eliminate or slow this error accumulation, we must account for the error in the camera's original observations, and the mathematical structure should match the imaging characteristics of the camera as closely as possible [11][12][13][14]. The RGB-D camera positioning model [15] is constructed from the bundle adjustment model.…”
Section: Construction Of Location Modelmentioning
confidence: 99%
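The bundle adjustment model referred to in this snippet minimizes the discrepancy between predicted and observed measurements. A minimal sketch of one RGB-D residual follows; the pinhole intrinsics and the simple reprojection-plus-depth formulation are assumptions for illustration, not the cited paper's exact model.

```python
import numpy as np

def rgbd_residual(R, t, X, uv_obs, d_obs, fx, fy, cx, cy):
    """Reprojection + depth residual for one 3D point in one frame.

    R (3x3), t (3,): camera rotation and translation (world -> camera)
    X (3,): 3D point in world coordinates
    uv_obs: observed pixel (u, v); d_obs: observed depth
    fx, fy, cx, cy: hypothetical pinhole intrinsics (assumptions).
    """
    Xc = R @ X + t                   # point in camera frame
    u = fx * Xc[0] / Xc[2] + cx      # pinhole projection
    v = fy * Xc[1] / Xc[2] + cy
    # Bundle adjustment sums such residuals over all points and frames
    # and minimizes them over poses and point positions.
    return np.array([u - uv_obs[0], v - uv_obs[1], Xc[2] - d_obs])
```

A solver would stack these residuals for every observation and minimize their squared norm over all camera poses and 3D points.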
“…However, the frame-to-model camera tracking of the frameworks above is only of limited use for reconstructing larger scenes. To reduce drift explicitly, recent approaches [1,13,18,22] rely on loop closure detection in combination with global pose optimization. In order to efficiently estimate camera poses in real-time, DVO-SLAM by Kerl et al [10] minimizes a photometric and geometric error to accurately align RGB-D frames.…”
Section: Related Workmentioning
confidence: 99%
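The photometric-plus-geometric error mentioned for DVO-SLAM can be sketched per pixel as below. The fixed weights are an illustrative simplification; DVO-SLAM actually uses robust t-distribution weighting, which is omitted here.

```python
def combined_residual(i_ref, i_cur, z_ref, z_pred, w_photo=1.0, w_geo=1.0):
    """Weighted photometric + geometric error for one pixel pair.

    i_ref, i_cur: intensities of corresponding pixels in two frames
    z_ref, z_pred: measured vs. predicted depth at that pixel
    w_photo, w_geo: hypothetical fixed weights (an assumption; the
    actual method estimates weights robustly).
    """
    r_photo = i_cur - i_ref      # brightness-constancy violation
    r_geo = z_pred - z_ref       # depth disagreement
    return w_photo * r_photo ** 2 + w_geo * r_geo ** 2
```

Summing this quantity over all valid pixels and minimizing it over the relative camera pose yields a frame-to-frame alignment.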
“…The PUT SLAM closes loops locally by frequently matching incoming RGB-D frames to the map, without explicitly identifying already-seen places. This approach is similar to the Bundle Adjustment (BA) method used to efficiently solve the SfM problem [36] and recently applied to real-time visual SLAM [22] and RGB-D-based reconstruction [21]. However, an important difference between the typical BA algorithm and the approach taken in PUT SLAM is that PUT SLAM minimizes the Euclidean errors in the positions of features, whereas vision-only BA minimizes the re-projection error of features onto images.…”
Section: Put Slammentioning
confidence: 99%
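The distinction drawn in this snippet can be made concrete by the two error terms side by side. This is a minimal sketch; the intrinsics and inputs are illustrative assumptions, not values from either system.

```python
import numpy as np

def euclidean_error(X_map, X_meas):
    """PUT-SLAM-style error: 3D distance between a mapped feature and
    its current 3D measurement, both expressed in a common frame."""
    return float(np.linalg.norm(X_map - X_meas))

def reprojection_error(X, uv_obs, fx, fy, cx, cy):
    """Vision-only-BA-style error: pixel distance between a projected
    3D point and its observed image location. The pinhole intrinsics
    fx, fy, cx, cy are hypothetical assumptions."""
    u = fx * X[0] / X[2] + cx
    v = fy * X[1] / X[2] + cy
    return float(np.hypot(u - uv_obs[0], v - uv_obs[1]))
```

Minimizing the first term works directly in 3D space (natural for RGB-D, where depth is measured), while the second operates in image space (natural for monocular cameras, where depth is unobserved).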
“…These benchmarks allow for comparison of new architectures to the solutions already known from the literature [4,21,22,37]. This kind of evaluation, however, usually involves RGB-D data sequences acquired by handheld sensors (Kinect or Xtion) in relatively confined spaces [35] or simulated RGB-D images [14].…”
Section: Introduc Onmentioning
confidence: 99%