2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2015.7354271

Mapping with depth panoramas

Abstract: This work demonstrates the use of depth panoramas in the construction of detailed 3D models of extended environments. The paper describes an approach to the acquisition of such panoramic images using a robotic platform that collects sequences of depth images with a commodity depth sensor. These sequences are stitched into panoramic images that efficiently capture scene information while reducing noise in the captured imagery. Scene structure is extracted from the panoramas and used to register a collection of …
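The abstract describes projecting sequences of depth images into a single panoramic depth image. Below is a minimal Python sketch of that idea, assuming a simple equirectangular (azimuth/elevation) grid and a nearest-range merge per pixel; the grid size and merge rule are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: fuse 3D points (already expressed in the panorama frame) into a
# spherical depth panorama. Grid size and the nearest-point merge rule are
# illustrative assumptions, not the paper's exact procedure.
import numpy as np

def make_depth_panorama(points, width=1024, height=256):
    """points: (N, 3) array of x, y, z coordinates in the panorama's reference frame."""
    pano = np.full((height, width), np.inf, dtype=np.float32)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.sqrt(x**2 + y**2 + z**2)                     # range to each point
    azim = np.arctan2(y, x)                               # azimuth in (-pi, pi]
    elev = np.arcsin(np.clip(z / np.maximum(rng, 1e-9), -1.0, 1.0))  # elevation

    # Map angles to pixel coordinates of an equirectangular grid.
    col = ((azim + np.pi) / (2 * np.pi) * width).astype(int) % width
    row = ((np.pi / 2 - elev) / np.pi * height).astype(int).clip(0, height - 1)

    # Keep the nearest range seen for each pixel (simple z-buffer merge).
    np.minimum.at(pano, (row, col), rng)
    pano[np.isinf(pano)] = 0.0                            # 0 marks pixels with no return
    return pano
```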

Cited by 12 publications (5 citation statements). References 10 publications.
“…LLOL [11] treats a spinning lidar as a streaming sensor by dividing the lidar points into partial scans, and it avoids the problem of insufficient constraints on partial scans by using a circular buffer to guarantee that a complete frame of data is used for registration. In addition, LLOL organizes the map points as a depth panorama [15] to keep the system efficient. Because the pose must be estimated from the data of the complete frame when each partial scan is processed, the system becomes less efficient as the number of partial scans increases.…”
Section: A. Lidar-inertial Odometry (mentioning)
confidence: 99%
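The passage above describes LLOL's circular buffer, which accumulates partial scans until a complete frame is available for registration. The following is a rough sketch of that buffering idea; the column counts and array layout are chosen only for illustration and are not LLOL's actual implementation.

```python
# Sketch of the circular-buffer idea: partial scans are written into a
# fixed-size buffer covering one full revolution, so registration can always
# draw on a complete frame of points. Sizes and layout are assumptions.
import numpy as np

class SweepBuffer:
    def __init__(self, cols_per_sweep=1024, rows=64):
        self.cols = cols_per_sweep
        self.buf = np.zeros((rows, cols_per_sweep, 3), dtype=np.float32)
        self.head = 0          # next column to overwrite
        self.filled = 0        # columns written so far (saturates at cols)

    def push_partial_scan(self, scan):
        """scan: (rows, k, 3) block of points spanning k consecutive columns."""
        k = scan.shape[1]
        idx = (self.head + np.arange(k)) % self.cols
        self.buf[:, idx, :] = scan
        self.head = (self.head + k) % self.cols
        self.filled = min(self.filled + k, self.cols)

    def full_frame(self):
        """Return a complete frame (oldest to newest) once the buffer has wrapped."""
        if self.filled < self.cols:
            return None
        order = (self.head + np.arange(self.cols)) % self.cols
        return self.buf[:, order, :]
```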
“…The local maps support small-scale loop closures by powering local frame-to-model tracking in the spirit of KinectFusion [14], while the pose graph enables large-scale loop closures. The local map representation is a panoramic keyframe [15], [16] recording depth, surface normal, and confidence for each pixel (Fig. 7).…”
Section: Mapping (mentioning)
confidence: 99%
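The passage above describes the local map as a panoramic keyframe that stores depth, surface normal, and confidence per pixel. Here is a minimal sketch of such a structure with a confidence-weighted update in the spirit of KinectFusion-style fusion; the update rule and field layout are assumptions for illustration, not the cited papers' exact formulation.

```python
# Sketch of a panoramic keyframe: per-pixel depth, surface normal, and
# confidence, blended with a confidence-weighted average (an assumption,
# not the cited papers' exact update rule).
import numpy as np

class PanoKeyframe:
    def __init__(self, height=256, width=1024):
        self.depth = np.zeros((height, width), dtype=np.float32)
        self.normal = np.zeros((height, width, 3), dtype=np.float32)
        self.conf = np.zeros((height, width), dtype=np.float32)

    def fuse(self, row, col, depth, normal, weight=1.0):
        """Blend one observation into a pixel, weighted by accumulated confidence."""
        c = self.conf[row, col]
        self.depth[row, col] = (c * self.depth[row, col] + weight * depth) / (c + weight)
        n = c * self.normal[row, col] + weight * np.asarray(normal, dtype=np.float32)
        self.normal[row, col] = n / (np.linalg.norm(n) + 1e-9)  # renormalize
        self.conf[row, col] = c + weight
```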
“…Fast-LIO2 achieves real-time performance on embedded CPUs via an incremental KD Tree and tightly-coupled IMU data. Perhaps the most similar work to ours is UPSLAM [22], which uses a union of depth panoramas [23] as its map representation. It is also a full-fledged SLAM system that includes loop closure.…”
Section: B. Lidar-inertial Odometry (mentioning)
confidence: 99%
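The passage above describes UPSLAM's map as a union of depth panoramas. The sketch below shows one way such a map could be organized, with each pano anchored at a pose and the pano closest to the current sensor position selected for tracking; the selection rule and data layout are illustrative assumptions, not UPSLAM's implementation.

```python
# Sketch: a map kept as a union of posed depth panoramas. Tracking registers
# new data against the pano whose origin is nearest the sensor (an assumed
# selection rule for illustration).
import numpy as np

class PanoMap:
    def __init__(self):
        self.panos = []          # list of (T_world_pano 4x4 pose, depth image)

    def add_pano(self, T_world_pano, depth_image):
        self.panos.append((T_world_pano, depth_image))

    def nearest_pano(self, T_world_sensor):
        """Pick the pano whose origin is closest to the current sensor position."""
        if not self.panos:
            return None
        p = T_world_sensor[:3, 3]
        dists = [np.linalg.norm(T[:3, 3] - p) for T, _ in self.panos]
        return self.panos[int(np.argmin(dists))]
```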
“…Following UPSLAM [22], we employ a depth panorama (pano) [23] as our local map. A pano is similar to a sweep, but it has higher resolution and represents the 3D structure at one point along the sensor's trajectory.…”
Section: B. Local Map Representation (mentioning)
confidence: 99%
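The passage above treats a pano as a higher-resolution sweep anchored at one pose along the sensor's trajectory. The small sketch below back-projects a pano pixel into a world-frame 3D point under that interpretation, assuming an equirectangular layout and a 4x4 homogeneous pose; both conventions are assumptions for illustration and mirror the projection sketch given earlier.

```python
# Sketch: back-project a depth panorama pixel into a 3D point in the world
# frame, given the pano's pose along the trajectory. Equirectangular layout
# and 4x4 pose convention are assumptions.
import numpy as np

def pano_pixel_to_world(row, col, rng, height, width, T_world_pano):
    """row, col: pixel indices; rng: stored range; T_world_pano: 4x4 pose matrix."""
    azim = (col + 0.5) / width * 2 * np.pi - np.pi       # inverse of the column mapping
    elev = np.pi / 2 - (row + 0.5) / height * np.pi      # inverse of the row mapping
    p_pano = rng * np.array([np.cos(elev) * np.cos(azim),
                             np.cos(elev) * np.sin(azim),
                             np.sin(elev)])
    return T_world_pano[:3, :3] @ p_pano + T_world_pano[:3, 3]
```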