2019 19th International Conference on Advanced Robotics (ICAR)
DOI: 10.1109/icar46387.2019.8981603
Predicting Unobserved Space for Planning via Depth Map Augmentation

Abstract: Safe and efficient path planning is crucial for autonomous mobile robots. A prerequisite for path planning is to have a comprehensive understanding of the 3D structure of the robot's environment. On Micro Air Vehicles (MAVs) this is commonly achieved using low-cost sensors, such as stereo or RGB-D cameras. These sensors may fail to provide depth measurements in textureless or IR-absorbing areas and have limited effective range. In path planning, this results in inefficient trajectories or failure to recognize …
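For intuition, the completion step the abstract describes can be sketched as follows. This is a minimal illustrative encoder-decoder in PyTorch, not the network from the paper; the layer sizes, the RGB-plus-depth input encoding, and the hole-filling heuristic are all assumptions made for this example.

```python
# Hypothetical sketch of depth-map augmentation: an encoder-decoder CNN takes
# an RGB image plus an incomplete depth map (zeros where the sensor failed)
# and regresses a dense depth map. Architecture and names are illustrative
# assumptions, not the network used in the paper.
import torch
import torch.nn as nn

class DepthAugmentationNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 4 input channels (RGB + raw depth), downsample twice.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to input resolution, predict one depth channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, rgb, sparse_depth):
        # sparse_depth holds 0 where the stereo/RGB-D sensor returned no value.
        x = torch.cat([rgb, sparse_depth], dim=1)
        dense_depth = self.decoder(self.encoder(x))
        # Keep original measurements where they exist; fill the holes with the
        # network prediction (a common completion heuristic, assumed here).
        valid = (sparse_depth > 0).float()
        return valid * sparse_depth + (1.0 - valid) * dense_depth

# Example: complete a 480x640 depth image with roughly 40% missing pixels.
net = DepthAugmentationNet()
rgb = torch.rand(1, 3, 480, 640)
depth = torch.rand(1, 1, 480, 640) * (torch.rand(1, 1, 480, 640) > 0.4)
completed = net(rgb, depth)
print(completed.shape)  # torch.Size([1, 1, 480, 640])
```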

Cited by 6 publications (3 citation statements). References 36 publications.
“…Recent related work, however, addresses the problem of scene completion and occupancy anticipation from a DL perspective. Fehr et al. [231] use a neural network to augment the measurements of a depth sensor, and Ramakrishnan et al. [232] directly predict augmented OG maps beyond the sensor's field-of-view using auto-encoders (AE). Rather than using raw sensor measurements, Katyal et al. [233] and Hayoun et al. [234] extend an input OG map beyond the line-of-sight, also using AE.…”
Section: A. Prediction Beyond Line-of-sight
mentioning, confidence: 99%
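As a rough illustration of the AE-based map extension this statement describes, here is a minimal sketch assuming a 2D occupancy grid with a three-channel free/occupied/unknown encoding; the architecture and sizes are invented for this example and are not any of the cited networks.

```python
# Illustrative sketch (not the cited architecture): a small convolutional
# auto-encoder takes a partially observed 2D occupancy grid and predicts
# per-cell occupancy beyond the observed line-of-sight. The 3-channel
# encoding and layer sizes are assumptions for this example.
import torch
import torch.nn as nn

class OccupancyAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),  # per-cell occupancy probability
        )

    def forward(self, grid):
        return self.decode(self.encode(grid))

# 128x128 local grid: channel 0 = observed free, 1 = observed occupied,
# 2 = unknown (beyond line-of-sight); the AE fills in the unknown regions.
ae = OccupancyAE()
partial = torch.zeros(1, 3, 128, 128)
partial[:, 2] = 1.0                # everything unknown to start
predicted = ae(partial)
print(predicted.shape)             # torch.Size([1, 1, 128, 128])
```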
“…While many works aim to address the problem of depth completion on a per-image basis, only a few investigate the use of the dense depth output for applications such as reconstruction and navigation. Fehr et al. [14] proposed a planning system that predicts dense depth images using a CNN [2] and uses them as the input for the Voxblox mapping system [15]. Depth estimation has been used in visual SLAM systems to recover metric scale [16,17], and to complete sparse visual features [18,19].…”
Section: B. Depth Completion for 3D Reconstruction and Navigation
mentioning, confidence: 99%
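To make the depth-to-map coupling concrete, below is a simplified back-projection sketch. It stands in for a full TSDF pipeline like Voxblox (which integrates signed-distance fields with ray casting); the intrinsics, grid size, and voxel resolution are illustrative assumptions.

```python
# Simplified, hypothetical stand-in for the depth-to-map step: back-project a
# dense depth image through a pinhole camera model and mark hit voxels in a
# boolean occupancy grid. This only shows why completed (hole-free) depth
# improves map coverage; it is not the Voxblox integration itself.
import numpy as np

def integrate_depth(grid, depth, fx, fy, cx, cy, voxel_size):
    """Mark the voxels hit by one depth image (camera at the grid's x/y center)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                       # sensor holes contribute nothing
    x = (u - cx) * depth / fx               # pinhole back-projection
    y = (v - cy) * depth / fy
    pts = np.stack([x[valid], y[valid], depth[valid]], axis=1)
    # Shift so the camera sits at the grid center in x and y.
    offset = np.array([grid.shape[0] // 2, grid.shape[1] // 2, 0]) * voxel_size
    idx = np.floor((pts + offset) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(grid.shape)), axis=1)
    idx = idx[inside]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# Example: a flat wall 1.5 m away, fully observed after depth completion.
grid = np.zeros((64, 64, 64), dtype=bool)
depth = np.full((480, 640), 1.5)
grid = integrate_depth(grid, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5,
                       voxel_size=0.05)
print(grid.sum(), "voxels marked occupied")
```

With holes in the depth image (zeros), the corresponding rays contribute nothing and the map keeps unknown gaps, which is exactly the planning problem the completed depth is meant to avoid.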