2021 · Preprint
DOI: 10.48550/arxiv.2105.09371
VOILA: Visual-Observation-Only Imitation Learning for Autonomous Navigation

Abstract: While imitation learning for vision-based autonomous mobile robot navigation has recently received a great deal of attention in the research community, existing approaches typically require state-action demonstrations that were gathered using the deployment platform. However, what if one cannot easily outfit their platform to record these demonstration signals or, worse yet, the demonstrator does not have access to the platform at all? Is imitation learning for vision-based autonomous navigation even possible in…
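The setting the abstract describes, imitating from visual observations alone, is often approached in the literature by first inferring the missing action labels (e.g., behavioral cloning from observation). The sketch below illustrates that general recipe on a toy problem; it is a hypothetical illustration of the problem setting, not VOILA's actual algorithm, and every name and dynamics model in it is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of observation-only imitation (a BCO-style recipe, NOT
# VOILA's algorithm): the demonstration contains observations but no actions,
# so an inverse-dynamics model trained on the learner's own experience is
# used to infer the missing action labels before cloning.

ACTIONS = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])

def step(state, a):          # hypothetical dynamics: one unit step per action
    return state + ACTIONS[a]

def observe(state):          # noisy reading standing in for a camera image
    return state + rng.normal(scale=0.05, size=2)

# 1) The learner gathers its OWN (obs, action, next_obs) experience at random.
states = rng.uniform(-5, 5, size=(2000, 2))
acts = rng.integers(0, 4, size=2000)
obs = np.array([observe(s) for s in states])
nxt = np.array([observe(step(s, a)) for s, a in zip(states, acts)])

# 2) Fit a linear inverse-dynamics model (obs, next_obs) -> action by
#    least squares on one-hot action targets.
X = np.hstack([obs, nxt, np.ones((len(obs), 1))])
W, *_ = np.linalg.lstsq(X, np.eye(4)[acts], rcond=None)

def infer_action(o, o_next):
    return int(np.argmax(np.concatenate([o, o_next, [1.0]]) @ W))

# 3) The demonstrator provides OBSERVATIONS ONLY: a trajectory marching
#    steadily toward +x.
demo_states = np.cumsum(np.tile([1.0, 0.0], (50, 1)), axis=0)
demo_obs = np.array([observe(s) for s in demo_states])

# 4) Infer action labels for consecutive demo observations; the resulting
#    (obs, action) pairs can then be cloned with ordinary supervised learning.
labels = [infer_action(demo_obs[i], demo_obs[i + 1])
          for i in range(len(demo_obs) - 1)]
print("inferred demo actions (action 0 = +x should dominate):",
      np.bincount(labels, minlength=4))
```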

Cited by 4 publications (4 citation statements) · References 13 publications
“…At the same time, the whole simulator is highly modular and customizable, and new APIs and functionalities can be added on top of the existing simulator core. Additionally, the simulator allows human experts to control agents directly, supporting synthetic dataset creation and the development of imitation learning approaches [26], [27]. These objectives are obtained through the following implementation choices:…”
Section: Midgard Simulator
Citation type: mentioning · Confidence: 99%
“…Robotic Teleoperation for Mobile Manipulation: IL leverages human demonstrations to learn tasks such as stationary manipulation [10,11,12,13] and navigation [14,15,16,17]. Having human operators remotely control an agent, or teleoperation, is a common approach for collecting demonstrations.…”
Section: Related Work
Citation type: mentioning · Confidence: 99%
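The quoted passage treats teleoperation as the standard route to demonstrations. Below is a minimal, hypothetical sketch of the bookkeeping that makes a teleoperated run usable for imitation learning, pairing each operator command with the concurrent observation; the class and field names are invented for illustration, not a real robot API.

```python
import json
import time
from dataclasses import dataclass, field

# Hypothetical demonstration logger for teleoperation (all names invented):
# each control tick pairs the robot's observation with the operator's command,
# producing the state-action data that imitation learning needs.

@dataclass
class DemoLogger:
    episodes: list = field(default_factory=list)
    _current: list = field(default_factory=list)

    def record(self, observation, action):
        # One tick of the teleoperation loop.
        self._current.append({"t": time.time(),
                              "observation": observation,
                              "action": action})

    def end_episode(self):
        # Close out one teleoperated run.
        self.episodes.append(self._current)
        self._current = []

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.episodes, f)

# Usage: inside the real control loop one would call record() once per tick
# with, e.g., camera features and the operator's velocity command.
logger = DemoLogger()
logger.record(observation=[0.12, -0.03], action=[0.5, 0.1])
logger.end_episode()
logger.save("demos.json")
```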
“…On the other hand, deep learning enables us to learn effective navigation policies from a large amount of experience. The recent development of high-performance navigation simulators, such as Habitat [5] and iGibson [6], has enabled researchers to develop large-scale visual navigation algorithms [7], [8], [9], [10], [11], [12], [13], [14] for indoor environments. Wijmans et al. [9] proposed an end-to-end vision-based reinforcement learning algorithm to train near-perfect agents that can navigate unseen indoor environments without access to the map by leveraging billions of simulation samples.…”
Section: A. Visual Navigation
Citation type: mentioning · Confidence: 99%
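For readers unfamiliar with the "learn navigation policies from experience" framing in this citation, the sketch below runs vanilla REINFORCE on a one-dimensional corridor. It is a deliberately tiny stand-in for the end-to-end vision-based reinforcement learning the quote describes; it is not DD-PPO, and it does not use the Habitat or iGibson APIs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny REINFORCE stand-in for "learning navigation from experience": a 1-D
# corridor of N cells with the goal at the right end, and a softmax policy
# over {left, right} with one logit pair per cell. Toy scale only.

N = 6
theta = np.zeros((N, 2))  # per-cell action logits

def policy(s):
    logits = theta[s]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def rollout(max_steps=3 * N):
    s, traj = 0, []
    for _ in range(max_steps):
        a = rng.choice(2, p=policy(s))
        s2 = min(max(s + (1 if a == 1 else -1), 0), N - 1)
        r = 1.0 if s2 == N - 1 else -0.05   # success bonus, small step cost
        traj.append((s, a, r))
        s = s2
        if s == N - 1:
            break
    return traj

for _ in range(500):
    traj = rollout()
    G = sum(r for _, _, r in traj)          # undiscounted episode return
    for s, a, _ in traj:
        grad = -policy(s)                   # grad of log pi(a|s) for softmax
        grad[a] += 1.0
        theta[s] += 0.1 * G * grad          # REINFORCE ascent step

print("steps to goal after training:", len(rollout()))
```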