2019
DOI: 10.48550/arxiv.1910.13249
Preprint

Navigation Agents for the Visually Impaired: A Sidewalk Simulator and Experiments

Martin Weiss,
Simon Chamorro,
Roger Girgis
et al.

Abstract: Millions of blind and visually-impaired (BVI) people navigate urban environments every day, using smartphones for high-level path planning and white canes or guide dogs for local information. However, many BVI people still struggle to travel to new places. In our endeavour to create a navigation assistant for the BVI, we found that existing Reinforcement Learning (RL) environments were unsuitable for the task. This work introduces SEVN, a sidewalk simulation environment and a neural network-based approach to cr…

Citations: cited by 1 publication (1 citation statement)
References: 15 publications
“…For example, building on R2R/Matterport3D [13,21], annotations for vision-and-dialog navigation [35], asking for help [36], remote embodied referring expressions [37], and multilingual VLN [38,39] have been released. In the outdoor setting, several panoramic image datasets have been proposed including StreetLearn [40,41] and SEVN [42], giving rise to language navigation datasets such as TouchDown [43], Talk2Nav [44] and RUN [45]. With the increasing interest in training embodied agents in panoramic image environments, there is an urgent need to investigate the transfer of these agents to real physical platforms.…”
Section: Related Work
confidence: 99%