2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.329

DeepNav: Learning to Navigate Large Cities

Abstract: We present DeepNav, a Convolutional Neural Network (CNN) based algorithm for navigating large cities using locally visible street-view images. The DeepNav agent learns to reach its destination quickly by making the correct navigation decisions at intersections. We collect a large-scale dataset of street-view images organized in a graph where nodes are connected by roads. This dataset contains 10 city graphs and more than 1 million street-view images. We propose 3 supervised learning approaches for the navigati…
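The setting the abstract describes — a city as a graph whose nodes are street-view locations connected by roads, with a CNN choosing a direction at each intersection — can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the toy graph, the `score_directions` stub, and the greedy `navigate` loop are made-up stand-ins (a real agent would score directions from street-view images with a trained CNN).

```python
# Toy city graph: nodes are street-view locations, edges are roads
# labeled by direction. A real DeepNav-style graph has thousands of nodes.
CITY_GRAPH = {
    "A": {"north": "B", "east": "C"},
    "B": {"south": "A", "east": "D"},
    "C": {"west": "A", "north": "D"},
    "D": {"west": "B", "south": "C"},
}

def score_directions(node, destination):
    """Stand-in for the CNN: score each available direction at `node`.

    Higher is better. A real model would consume the street-view images
    visible at this node; here we use a toy heuristic that prefers edges
    leading directly to the destination.
    """
    return {d: (1.0 if nxt == destination else 0.1)
            for d, nxt in CITY_GRAPH[node].items()}

def navigate(start, destination, max_steps=10):
    """Greedy navigation: at every node, follow the best-scoring direction."""
    path = [start]
    node = start
    for _ in range(max_steps):
        if node == destination:
            break
        scores = score_directions(node, destination)
        best = max(scores, key=scores.get)
        node = CITY_GRAPH[node][best]
        path.append(node)
    return path

print(navigate("A", "D"))  # → ['A', 'B', 'D']
```

The greedy step is the key simplification: the agent never sees the whole graph, only per-intersection scores, which is what makes the supervised-learning formulations in the paper applicable.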

Cited by 49 publications (37 citation statements)
References 33 publications
“…Deep RL has been successfully applied to the robot navigation problem, including visual navigation with simplified navigation controllers [7], [14], [25], [50], [56], [64], more realistic controllers in game-like environments [6], [13], [48], and extracting navigation features from realistic environments [10], [23]. In a local planner setting similar to ours (a differential drive robot with 1-D lidar sensing), several approaches have emerged recently using asynchronous DDPG [59], expert demonstrations [54], DDPG [42], curriculum learning [62], and AutoRL [12].…”
Section: Related Work
confidence: 99%
“…Recently, reinforcement learning (RL) agents [36] have solved complex robot control problems [61], generated trajectories under task constraints [20], demonstrated robustness to noise [19], and learned complex skills [51], [49], making them good choices for dealing with task constraints. Many simple navigation tasks require only low-dimensional sensors and controls, such as lidar and differential drive, and can be solved with easily trainable networks [63], [25], [7]. However, as we increase the complexity of the problem by requiring longer episodes or providing only sparse rewards [18], RL agents become more difficult to train, and RL does not always transfer well to new environments [30], [29].…”
Section: Introduction
confidence: 99%
“…Some works [12], [13] utilized auxiliary tasks during training to improve navigation performance. Others either used a recurrent neural network (RNN) to represent memory [4], [14]–[16] or predicted navigational actions directly from visual observations [8], [17], [18].…”
Section: A Visual Navigation
confidence: 99%
“…On the other hand, current DOBs might have insufficient capability for estimating fast time-varying disturbances, since their convergence analyses often assume that disturbances are time-invariant. In addition, such separated processes of disturbance estimation, disturbance prediction, and control optimization might not produce estimates and control signals that are mutually robust and that jointly optimize performance, as evidenced in [6,28].…”
Section: Disturbance Rejection
confidence: 99%