2022
DOI: 10.48550/arxiv.2202.02440
Preprint

Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation

Abstract: In reinforcement learning for visual navigation, it is common to develop a model for each new task and train it from scratch with task-specific interactions in 3D environments. However, this process is expensive: massive amounts of interaction are needed for the model to generalize well. Moreover, the process is repeated whenever the task type or goal modality changes. We present a unified approach to visual navigation using a novel modular transfer learning model. Our model can eff…
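The abstract is truncated here, but the "plug & play modular" idea it describes can be pictured concretely: task-specific goal encoders plug into a shared visual encoder and navigation policy, so changing the task type or goal modality swaps a small module instead of retraining from scratch. The sketch below is illustrative only; every class and module name (ModularNavAgent, goal_encoders, the stub dimensions) is an assumption, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ModularNavAgent(nn.Module):
    """Hypothetical plug-and-play agent: shared perception and policy,
    with swappable per-modality goal encoders (names are illustrative)."""

    def __init__(self, visual_encoder: nn.Module, policy: nn.Module,
                 goal_encoders: dict):
        super().__init__()
        self.visual_encoder = visual_encoder                # shared across tasks
        self.policy = policy                                # transferred as-is
        self.goal_encoders = nn.ModuleDict(goal_encoders)   # plug & play part

    def forward(self, observation, goal, modality: str):
        obs_feat = self.visual_encoder(observation)
        goal_feat = self.goal_encoders[modality](goal)      # pick the module
        return self.policy(torch.cat([obs_feat, goal_feat], dim=-1))

# Example wiring with stub modules; all dimensions are arbitrary.
agent = ModularNavAgent(
    visual_encoder=nn.Linear(128, 64),
    policy=nn.Linear(128, 4),                  # e.g. fwd / left / right / stop
    goal_encoders={
        "image": nn.Linear(512, 64),           # image-goal embedding
        "object": nn.Embedding(21, 64),        # object-category goal
    },
)
logits = agent(torch.randn(1, 128), torch.randn(1, 512), "image")  # shape (1, 4)
```

Under this reading, retargeting the agent to object goals only requires plugging in (or training) the "object" encoder; the visual encoder and policy are reused unchanged.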

Cited by 1 publication (13 citation statements) | References 42 publications

“…During pre-processing, we further augment the dataset by sampling goal-images at four evenly-spaced heading angles to produce 36M total episodes for training. Sampling at multiple angles approximates the randomized sampling used in [18].…”
Section: Methods
Mentioning (confidence: 99%)
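
The four-heading augmentation quoted above is easy to make concrete: for each goal location, render a goal image at headings spaced 90° apart rather than at one randomized heading. A minimal sketch follows, assuming a simulator rendering callable; render_goal_image is a hypothetical stand-in, not an API from the paper.

```python
import math

def evenly_spaced_headings(n: int = 4):
    """Return n headings in radians, evenly spaced over [0, 2*pi).
    For n=4: 0, 90, 180, and 270 degrees."""
    return [2.0 * math.pi * i / n for i in range(n)]

def augment_goal_images(goal_position, render_goal_image, n_headings: int = 4):
    """Render one goal image per heading, multiplying the episode
    count for this goal by n_headings."""
    return [render_goal_image(goal_position, heading)
            for heading in evenly_spaced_headings(n_headings)]
```

If each goal gets exactly four images, the 36M total episodes the statement mentions would correspond to roughly 9M base episodes (36M / 4).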
“…We perform large-scale experiments on three ObjectNav datasets: Gibson [4], MP3D [8], and HM3D [20]. Our zero-shot agent (that has not seen a single 3D semantic annotation or ObjectNav training episode) achieves a 31.3% success in Gibson environments, which is a 20.0% absolute improvement over previous zero-shot results [18]. In MP3D, our agent achieves 15.3% success, a 4.2% absolute gain over existing zero-shot methods [21].…”
Section: Zero-shot ObjectNav
Mentioning (confidence: 99%)