2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01581
Auxiliary Tasks and Exploration Enable ObjectGoal Navigation

Cited by 87 publications (66 citation statements)
References 13 publications
“…In our experiments, we use offline datasets of 20K images or fewer, in which the "annotations" are actually automatic object detections. This is several orders of magnitude smaller than the number of interactions usually needed to train a target-specific policy (tens to hundreds of millions) [45,69].…”
Section: Joint Goal Embedding Learning
Confidence: 97%
“…Termination criterion. We aim for complete map construction; however, in the iGibson environment, all robots have physical bodies and occasionally get stuck [47]. Hence, following [16], our algorithm terminates when there is no accessible frontier left in the environment.…”
Section: Methods
Confidence: 99%
“…Perhaps most related to our work is that of Ye et al. [46], who show that using auxiliary tasks can improve PointGoal navigation results in the Gibson environment [44]. Differ-…”
Section: Related Work
Confidence: 95%