2022
DOI: 10.48550/arxiv.2203.03682
Preprint

Monocular Robot Navigation with Self-Supervised Pretrained Vision Transformers

Cited by 1 publication (1 citation statement)
References 0 publications
“…There is also work showing that a pretrained visual encoder can improve the performance of navigation agents in real environments. In [29], a pretrained DINO [30] model was used as the visual encoder and fine-tuned on 70 RGB images with coarse semantic segmentation labels collected in a real environment. The results show that the robot was able to perform the visual navigation task well in a real environment.…”
Section: Related Work
confidence: 99%