2019 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2019.8794178

Look No Deeper: Recognizing Places from Opposing Viewpoints under Varying Scene Appearance using Single-View Depth Estimation

Abstract: Visual place recognition (VPR), the act of recognizing a familiar visual place, becomes difficult when there is extreme environmental appearance change or viewpoint change. Particularly challenging is the scenario where both phenomena occur simultaneously, such as when returning for the first time along a road at night that was previously traversed during the day in the opposite direction. While such problems can be solved with panoramic sensors, humans solve this problem regularly with limited field of view v…

Cited by 29 publications (18 citation statements)
References 55 publications
“…Radenovic et al. [42] propose a pipeline to obtain large 3D scene reconstructions from unordered images and use these reconstructions as ground truth for training a Generalized Mean (GeM) pooling layer with hard positive and negative mining. Garg et al. [18], on the other hand, use single-view depth predictions to recognize places revisited from opposite directions. Also addressing extreme viewpoint changes, Garg et al. [19] suggest semantically aggregating salient visual information.…”
Section: Related Work
confidence: 99%
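For readers unfamiliar with the GeM layer mentioned in the statement above, here is a minimal PyTorch sketch of Generalized Mean pooling. The function name is ours, not from [42], and the exponent p is typically a learned parameter, fixed here for brevity:

```python
import torch
import torch.nn.functional as F

def gem_pool(feature_map: torch.Tensor, p: float = 3.0, eps: float = 1e-6) -> torch.Tensor:
    """Generalized Mean (GeM) pooling over a CNN feature map.

    feature_map: (batch, channels, H, W) activations.
    Returns L2-normalized (batch, channels) global descriptors.
    """
    # Clamp so the p-th power is well defined for non-positive activations.
    x = feature_map.clamp(min=eps).pow(p)
    pooled = x.mean(dim=(-2, -1)).pow(1.0 / p)
    return F.normalize(pooled, dim=-1)
```

With p = 1 this reduces to average pooling; as p grows it approaches max pooling, so a learned p interpolates between the two.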
“…To this end, place recognition approaches rely on various mapping frameworks and map update mechanisms. Regarding the mapping frameworks, three main approaches are highlighted: database [26], [27], topological [28]-[30], or topological-metric [31], [32].…”
Section: B How Are Places Recognized and Remembered?
confidence: 99%
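To make the distinction between these mapping frameworks concrete, here is a hypothetical Python sketch (all names are ours, not from the cited works). Places are nodes holding descriptors and traversals are edges:

```python
from dataclasses import dataclass, field

@dataclass
class TopologicalMap:
    """Hypothetical topological map: places are nodes, traversals are edges."""
    descriptors: list = field(default_factory=list)  # one descriptor per place
    edges: set = field(default_factory=set)          # (i, j) place adjacency

    def add_place(self, descriptor, prev=None):
        """Append a place; link it to the previously visited place, if any."""
        self.descriptors.append(descriptor)
        idx = len(self.descriptors) - 1
        if prev is not None:
            self.edges.add((prev, idx))
        return idx
```

A topological-metric variant would additionally store a relative pose on each edge, while a pure database approach would keep only the descriptor list and discard connectivity.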
“…Hierarchical methods combine multiple matching approaches in a single place recognition framework. For example, the coarse-to-fine architecture [27], [52] selects top candidates in a coarse tier, and from those, selects the best match in a fine tier.…”
Section: Pre-trained Framework
confidence: 99%
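A minimal sketch of such a coarse-to-fine pipeline follows, assuming NumPy and a caller-supplied fine-tier scoring function; the function and its signature are illustrative, not taken from [27] or [52]:

```python
import numpy as np

def coarse_to_fine_match(query_global, db_globals, fine_score, top_k=10):
    """Two-tier matcher in the spirit of coarse-to-fine place recognition.

    query_global: (D,) global descriptor of the query image.
    db_globals:   (N, D) global descriptors of the mapped places.
    fine_score:   callable(db_index) -> float, an expensive re-ranking score
                  (e.g. local-feature matching with geometric verification).
    """
    # Coarse tier: cheap cosine similarity over all N places.
    sims = db_globals @ query_global / (
        np.linalg.norm(db_globals, axis=1) * np.linalg.norm(query_global) + 1e-12)
    candidates = np.argsort(-sims)[:top_k]
    # Fine tier: re-rank only the shortlisted candidates.
    return max(candidates, key=fine_score)
```

The design point is cost: the coarse tier touches every place with a cheap dot product, while the expensive fine tier runs on only top_k candidates.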
“…Estimating depth from RGB images is a challenging problem that finds applications in a wide range of fields such as augmented reality [30], 3D reconstruction [16], self-driving cars [19], place recognition [11], etc. The recent success of deep learning has spurred research in this field, producing several new approaches that now outperform traditional methods relying on handcrafted features and exploiting camera geometry and/or camera motion for depth and pose estimation from monocular or stereo image sequences (video).…”
Section: Introduction
confidence: 99%
“…The modular encoder-decoder architecture provides the flexibility to use the encoder module as a feature extractor that extracts or selects useful features from input images. Such feature extractors serve several applications, such as pose estimation [18], Visual Place Recognition (VPR) [12], [11], object detection [41], and segmentation [6]. We demonstrate one such application, showing that the adapted features obtained from our encoder module provide superior place recognition accuracy compared to other state-of-the-art feature representations in the literature.…”
Section: Introduction
confidence: 99%
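As an illustration of using an encoder as a frozen feature extractor for VPR, here is a short sketch with a stock torchvision backbone. This is a generic example, not the adapted encoder the quote describes, and it assumes torchvision ≥ 0.13 for the weights API:

```python
import torch
import torchvision.models as models

# Truncate a pre-trained backbone before its pooling/classifier head and
# reuse the frozen convolutional encoder as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder = torch.nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool + fc
encoder.eval()

with torch.no_grad():
    fmap = encoder(torch.randn(1, 3, 224, 224))  # (1, 512, 7, 7) feature map
    descriptor = fmap.mean(dim=(-2, -1))         # pooled global descriptor for VPR
```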