2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020
DOI: 10.1109/iros45743.2020.9341703
Online Visual Place Recognition via Saliency Re-identification

Abstract: As an essential component of visual simultaneous localization and mapping (SLAM), place recognition is crucial for robot navigation and autonomous driving. Existing methods often formulate visual place recognition as feature matching, which is computationally expensive for many robotic applications with limited computing power, e.g., autonomous driving and cleaning robots. Inspired by the fact that human beings always recognize a place by remembering salient regions or landmarks that are more attractive or inte…

Cited by 17 publications (12 citation statements)
References 34 publications
“…The works in the first category focus on the localization on fine granularity scales, such as landmarks [3,5,46,49] or at city-scale granularity [1,17,25,39,44]. In general, the solutions that are employed for such problems are based on retrieval systems that match the query images with ones from a background collection and then apply a post-processing scheme to estimate the final location.…”
Section: Related Work
confidence: 99%
“…a particular kind of scenes, e.g., landmarks [3,5,46], to be from a particular area [1,17,44], or on whether we are using different inputs, e.g., a sequence of images per scene [2,31,32], or aerial imagery [24,34,43]. In this study, we focus on global-scale location estimation from single images, which is the most challenging problem setting.…”
confidence: 99%
“…In [45], an unsupervised semi-semantic auto-encoder model DeepLab_AE was designed to obtain semantic features of scenes. The authors of [46] proposed a place recognition method via re-identification of salient objects. Oh et al [47] performed loop detection by matching graphs of detected objects.…”
Section: Semantic-Information-Based Loop Closure Detection
confidence: 99%
“…DBoW2 [34], OpenFABMAP [33], SRLCD [46], and BoWT-LCD [39] were selected to complete comparative experiments with SVG-Loop. DBoW2 and OpenFABMAP are the most popular and practical visual-based methods in loop closure detection.…”
Section: Outdoor Dataset
confidence: 99%
“…For example, in autonomous exploration, an efficient and robust object encoding benefits the decision process when a robot revisits a specific landmark object [1]. Without the capability of object re-identification, a semantic SLAM system may easily drift and subsequently leads to an unreliable localization [2]. However, existing object encoding methods easily produce false matches due to viewpoint or scaling changes, hence a robust and efficient object encoding method is necessary for many robotic applications.…”
Section: Introduction
confidence: 99%