2017
DOI: 10.1109/lra.2017.2662061

SRAL: Shared Representative Appearance Learning for Long-Term Visual Place Recognition

Cited by 45 publications (41 citation statements)
References 32 publications
“…We demonstrate our approach on three benchmark datasets, which have been extensively tested in recent literature [10], [26], [27]. The datasets are Oxford RobotCar, Nordland, and Gardens Point Walking.…”
Section: Experimental Methods (mentioning)
confidence: 99%
“…For quantitative evaluation and comparison, we use precision-recall curves as a metric following (Sunderhauf et al 2015; Zhang, Han, and Wang 2016), where a high area under the curve means both high recall (corresponding to a low false negative rate) and high precision (corresponding to a low false positive rate). Inspired by the conclusion drawn by (Han et al 2017) that HOG features perform the best among raw visual feature types, we extract HOG features from landmarks as the input for generating the integrated representation with the proposed method. As is shown in Fig.…”
Section: Study of the Orthogonality of the Solutions of Our New Methods (mentioning)
confidence: 99%
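The precision-recall evaluation described in the statement above can be sketched in a few lines of Python; the scores and labels below are illustrative toy data, not results from the cited papers:

```python
# Hedged sketch: precision-recall evaluation of place-matching scores.
# The function name and the toy data are illustrative assumptions.

def precision_recall(scores, labels):
    """Sweep a decision threshold over similarity scores and return
    (recall, precision) pairs, ordered from strictest to loosest threshold."""
    # Consider candidate matches from highest to lowest score.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp = fp = 0
    points = []
    for i in order:
        if labels[i]:
            tp += 1   # true positive: a correct place match accepted
        else:
            fp += 1   # false positive: a wrong match accepted
        points.append((tp / total_pos, tp / (tp + fp)))
    return points

# Toy example: 1 = correct place match, 0 = wrong match.
scores = [0.95, 0.90, 0.60, 0.55, 0.30]
labels = [1, 1, 0, 1, 0]
curve = precision_recall(scores, labels)
```

A curve that stays near precision 1.0 as recall grows toward 1.0 yields a high area under the curve, which is the property the quoted evaluation rewards.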
“…After obtaining the learned representation, which integrates landmark and holistic information, we can calculate matching scores using cosine similarity between the query image and each template image in the projected subspace (Naseer et al 2014; Han et al 2017), and then determine whether two locations match by comparing the score with a user-defined threshold. Compared with existing long-term place recognition methods that use either holistic information or semantic landmarks only, our new method is more advantageous since it learns an integrated representation that captures both insights.…”
Section: Visual Place Recognition via Integrated Image Representations (mentioning)
confidence: 99%
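The threshold-based cosine-similarity matching step described above can be sketched as follows; the descriptors and the 0.8 threshold are illustrative assumptions, not values from the cited work:

```python
import math

# Hedged sketch: match a query descriptor against template descriptors
# via cosine similarity, accepting the best match only if it clears a
# user-defined threshold. All values here are illustrative.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_place(query, templates, threshold=0.8):
    """Return (best_index, best_score) if the best template clears the
    threshold, else (None, best_score)."""
    scores = [cosine_similarity(query, t) for t in templates]
    best = max(range(len(scores)), key=lambda i: scores[i])
    if scores[best] >= threshold:
        return best, scores[best]
    return None, scores[best]

# Toy descriptors: the first template is visually similar to the query.
query = [0.2, 0.9, 0.1]
templates = [[0.1, 0.8, 0.2], [0.9, 0.1, 0.0]]
idx, score = match_place(query, templates)
```

Raising the threshold trades recall for precision, which is exactly the trade-off the precision-recall evaluation above measures.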
“…Thus, most methods based on visual cues used global features, such as GIST (Latif et al 2014), HOG (Naseer et al 2014), and CNN features (Sünderhauf et al 2015), to construct representations of the holistic scene in the robot view. Besides using a single type of feature, several approaches integrated multiple types of features to encode places (Pronobis et al 2010; Han et al 2017).…”
Section: Representations for Loop Closure Detection (mentioning)
confidence: 99%
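For intuition about the HOG features mentioned in the statement above, a minimal HOG-style orientation histogram can be sketched in pure Python; production systems use library implementations (e.g. OpenCV or scikit-image), and the single-cell layout and bin count here are illustrative simplifications:

```python
import math

# Hedged sketch: a single-cell HOG-style descriptor. Real HOG divides
# the image into cells and blocks; this toy version computes one
# magnitude-weighted orientation histogram over a grayscale patch.

def hog_descriptor(image, n_bins=9):
    """Histogram of unsigned gradient orientations over a 2-D grayscale
    patch (list of rows), weighted by gradient magnitude, L2-normalised."""
    h, w = len(image), len(image[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]   # central differences
            gy = image[y + 1][x] - image[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            hist[int(ang / 180.0 * n_bins) % n_bins] += mag
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]

# Toy 4x4 patch with a vertical intensity edge: all gradient energy
# falls into the horizontal-orientation bin.
patch = [[0, 0, 10, 10]] * 4
desc = hog_descriptor(patch)
```

Descriptors like this (or CNN features, as the survey notes) can then feed the cosine-similarity matching sketched earlier.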