2018
DOI: 10.1109/lra.2018.2859916
Learning Context Flexible Attention Model for Long-Term Visual Place Recognition

Citation Types: 0 supporting · 99 mentioning · 0 contrasting
Cited by 105 publications (99 citation statements)
References 39 publications
“…Other than environmental variations, texture-less and low-informative scenes also pose difficulty to place matching. We show examples of all these challenges taken from public VPR datasets [2], [3], [4] in Fig. 1.…”
Section: Introduction (mentioning)
confidence: 99%
“…Chen et al. in [7] presented a VPR approach that identifies pivotal landmarks by directly extracting prominent patterns from the responses of late convolutional layers of a deep object-centric VGG-16 model. Recently, Chen et al. in [8] introduced a context-flexible attention model and combined it with a pre-trained object-centric VGG-16, fine-tuned on SPED [24], to learn more powerful condition-invariant regional features. The system has shown state-of-the-art performance on datasets with severe condition variations.…”
Section: Literature Review (mentioning)
confidence: 99%
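For intuition about the attention-based regional weighting this excerpt describes, here is a minimal, hypothetical sketch of attention-weighted pooling over a late convolutional feature map. The 1x1-convolution scoring head, the softmax normalization, and the pooling choice are assumptions of this sketch, not the architecture published in [8].

```python
import torch
import torch.nn as nn

class SpatialAttentionPooling(nn.Module):
    """Hypothetical attention-weighted pooling over a late conv feature
    map (e.g., VGG-16 conv5 responses), sketching the general idea of
    emphasizing informative regions; not the exact model from [8]."""

    def __init__(self, channels: int):
        super().__init__()
        # A 1x1 conv scores every spatial location (assumed scoring head).
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) activations from a pre-trained backbone.
        b, c, h, w = feats.shape
        logits = self.score(feats).view(b, -1)            # (B, H*W)
        mask = torch.softmax(logits, dim=1).view(b, 1, h, w)
        # Attention-weighted sum pooling yields one descriptor per image.
        return (feats * mask).sum(dim=(2, 3))             # (B, C)
```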
“…Using the learned codebook, the F_L regions of the benchmark test/reference traverses are quantized in (7) to predict the clusters or labels Z_L, where α is the quantization function. Employing the region-based features F_L, the predicted labels Z_L, and the regional codebook C_L, the summed residue v corresponding to each u-th region can be retrieved using (8).…”
Section: Regional Vocabulary and Extraction of VLAD for Image Matching (mentioning)
confidence: 99%
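The quantize-then-accumulate step this excerpt paraphrases is the standard VLAD recipe. Below is a minimal NumPy sketch under that assumption, reusing the excerpt's symbols F_L (regional features), C_L (regional codebook), Z_L (predicted labels), and v (summed residues); the normalization steps are common VLAD practice rather than a transcription of the cited equations (7) and (8).

```python
import numpy as np

def vlad_aggregate(F_L: np.ndarray, C_L: np.ndarray) -> np.ndarray:
    """Standard VLAD sketch: quantize regional features against a learned
    codebook and accumulate per-cluster residuals.

    F_L: (U, D) array, one D-dim descriptor per region u.
    C_L: (K, D) array of K codewords (the regional codebook).
    Returns a flattened, L2-normalized (K*D,) VLAD descriptor.
    """
    # Quantization (the alpha function in the excerpt): nearest codeword.
    dists = np.linalg.norm(F_L[:, None, :] - C_L[None, :, :], axis=2)
    Z_L = np.argmin(dists, axis=1)                 # labels, shape (U,)

    # Summed residues: v[k] accumulates (f_u - c_k) over regions in cluster k.
    v = np.zeros_like(C_L)
    for u, k in enumerate(Z_L):
        v[k] += F_L[u] - C_L[k]

    # Intra-normalize each cluster, then flatten and L2-normalize.
    v /= np.maximum(np.linalg.norm(v, axis=1, keepdims=True), 1e-12)
    v = v.ravel()
    return v / max(np.linalg.norm(v), 1e-12)
```

Image matching then reduces to comparing the VLAD descriptors of query and reference traverses, e.g., by inner product.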
“…Despite their success, these approaches typically rely on CNN models that are pre-trained on various computer vision datasets [41] using millions of images [5], [6], [38]. Training CNN models in an end-to-end fashion specifically for VPR has also recently been proposed [4], [38], [42]. However, they still use common network architectures, i.e., AlexNet [43], VGG [44], ResNet [45], with slight changes to perform VPR.…”
Section: B. Deep Neural Network for Visual Place Recognition (mentioning)
confidence: 99%
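As a concrete illustration of the common baseline this excerpt refers to (an off-the-shelf pre-trained architecture reused for VPR with little or no change), here is a minimal sketch built on torchvision's ImageNet-trained VGG-16; the global-average-pooled descriptor and cosine matching are illustrative choices, not a method from the cited works.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Off-the-shelf VGG-16 conv stack as a holistic place-descriptor extractor.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

@torch.no_grad()
def describe(img: torch.Tensor) -> torch.Tensor:
    """img: (3, H, W) tensor, ImageNet-normalized. Returns a (1, 512)
    L2-normalized descriptor from the last conv feature map."""
    fmap = vgg(img.unsqueeze(0))        # (1, 512, h, w) late conv responses
    desc = fmap.mean(dim=(2, 3))        # global average pooling
    return F.normalize(desc, dim=1)

# Place matching: cosine similarity between query and reference descriptors,
# e.g. similarity = describe(query_img) @ describe(ref_img).T
```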