Query Adaptive Similarity for Large Scale Object Retrieval
2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2013)
DOI: 10.1109/cvpr.2013.211

Cited by 73 publications (48 citation statements)
References 17 publications
“…While the average distance between a word and its neighbors is regularized to be almost constant in [92], the idea of democratizing the contribution of individual embeddings was later employed in [18]. In [20], Tolias et al. show that VLAD and HE share a similar nature and propose a new match kernel that trades off between local feature aggregation and feature-to-feature matching, using a matching function similar to [91]. They also demonstrate that using more bits (e.g., 128) in HE is superior to the original 64-bit scheme, at the cost of reduced efficiency.…”
Section: Hamming Embedding and Its Improvements
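
The 64- versus 128-bit trade-off mentioned in this statement is easy to illustrate. Below is a minimal Python sketch of HE-style signature matching, not the implementation from any cited paper: visual-word assignment is assumed to have already happened, and the threshold fraction and Gaussian weighting constant are illustrative placeholders.

import numpy as np

def hamming_distance(sig_a: np.ndarray, sig_b: np.ndarray) -> int:
    """Number of differing bits between two {0,1} signature vectors."""
    return int(np.count_nonzero(sig_a != sig_b))

def he_match_score(query_sigs, db_sigs, n_bits=128, threshold_ratio=0.375,
                   sigma=16.0):
    """Accumulate a match score over signature pairs assigned to the same
    visual word. Pairs are kept only if their Hamming distance falls below
    a fixed fraction of the signature length; surviving pairs contribute a
    Gaussian-shaped weight, so closer signatures count more.
    """
    threshold = int(n_bits * threshold_ratio)  # e.g. 48 out of 128 bits
    score = 0.0
    for q in query_sigs:
        for d in db_sigs:
            h = hamming_distance(q, d)
            if h < threshold:
                score += np.exp(-(h ** 2) / (sigma ** 2))
    return score

# Toy usage: random 128-bit query and database signatures.
rng = np.random.default_rng(0)
qs = rng.integers(0, 2, size=(10, 128))
ds = rng.integers(0, 2, size=(20, 128))
print(he_match_score(qs, ds))

Doubling the signature length from 64 to 128 bits separates true from false matches better, but each comparison touches twice as many bits, which is the efficiency penalty the statement refers to.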
“…It exploits the vector-to-hyperplane distance while retaining the efficiency of the inverted index. Further, Qin et al. [91] design a higher-order match kernel within a probabilistic framework and adaptively normalize the local feature distances by the distance distribution of false matches. This method is similar in spirit to [92], in which the word-to-word distance, rather than the feature-to-feature distance [91], is normalized according to the neighborhood distribution of each visual word.…”
Section: Hamming Embedding and Its Improvements
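
The normalization idea attributed to [91] can be sketched as follows. This is a hedged illustration of the intuition only, with hypothetical names (adaptive_similarity, negative_pool); the actual method is formulated probabilistically, whereas here raw distances are simply standardized against a pool of known false matches.

import numpy as np

def adaptive_similarity(query_feat, db_feats, negative_pool):
    """Convert raw Euclidean distances from one query feature into
    similarities that are comparable across query features, by
    standardizing against that feature's own distances to a pool of
    known non-matching (false) features.
    """
    d = np.linalg.norm(db_feats - query_feat, axis=1)           # candidates
    d_neg = np.linalg.norm(negative_pool - query_feat, axis=1)  # false matches
    mu, sd = d_neg.mean(), d_neg.std() + 1e-8
    z = (d - mu) / sd    # how unusually close each candidate is
    return np.exp(-z)    # unusually close (z << 0) -> large similarity

# Toy usage with 128-D descriptors.
rng = np.random.default_rng(1)
q = rng.normal(size=128)
cand = rng.normal(size=(5, 128))
neg = rng.normal(size=(1000, 128))
print(adaptive_similarity(q, cand, neg))

The point of the standardization is that a "small" distance then means the same thing for every query feature, regardless of how crowded its neighborhood in descriptor space is.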
“…All relevant learning-based approaches fall into one or both of the following two categories: (i) learning for an auxiliary task (e.g., some form of distinctiveness of local features [4,15,30,35,58,59,90]), and (ii) learning on top of shallow hand-engineered descriptors that cannot be fine-tuned for the target task [2,9,24,35,57]. Both are opposite in spirit to the core idea behind deep learning, which has provided a major boost in performance across various recognition tasks: end-to-end learning.…”
Section: Related Work
“…Second, during matching verification, the Hamming distance between two binary features can be computed efficiently via XOR operations, whereas the Euclidean distance between floating-point vectors is expensive to compute. Previous work along this line includes Hamming Embedding (HE) [1] and its variants [10], [11], which use binary SIFT features for verification. Binary features have also been used to encode spatial context [12] and heterogeneous cues such as color [13].…”
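
A short sketch of why the XOR route is cheap: binary signatures can be packed into machine words, so one XOR plus a population count replaces a per-dimension subtract-square-accumulate. The helper names below are hypothetical.

import numpy as np

def pack_bits(bits: np.ndarray) -> np.ndarray:
    """Pack a {0,1} vector (length a multiple of 8) into uint8 words."""
    return np.packbits(bits.astype(np.uint8))

def hamming_xor(packed_a: np.ndarray, packed_b: np.ndarray) -> int:
    """Hamming distance on packed words: XOR marks the differing bits,
    and counting the set bits of the result gives the distance."""
    return int(np.unpackbits(packed_a ^ packed_b).sum())

# Example: two 64-bit signatures packed into 8 bytes each.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 64)
b = rng.integers(0, 2, 64)
assert hamming_xor(pack_bits(a), pack_bits(b)) == int((a != b).sum())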