2016 Data Compression Conference (DCC)
DOI: 10.1109/dcc.2016.23

Tiny Descriptors for Image Retrieval with Unsupervised Triplet Hashing

Abstract: A typical image retrieval pipeline starts with the comparison of global descriptors from a large database to find a short list of candidate matches. A good image descriptor is key to the retrieval pipeline and should reconcile two contradictory requirements: providing recall rates as high as possible and being as compact as possible for fast matching. Following the recent successes of Deep Convolutional Neural Networks (DCNN) for large scale image classification, descriptors extracted from DCNNs are increasing…


Cited by 14 publications (15 citation statements)
References 29 publications (53 reference statements)
“…number of nearest centroids n for the m-k-means-n1 method, and Hamming distance threshold (H). The proposed approaches are compared with several state-of-the-art methods, among which the recent UTH method [31]; while some of these methods were originally proposed for engineered features, they have been evaluated on CNN features (results reported from [31]). Retrieval is performed using the coarse-to-fine approach used in [24], where the hash is used to select a candidate list of images and CNN descriptors are used to re-rank this list.…”
Section: Results on INRIA Holidays, Oxford 5K and Paris 6K
confidence: 99%
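The coarse-to-fine retrieval described in this citation statement — compact hashes to shortlist candidates, full CNN descriptors to re-rank — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the database layout, shortlist size, and similarity choices are assumptions.

```python
def hamming(a: int, b: int) -> int:
    # Hamming distance between two hash codes stored as integers
    return bin(a ^ b).count("1")

def cosine(u, v):
    # cosine similarity between two float descriptors
    dot = sum(x * y for x, y in zip(u, v))
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(y * y for y in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def coarse_to_fine(db, query_code, query_feat, shortlist=2):
    # db: list of (image_id, hash_code, cnn_descriptor) triples (assumed layout)
    # coarse step: rank the whole database by Hamming distance on compact hashes
    cands = sorted(db, key=lambda r: hamming(r[1], query_code))[:shortlist]
    # fine step: re-rank only the shortlist by cosine similarity of full descriptors
    cands.sort(key=lambda r: -cosine(r[2], query_feat))
    return [r[0] for r in cands]

db = [
    ("a", 0b1111, [1.0, 0.0]),
    ("b", 0b1110, [0.0, 1.0]),
    ("c", 0b0000, [1.0, 1.0]),
]
ranking = coarse_to_fine(db, query_code=0b1111, query_feat=[1.0, 0.0])
```

The point of the two stages is that the cheap Hamming scan touches every database entry, while the expensive float-descriptor comparison touches only the shortlist.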
“…In this experiment (Table V) we compare the behavior of retrieval performance for different lengths of the hash code (for m-k-means-t1) and for different values of n nearest neighbors (for m-k-means-n1):

[60]           0.381  0.225
PCAHash [54]   0.528  0.239
LSH [61]       0.431  0.239
SKLSH [62]     0.241  0.134
SH [2]         0.522  0.232
SRBM [63]      0.516  0.212
UTH [31]       0.571  0.240
m-k-means-n1   —      —

Experiments were made on SIFT1M for different values of recall@R. Fig.…”
Section: Results Varying Hash Code Length and n
confidence: 99%
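The recall@R metric mentioned in the quote is the fraction of queries whose true nearest neighbor appears within the top R retrieved results. A minimal sketch (the list-of-rankings input format is an assumption, not the paper's evaluation code):

```python
def recall_at_r(ranked_lists, ground_truth, r):
    # ranked_lists[q]: retrieval ranking (list of item ids) for query q
    # ground_truth[q]: id of the true nearest neighbor of query q
    hits = sum(
        1
        for q, ranking in enumerate(ranked_lists)
        if ground_truth[q] in ranking[:r]
    )
    return hits / len(ranked_lists)
```

For example, with rankings [1, 2, 3] and [4, 5, 6], ground truth [2, 9], and R = 2, only the first query finds its neighbor in the top 2, so recall@2 is 0.5.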
“…Hashing, which uses mapping functions to transform a high-dimensional feature vector into compact and expressive binary codes [1,2,3], has shown significant success for fast image retrieval. In recent years, with the rapid development of Convolutional Neural Networks (CNN), several CNN-based hashing methods [4,5,6,7,8,9,10] have been proposed and have demonstrated promising results. In particular, unsupervised hashing has recently received increasing attention because it does not require labeled training data, which makes the methods widely applicable.…”
Section: Introduction
confidence: 99%
“…In particular, unsupervised hashing has recently received increasing attention because it does not require labeled training data, which makes the methods widely applicable. The earliest studies use stacked Restricted Boltzmann Machines (RBMs) to encode binary codes [8,9] for unsupervised hashing. However, RBMs are complex and require pre-training, which makes them inefficient for practical applications.…”
Section: Introduction
confidence: 99%