2015 Data Compression Conference
DOI: 10.1109/dcc.2015.54

Compact Global Descriptors for Visual Search

Abstract: The first step in an image retrieval pipeline consists of comparing global descriptors from a large database to find a short list of candidate matching images. The more compact the global descriptor, the faster the descriptors can be compared for matching. State-of-the-art global descriptors based on Fisher Vectors are represented with tens of thousands of floating point numbers. While there is significant work on compression of local descriptors, there is relatively little work on compression of high-dimensional…
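The speed claim in the abstract is easy to make concrete. Below is a minimal sketch, not taken from the paper, of the first pipeline stage once global descriptors have been compressed to binary codes: candidate matching reduces to XOR-and-popcount Hamming distances over packed codes. The database size, code length, and variable names are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_db, n_bits = 100_000, 256                      # hypothetical database size / code length
db = rng.integers(0, 2, size=(n_db, n_bits), dtype=np.uint8)
query = rng.integers(0, 2, size=n_bits, dtype=np.uint8)

db_packed = np.packbits(db, axis=1)              # 256 bits -> 32 bytes per image
q_packed = np.packbits(query)

# Hamming distance: XOR the packed codes, then count set bits per row.
xor = np.bitwise_xor(db_packed, q_packed)
dists = np.unpackbits(xor, axis=1).sum(axis=1)

shortlist = np.argsort(dists)[:100]              # top-100 candidates for re-ranking
print(shortlist[:5], dists[shortlist[:5]])
```

With 32 bytes per image instead of tens of thousands of floats, the whole database scan above fits comfortably in memory, which is the point the abstract is making.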

Cited by 12 publications (18 citation statements). References 24 publications.
“…In this experiment (Table V) we compare the behavior of retrieval performance for different lengths of the hash code (for m-k-means-t1) and for different values of n nearest neighbors (for m-k-means-n1). Experiments were made for SIFT1M for different values of recall@R. Fig.…”

Comparison values embedded in the quote (mAP, Holidays / Oxford 5K):

Method          Holidays   Oxford 5K
BPBC [60]       0.381      0.225
PCAHash [54]    0.528      0.239
LSH [61]        0.431      0.239
SKLSH [62]      0.241      0.134
SH [2]          0.522      0.232
SRBM [63]       0.516      0.212
UTH [31]        0.571      0.240

Section: Results Varying Hash Code Length and N
confidence: 99%
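The recall@R protocol referenced in this quote can be sketched as follows. The function and array names are hypothetical, and the exact SIFT1M ground truth used by the citing paper is not reproduced here: a query counts as a hit if its true (exact Euclidean) nearest neighbor appears among the R database items closest in Hamming distance on the hash codes.

```python
import numpy as np

def recall_at_r(code_dists, true_nn, R):
    """code_dists: (n_queries, n_db) Hamming distances between hash codes.
    true_nn:      (n_queries,) index of each query's exact nearest neighbor.
    Returns the fraction of queries whose true neighbor ranks in the top R."""
    # Rank database items by code distance for every query, keep the top R.
    ranks = np.argsort(code_dists, axis=1)[:, :R]
    hits = (ranks == true_nn[:, None]).any(axis=1)
    return float(hits.mean())

# Toy usage: 3 queries, 5 database items, R = 2.
d = np.array([[1, 4, 2, 5, 3],
              [0, 2, 1, 3, 4],
              [5, 1, 4, 2, 3]])
print(recall_at_r(d, np.array([0, 0, 3]), R=2))  # -> 1.0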
“…To evaluate the effect of network weight initialization in the pre-training stage, we present mAP results on the Holidays dataset at an output size of 64 bits (see Figure 2(b)), comparing (1) the SRBM-based network weights proposed in our previous work [15] with (2) random unit-norm network weights (denoted UniW). In addition, we report results combining the proposed UTH scheme with either SRBM (denoted UTH SRBM) or UniW (denoted UTH UniW).…”
Section: Results
confidence: 99%
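A plausible reading of the "random unit-norm network weights" (UniW) baseline is sketched below. The row-wise normalization and the sign threshold are our assumptions, not details confirmed by the quote.

```python
import numpy as np

def unit_norm_weights(in_dim, out_bits, seed=0):
    # Assumption: each output unit's weight vector is Gaussian,
    # rescaled to unit L2 norm (hence "unit-norm").
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((out_bits, in_dim))
    return W / np.linalg.norm(W, axis=1, keepdims=True)

def binary_code(x, W):
    # Assumption: codes are obtained by sign-thresholding the projections.
    return (W @ x > 0).astype(np.uint8)

W = unit_norm_weights(in_dim=4096, out_bits=64)   # 64-bit output, as in the quote
x = np.random.default_rng(1).standard_normal(4096)
print(binary_code(x, W))
```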
“…Here, we progressively decrease the dimensionality of hidden layers by a factor of 2, and train several RBMs with varying number of hidden layers and output units to optimize parameters. More details are available in our previous work [15].…”
Section: Unsupervised Triplet Hashing
confidence: 99%
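The halving schedule described in this quote is simple to write down. A sketch, assuming the stack starts at the input descriptor dimensionality and stops at the target code length (both values illustrative); the RBM training itself is elided, only the layer-size schedule is shown.

```python
def rbm_layer_sizes(input_dim, code_bits):
    """Hidden dimensionality halves at each stacked RBM until the
    target code length is reached."""
    sizes = [input_dim]
    while sizes[-1] // 2 >= code_bits:
        sizes.append(sizes[-1] // 2)
    if sizes[-1] != code_bits:
        sizes.append(code_bits)   # land exactly on the code length
    return sizes

print(rbm_layer_sizes(4096, 64))  # [4096, 2048, 1024, 512, 256, 128, 64]
```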
Comparison table embedded in the quote (mAP):

Method          Holidays   Oxford 5K   Paris 6K
ITQ [11]        53.68      23.00       -
BPBC [10]       38.10      22.51       -
PCAHash [11]    52.80      23.90       -
LSH [6]         43.08      23.91       -
SKLSH [26]      24.09      13.39       -
SH [32]         52.22      23.24       -
SRBM [4]        51.58      21.23       -
UTH [20]        57.10      …           …

“…In the second experiment we evaluate a use case in which a database of images is queried with a large number of images that do not belong to it.…”
Section: Methods
confidence: 99%
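The out-of-database scenario in this last quote implies a rejection rule. A minimal sketch, assuming a hypothetical Hamming-distance threshold tau; the citing paper's actual decision rule is not shown in the quote.

```python
import numpy as np

def query_with_rejection(dists, tau):
    """dists: Hamming distances from one query code to all database codes.
    Returns the best match index, or None if the query looks external."""
    best = int(np.argmin(dists))
    return best if dists[best] <= tau else None

# Toy usage: with tau = 10, a query whose closest code is 37 bits away
# is judged not to belong to the database.
print(query_with_rejection(np.array([37, 52, 44]), tau=10))  # -> None
```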