Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval 2012
DOI: 10.1145/2348283.2348293
Manhattan hashing for large-scale image retrieval

Cited by 87 publications (66 citation statements)
References 32 publications
“…To do so, we test against three publicly available image datasets: 22k LabelMe, consisting of 22,019 images represented as 512-dimensional Gist descriptors [8]; CIFAR-10, a dataset of 60,000 images represented as 512-dimensional Gist descriptors; and 100k TinyImages, a collection of 100,000 images represented by 384-dimensional Gist descriptors, randomly sub-sampled from the original 80 million Tiny Images dataset. The datasets and associated features are identical to those used in previous related work [5], [4]. This ensures that our results are directly comparable to previously published figures.…”
Section: Datasets
Mentioning confidence: 58%
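For quick reference, the dataset setup described in the quoted passage can be written as a small configuration sketch. The dict layout below is an illustrative assumption; the image counts and Gist dimensionalities are taken directly from the citation statement above.

```python
# Illustrative summary of the evaluation datasets named in the quoted passage.
# The structure is an assumption for readability; the numbers come from the
# citation statement (image counts and Gist descriptor dimensions).
DATASETS = {
    "22k LabelMe":     {"n_images": 22_019,  "descriptor": "Gist", "dim": 512},
    "CIFAR-10":        {"n_images": 60_000,  "descriptor": "Gist", "dim": 512},
    "100k TinyImages": {"n_images": 100_000, "descriptor": "Gist", "dim": 384},
}
```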
“…In contrast, vanilla LSH thresholds at zero and assigns a single bit per hyperplane. Existing multiple-bit quantisation schemes seek to position the thresholds based solely on information within the projected space, for example by using k-means clustering on each projected dimension [5], [4]. In this work we show that the affinity between the data points in the original space can be a valuable signal for optimal threshold positioning.…”
Section: Neighbourhood Preserving Quantisation
Mentioning confidence: 86%
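The contrast drawn in the quoted passage, single-bit LSH thresholding at zero versus multi-bit quantisation that places thresholds via k-means on each projected dimension, can be illustrated with a short sketch. This is not the authors' reference implementation: the projection width, the number of bits per dimension, and helper names such as `multibit_codes` are assumptions made for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def project(X, n_dims):
    """Random hyperplane projections (the usual LSH projection stage)."""
    W = rng.standard_normal((X.shape[1], n_dims))
    return X @ W

def lsh_codes(P):
    """Vanilla LSH: threshold each projected dimension at zero -> one bit each."""
    return (P > 0).astype(np.uint8)

def multibit_codes(P, bits_per_dim=2):
    """Quantise each projected dimension into 2**bits_per_dim levels using 1-D
    k-means; the sorted level index acts as a natural binary code, so Manhattan
    distance between indices reflects distance along that dimension."""
    k = 2 ** bits_per_dim
    codes = np.empty(P.shape, dtype=np.uint8)
    for d in range(P.shape[1]):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(P[:, d:d + 1])
        order = np.argsort(km.cluster_centers_.ravel())   # centroids, low to high
        rank = np.empty(k, dtype=np.uint8)
        rank[order] = np.arange(k, dtype=np.uint8)        # centroid id -> level
        codes[:, d] = rank[km.labels_]
    return codes

def manhattan_distance(a, b):
    """Manhattan (L1) distance between integer level codes."""
    return np.abs(a.astype(int) - b.astype(int)).sum(axis=-1)

# Toy usage on synthetic 512-D descriptors (stand-ins for Gist features).
X = rng.standard_normal((1000, 512))
P = project(X, n_dims=16)
single_bit = lsh_codes(P)                    # 16 x 1-bit codes per image
levels = multibit_codes(P, bits_per_dim=2)   # 16 dims x 2 bits per image
print(manhattan_distance(levels[0], levels[1:4]))
```

The key design point is that each 1-D k-means places thresholds where the projected values actually cluster, and the ordered level index preserves the along-dimension geometry under Manhattan distance, which is what the multi-bit schemes cited as [5] and [4] exploit.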
“…Kulis et al. have extended LSH functions to a learned metric [33], which can also be considered a supervised method. Besides these methods, several other hashing methods have been proposed to address different aspects of modelling and computation, including semantic hashing [34], random maximum margin hashing [35], Manhattan hashing [36], dual-bit quantization hashing [37], spherical hashing [38] and k-means hashing [39].…”
Section: Related Work
Mentioning confidence: 99%