Proceedings of the First ACM Workshop on Large-Scale Multimedia Retrieval and Mining 2009
DOI: 10.1145/1631058.1631075

An efficient key point quantization algorithm for large scale image retrieval

Abstract: We focus on the problem of large-scale near duplicate image retrieval. Recent studies have shown that local image features, often referred to as key points, are effective for near duplicate image retrieval. The most popular approach for key point based image matching is the clustering-based bag-of-words model. It maps each key point to a visual word in a code-book that is constructed by a clustering algorithm, and represents each image by a histogram of visual words. Despite its success, there are two main shortcomings…
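For orientation, the following is a minimal sketch of the clustering-based bag-of-words pipeline the abstract describes: a k-means code-book is built from sampled descriptors, and each key point is hard-assigned to its nearest visual word to form a per-image histogram. The use of NumPy/scikit-learn, the function names, and the vocabulary size are illustrative assumptions, not the paper's implementation.

```python
# Sketch of clustering-based bag-of-words quantization (assumed helper names).
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_codebook(descriptors: np.ndarray, n_words: int = 1000) -> MiniBatchKMeans:
    """Cluster a sample of local descriptors (e.g. 128-D SIFT) into visual words."""
    codebook = MiniBatchKMeans(n_clusters=n_words, random_state=0)
    codebook.fit(descriptors)
    return codebook

def bow_histogram(image_descriptors: np.ndarray, codebook: MiniBatchKMeans) -> np.ndarray:
    """Hard-assign each key point to its nearest visual word and count occurrences."""
    words = codebook.predict(image_descriptors)                       # nearest-centroid lookup
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)                                # L1-normalize the histogram
```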

Cited by 14 publications (24 citation statements) | References 25 publications

Citation statements (ordered by relevance):
“…Comparison is conducted with seven recent retrieval algorithms on local feature quantization, including visual vocabulary tree [4], Hamming embedding [6], soft assignment [5], binary SIFT [21], product quantization with inverted file structure [16], vector of locally aggregated descriptor [22], and random seeding approach [23]. The experiments demonstrate that our method achieves competitive performance in terms of accuracy, efficiency, and memory usage.…”
Section: Introduction
Mentioning (confidence: 99%)
“…SIFT was chosen to find a "visually similar" image, hence improving the image retrieval accuracy with respect to just using low level attributes. SIFT was also combined with a Bag-Of-Words (BOW) model in [6], [7], and [8]. In [6] the computational complexity of SIFT feature clustering was examined when the features are quantized in the BOW model.…”
Section: Review Of Existing Methods
Mentioning (confidence: 99%)
“…SIFT was also combined with a Bag-Of-Words (BOW) model in [6], [7], and [8]. In [6] the computational complexity of SIFT feature clustering was examined when the features are quantized in the BOW model. Instead of using clustering, a random seed algorithm was proposed to make the quantization process faster and more accurate.…”
Section: Review Of Existing Methods
Mentioning (confidence: 99%)
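The random-seeding idea mentioned in the statement above can be sketched roughly as follows: randomly sampled descriptors act directly as the code-book, and quantization reduces to a nearest-seed lookup. The optional distance threshold, the function names, and the brute-force distance computation are assumptions for illustration, not the algorithm as published.

```python
# Sketch of quantization with a randomly seeded code-book (assumed details).
import numpy as np

def random_seed_codebook(descriptors, n_seeds=1000, seed=0):
    """Pick n_seeds descriptors at random to act directly as visual words."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(descriptors), size=n_seeds, replace=False)
    return descriptors[idx]

def quantize(image_descriptors, seeds, max_dist=None):
    """Assign each descriptor to its nearest seed; optionally drop far-away points."""
    # Brute-force squared Euclidean distances; a KD-tree or ANN index would be used at scale.
    d2 = ((image_descriptors[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)
    if max_dist is not None:
        words = words[np.sqrt(d2.min(axis=1)) <= max_dist]  # assumed thresholding step
    return words
```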
“…Since these descriptors are high dimensional vectors such as 128 dimensions with larger computational costs, Sivic and Zisserman [7] quantized the SIFT descriptors by a single depth based k-means tree to achieve run-time object retrieval throughout a movie database. These quantized descriptors are usually called visual words, which are generated by a vocabulary tree [2], [8], [9], [10]. Visual words are collected to depict an object and exploited to a classification of objects by ignoring a position of each visual word [11].…”
Section: Related Work
Mentioning (confidence: 99%)
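For completeness, here is a rough sketch of the vocabulary-tree quantization referenced in the last statement: descriptors are clustered hierarchically with branching factor k, and a query descriptor is quantized by descending the tree, comparing against only k centroids per level rather than the full vocabulary. The class layout, parameters, and use of scikit-learn are illustrative assumptions rather than a specific paper's implementation.

```python
# Sketch of hierarchical k-means (vocabulary tree) quantization (assumed structure).
import numpy as np
from itertools import count
from sklearn.cluster import KMeans

class VocabTreeNode:
    def __init__(self, centroids=None, children=None, word_id=None):
        self.centroids = centroids      # (k, d) centroids of the children, internal nodes only
        self.children = children or []
        self.word_id = word_id          # leaf nodes carry a visual-word id

def build_tree(descriptors, k=10, depth=3, leaf_ids=None):
    """Hierarchical k-means: recursively split descriptors into a k-ary tree."""
    leaf_ids = leaf_ids if leaf_ids is not None else count()
    if depth == 0 or len(descriptors) < k:
        return VocabTreeNode(word_id=next(leaf_ids))
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(descriptors)
    children = [build_tree(descriptors[km.labels_ == i], k, depth - 1, leaf_ids)
                for i in range(k)]
    return VocabTreeNode(centroids=km.cluster_centers_, children=children)

def quantize(descriptor, node):
    """Descend the tree greedily to the leaf (visual word) nearest the descriptor."""
    while node.children:
        i = int(np.argmin(((node.centroids - descriptor) ** 2).sum(axis=1)))
        node = node.children[i]
    return node.word_id
```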