2015
DOI: 10.1007/s00530-015-0470-9
Optimized residual vector quantization for efficient approximate nearest neighbor search

Abstract (excerpt): …and centroids are calculated. The lower bound is used to filter out non-nearest centroids, reducing computational cost. Combined with this method, ERVQ is noticeably faster at quantizing vectors. To evaluate how accurately vectors are approximated by their quantization outputs, an ERVQ-based exhaustive method for approximate nearest neighbor search is implemented. Experimental results on three datasets demonstrate that our approaches outperform the state-of-t…
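The abstract mentions a lower bound between a vector and the centroids that is used to skip non-nearest centroids. The paper's exact bound is not given in this excerpt; a common construction with the same effect is the reverse triangle inequality, |‖r‖ − ‖c‖| ≤ ‖r − c‖, which lets precomputed centroid norms prune full distance computations. The sketch below (function name hypothetical, not the authors' code) illustrates that idea:

```python
import numpy as np

def nearest_centroid_pruned(r, centroids, centroid_norms):
    """Find the nearest centroid to r, pruning with a lower bound.

    Reverse triangle inequality: |‖r‖ - ‖c‖| <= ‖r - c‖, so any
    centroid whose bound already exceeds the best distance found
    so far cannot be the nearest and is skipped.
    """
    rn = np.linalg.norm(r)
    bounds = np.abs(centroid_norms - rn)
    best_k, best_d = -1, np.inf
    for k in np.argsort(bounds):          # visit tightest bounds first
        if bounds[k] >= best_d:
            break                         # sorted: the rest cannot win
        d = np.linalg.norm(r - centroids[k])
        if d < best_d:
            best_k, best_d = k, d
    return best_k, best_d
```

Because the candidates are visited in order of increasing lower bound, the loop can stop at the first bound that exceeds the current best distance instead of scanning every centroid.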

Cited by 22 publications (28 citation statements)
References 33 publications (47 reference statements)
“…Thus, there are scalar quantization algorithms specifically designed to optimize evaluation metrics of the fields [20,21]. Furthermore, for high-dimensional vector quantization, there have been multiple codebook-based algorithms with various optimization strategies [22,23]. However, despite these developments in the area, to the best of the authors' knowledge, hitherto there has not been publication discussing scalar quantization algorithms as sparse least square optimization.…”
Section: Related Work
confidence: 99%
“…However, the contribution of each layer to the minimization of the quantization error is not the same, as the last layers contribute much less than the first ones. This algorithm has been extended by several other methods [17], [18].…”
Section: Residual Vector Quantization
confidence: 99%
“…The authors introduce several layers of quantization, as each layer quantizes the residuals of the previous layer. Improvements on this method have been proposed in [17], [18]. In [19], Babenko et al propose a method again based on quantization by addition of several codevectors, yet they remove the hierarchy between layers of residual quantization, providing a high level of freedom for the quantization codevectors.…”
confidence: 99%
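The citation statements above describe the core of residual vector quantization: each layer quantizes the residual left by the previous layer, and the reconstruction is the sum of the selected codevectors across layers. A minimal sketch of that scheme (hypothetical function names, not the cited authors' implementation):

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual VQ: each layer quantizes the previous layer's residual.

    codebooks is a list of (K, d) arrays, one per layer. Returns the
    chosen code index per layer and the final residual.
    """
    residual = np.asarray(x, dtype=float).copy()
    codes = []
    for C in codebooks:
        k = int(np.argmin(np.linalg.norm(C - residual, axis=1)))
        codes.append(k)
        residual = residual - C[k]   # pass what's left to the next layer
    return codes, residual

def rvq_decode(codes, codebooks):
    """Reconstruct by summing the selected codevector of every layer."""
    return sum(C[k] for k, C in zip(codes, codebooks))
```

Each successive layer can only shrink (or leave unchanged) the residual norm, which is why, as the quote notes, later layers contribute much less to reducing the quantization error than the first ones; additive schemes such as Babenko et al.'s remove this hierarchy and select all codevectors jointly.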
“…Data clustering is a fundamental data analysis tool in the area of data mining [9,10], pattern recognition [11,12,41], image analysis [47,48], feature extraction [13,14], vector quantization [15,16], image segmentation [17,18], function approximation [19,20], dimensionality reduction [49,50] and big data analysis [21,22]. It is a process of revealing inherent structures present in a set of objects based on proximity measure and competitive learning.…”
Section: Introduction
confidence: 99%