2009 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009)
DOI: 10.1109/icassp.2009.4959710
Compression of image patches for local feature extraction

Cited by 45 publications (30 citation statements). References 7 publications.
“…CHoG, at a rate of 8 bytes per descriptor, can achieve matching capabilities comparable to the uncompressed features. In [15], the local image patch around the interest point is compressed and sent over the network at a low bit rate, which also shifts the workload of descriptor computation to the server side. Unlike the other approaches, the retrieval system in [16] sends a tree histogram in place of individual descriptors, which enables a significant additional rate reduction.…”
Section: Rate-Efficient Image Retrieval
confidence: 99%
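
The patch compression in [15] (the paper indexed here) is, at heart, transform coding of small canonical patches so that the server can recompute descriptors from the reconstruction. A minimal sketch of that idea in Python, assuming a DCT coder with a low-frequency zonal mask and uniform quantization; the actual codec, patch size, and parameters in the paper may differ:

import numpy as np
from scipy.fft import dctn, idctn

def compress_patch(patch, keep=8, step=8.0):
    # Transform the patch, keep only the lowest-frequency
    # coefficients, and quantize them coarsely. The retained
    # symbols would be entropy-coded before transmission.
    coeffs = dctn(patch.astype(np.float64), norm="ortho")
    return np.round(coeffs[:keep, :keep] / step).astype(np.int16)

def decompress_patch(symbols, size=32, step=8.0):
    # Server side: dequantize, zero-pad the high frequencies,
    # and inverse-transform to get the approximate patch.
    coeffs = np.zeros((size, size))
    k = symbols.shape[0]
    coeffs[:k, :k] = symbols.astype(np.float64) * step
    return idctn(coeffs, norm="ortho")

patch = np.random.default_rng(0).integers(0, 256, (32, 32))
reconstruction = decompress_patch(compress_patch(patch))

Descriptor extraction (e.g., SIFT) then runs on the reconstructed patch at the server, so the handset pays only for the lightweight transform coder.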
“…transferred to the server. This allows a more than fivefold rate reduction compared to compressing features as proposed in [MCCT09], and thus a significant reduction of the overall query time. However, this approach requires performing the quantization of descriptor vectors into visual words on the mobile device at very low complexity, to cope with the limited processing power as well as to avoid draining the battery.…”
Section: Multiple Hypothesis Vocabulary Tree
confidence: 97%
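
The low-complexity on-device quantization this excerpt calls for is typically a greedy descent of the vocabulary tree: at each level only the children of the current node are compared, so the cost is branch × depth distance computations instead of a search over all branch^depth leaves. A minimal sketch, assuming a hierarchical codebook stored as one centroid array per level; this layout and the branch/depth values are illustrative, not taken from the cited system:

import numpy as np

def quantize_descriptor(desc, tree, branch, depth):
    # tree[level] has shape (branch**level, branch, dim):
    # the child centroids of every node at that level.
    node = 0
    for level in range(depth):
        children = tree[level][node]                     # (branch, dim)
        nearest = np.argmin(np.linalg.norm(children - desc, axis=1))
        node = node * branch + nearest                   # descend one level
    return node                                          # leaf id = visual word

rng = np.random.default_rng(0)
branch, depth, dim = 4, 3, 128
tree = [rng.standard_normal((branch**lvl, branch, dim)) for lvl in range(depth)]
word = quantize_descriptor(rng.standard_normal(dim), tree, branch, depth)

Only the visual word ids (and the tree histogram built from them) need to leave the device, which is where the fivefold rate reduction over compressed features comes from.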
“…In Chandrasekhar et al. (2009), the authors studied dimensionality reduction of SIFT and SURF descriptors using the KLT followed by entropy coding. The Discrete Cosine Transform (Chadha et al. 2011; Schwerin and Paliwal 2008; Makar et al. 2009) and the Discrete Wavelet Transform (Grzegorzek et al. 2010; Lim et al. 2009) were also proposed as feature quantization methods, but they did not yield results comparable to other state-of-the-art methods.…”
Section: Related Work
confidence: 97%
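
The KLT-plus-entropy-coding pipeline studied in Chandrasekhar et al. (2009) can be sketched as a PCA projection of the descriptors followed by uniform quantization of the decorrelated coefficients. The entropy coder itself is omitted below, and the reduced dimension and step size are illustrative choices rather than values from the paper:

import numpy as np

def klt_reduce(descriptors, k=32, step=0.05):
    # Fit the KLT (PCA) basis on a training set of descriptors,
    # project onto the top-k components, and quantize the
    # coefficients into integer symbols for an entropy coder.
    mean = descriptors.mean(axis=0)
    centered = descriptors - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                                  # (k, dim), top components
    coeffs = centered @ basis.T                     # decorrelated coefficients
    symbols = np.round(coeffs / step).astype(np.int32)
    return symbols, basis, mean

rng = np.random.default_rng(0)
train = rng.standard_normal((1000, 128))            # stand-in for SIFT descriptors
symbols, basis, mean = klt_reduce(train)

Because the KLT decorrelates the coefficients, a simple per-component entropy coder applied to the integer symbols is a natural fit for the final coding stage.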