2006
DOI: 10.1007/s10479-006-7391-0

Optimization of code book in vector quantization

Abstract: In this paper, we present a novel approach to designing a code book for vector quantization using standard deviation. The proposed algorithm optimizes the partitioning space, exploring the search space for a set of equally viable and equivalent partitions. Essentially, the partition space is divided into perceptive clusters so that the code book is optimized. The proposed algorithm is shown to perform better than widely used quantization algorithms in applications.

Cited by 14 publications (6 citation statements)
References 5 publications
“…Since for our data an optimum code book was estimated to have code words, we used a standard heuristic and applied the k-means algorithm with k = 20 over the space of N_P = 31609 contact vectors to identify the elements in A*. Cluster algorithms like k-means approximate a given set of many feature vectors by a much smaller number of representative vectors [28]. Algorithmic convergence was reached rapidly and resulted in a set of twenty code words A* = {a*_1, …, a*_20}, where each a*_i ∈ A* was a single contact vector.…”
Section: Methods
confidence: 99%
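The k-means codebook construction described in the statement above can be sketched minimally. This is a hypothetical illustration of the general technique, not the cited work's exact procedure; the function name `kmeans_codebook` and the toy data are assumptions.

```python
# Minimal sketch of codebook design via k-means (Lloyd's algorithm):
# approximate many feature vectors by a small set of representative
# code words. Hypothetical illustration; not the cited implementation.
import random

def kmeans_codebook(vectors, k, iters=20, seed=0):
    """Return k code words approximating the input vectors."""
    rng = random.Random(seed)
    codebook = rng.sample(vectors, k)  # initial code words drawn from the data
    for _ in range(iters):
        # Assignment step: map each vector to its nearest code word.
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))
            clusters[j].append(v)
        # Update step: move each code word to its cluster centroid.
        for i, cl in enumerate(clusters):
            if cl:
                codebook[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return codebook

# Toy usage: 200 two-dimensional vectors reduced to 4 code words.
rng = random.Random(42)
data = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(200)]
cb = kmeans_codebook(data, k=4)
```

In the cited statement the same idea is applied with k = 20 over the full set of contact vectors, so each code word ends up representing one cluster of similar vectors.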
“…This section will elaborate on the possibility of taking advantage of the extra information by using a more flexible approach. Artificial neural networks (ANN) have been widely used as universal function approximators and, as another data mining technique, have found plenty of room in operations research (Thangavel and Kumar 2006; Teodorovic et al. 2006; Schwardt and Fischer 2009; Jiao et al. 2009; Menache et al. 2005; Hao et al. 2004; Kainen et al. 2001; Józefowska et al. 2001; Ivanova and Tagarev 2000; Perantonis et al. 2000; Charalambous et al. 2000). Their most common architecture for that purpose consists of an input layer that captures the values of the different predictors, a number of hidden layers that interconnect the entire set of values of the predictors, and an output layer that condenses the different signals to produce the estimates.…”
Section: Artificial Neural Network Model
confidence: 99%
“…Ref. [7] resolves the uncertainty based on the use of the standard deviation of each partition cell. As splitting methods, they all suffer from a serious complexity barrier that greatly limits their practical use.…”
Section: Previous Work
confidence: 99%
“…Ref. [7] proposed a modified splitting method. They use the standard deviation of each Voronoi cell as the splitting condition and then design a VQ codebook with the partitioning space of equally viable and equivalent partitions.…”
Section: Introduction
confidence: 99%
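The splitting idea described in the statement above, growing a codebook by splitting the Voronoi cell with the largest standard deviation, can be sketched as follows. This is a hedged one-dimensional illustration of the general splitting technique; the function `split_by_std`, the perturbation `eps`, and the sample data are assumptions, not the authors' exact procedure.

```python
# Hedged sketch of a splitting-style codebook design: the Voronoi cell
# whose members show the largest standard deviation is split first.
# Hypothetical 1-D illustration; not the exact method of Ref. [7].
import statistics

def split_by_std(vectors, target_size, eps=0.01):
    """Grow a 1-D codebook by repeatedly splitting the highest-spread cell."""
    codebook = [statistics.mean(vectors)]  # start with a single code word
    while len(codebook) < target_size:
        # Partition the data into Voronoi cells of the current codebook.
        cells = {i: [] for i in range(len(codebook))}
        for v in vectors:
            j = min(range(len(codebook)), key=lambda i: abs(v - codebook[i]))
            cells[j].append(v)
        # Split the cell whose members have the largest standard deviation:
        # replace its code word by two perturbed copies.
        j = max(cells,
                key=lambda i: statistics.pstdev(cells[i]) if len(cells[i]) > 1 else 0.0)
        c = codebook.pop(j)
        codebook += [c - eps, c + eps]
    return sorted(codebook)

# Toy usage: six scalar samples in three natural groups, three code words.
samples = [0.1, 0.12, 0.9, 0.95, 0.5, 0.52]
cb = split_by_std(samples, target_size=3)
```

A practical variant would typically follow each split with a few refinement iterations (e.g. k-means updates) before splitting again; the sketch omits that step for brevity.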