2008 46th Annual Allerton Conference on Communication, Control, and Computing
DOI: 10.1109/allerton.2008.4797555

Random projection trees for vector quantization

Abstract: A simple and computationally efficient scheme for tree-structured vector quantization is presented. Unlike previous methods, its quantization error depends only on the intrinsic dimension of the data distribution, rather than the apparent dimension of the space in which the data happen to lie.
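As a rough illustration of the idea in the abstract (a simplified sketch, not the paper's exact split rule, which perturbs the split point rather than always cutting at the median), a random-projection tree quantizer can recursively split a cell along a random direction and emit one codebook vector per leaf:

```python
import numpy as np

def rp_tree_quantize(points, max_leaf_size=8, rng=None):
    """Sketch of a random-projection-tree vector quantizer.

    Recursively splits the data along a random direction at the median
    projection, and returns one codebook vector (the cell mean) per leaf.
    Function name and the plain median split are illustrative choices.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    codebook = []

    def build(cell):
        if len(cell) <= max_leaf_size:
            codebook.append(cell.mean(axis=0))  # quantizer output for this cell
            return
        direction = rng.standard_normal(cell.shape[1])  # random projection axis
        proj = cell @ direction
        split = np.median(proj)
        left, right = cell[proj <= split], cell[proj > split]
        if len(left) == 0 or len(right) == 0:  # degenerate split: stop here
            codebook.append(cell.mean(axis=0))
            return
        build(left)
        build(right)

    build(points)
    return np.array(codebook)
```

Because each split uses only a one-dimensional projection, the tree adapts to the data's intrinsic dimension rather than the ambient dimension, which is the property the abstract highlights.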

Cited by 25 publications (42 citation statements)
References 14 publications
“…It is NP-hard to find optimal clusterings even for two clusters [8], [9]. Therefore, dimensionality reduction methods have been extensively studied in the literature to reduce the number of dimensions.…”
Section: Related Work
confidence: 99%
“…The principal function of the algorithm is to find the k means. First, an initial set of means is defined, and points are classified by their distances to these centres [6]. Next, each cluster's mean is recomputed and points are reclassified against the new set of means.…”
Section: K-means Clustering Technique
confidence: 99%
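The steps described in the excerpt above are Lloyd's iteration; a minimal sketch (function name and initialization-from-data-points are illustrative assumptions) is:

```python
import numpy as np

def kmeans(points, k, iters=20, rng=None):
    """Lloyd-style k-means sketch: pick initial means, assign each point
    to its nearest centre, recompute the means, and repeat."""
    rng = np.random.default_rng(0) if rng is None else rng
    # initialize centres as k distinct data points (one common choice)
    centres = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # classify each point by its distance to the current centres
        dists = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each cluster's mean; keep the old centre if a cluster empties
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):  # converged: assignments stopped changing
            break
        centres = new
    return centres, labels
```

Each iteration only decreases the quantization error, but as the surrounding excerpts note, this heuristic converges to a local minimum, not necessarily the NP-hard global optimum.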
“…It could be concluded that this was a group which liked to use credit cards, spent more freely, believed in women's empowerment, believed in economics rather than politics, and felt quality products were worth purchasing. They also seemed to have a taste for a modern lifestyle and were fashion oriented. The analysis of Cluster 4 showed that variables 2, 4, 5, 7, and 10 belonged to this cluster, had statistical characteristics opposite to variables 1, 3, 6, 8, 11, 12, and 13, and were neutral relative to variables 14 and 15. It was concluded that this group was optimistic, free-spending, and a good target for TV advertising, particularly for consumer durables and entertainment.…”
confidence: 99%
“…Even when the number of clusters is small, finding an optimal k-means solution is NP-hard [2,3]. For this reason, k-means algorithms adopt heuristics and accept a local minimum as an approximate solution.…”
Section: Introduction
confidence: 99%