2004
DOI: 10.1016/j.infsof.2003.07.003
FINDIT: a fast and intelligent subspace clustering algorithm using dimension voting

Cited by 103 publications (54 citation statements); references 11 publications.
“…Despite the demonstrated deficiency of conventional L_p norms for high-dimensional data, a plethora of work based on the Euclidean distance has been dedicated to clustering strategies, which appear to be effective in practice to varying degrees for high-dimensional data [5]. Many heuristics have recently been proposed or evaluated for clustering [6-14], outlier detection [15-18], and indexing or similarity search [6,19-23] that seek to mitigate the effects of the curse of dimensionality. While some of these strategies, such as projected or subspace clustering, do recognize implicitly the effect of relevant versus irrelevant attributes for a cluster, all these papers (as well as others) abstain from discussing these effects, let alone studying them in detail.…”
Section: Introduction (mentioning)
confidence: 99%
“…In top-down approaches, the user defines the number of clusters and the number of relevant subspaces [30,31]. It is not possible to automatically find all possible clusters in all the subspaces using this method.…”
Section: Motivating Examples (mentioning)
confidence: 99%
“…Top-down approaches like PROCLUS [30] and FINDIT [31] use projected clustering for high-dimensional data. An initial approximation of the number of clusters and the relevant subspaces is given by the user.…”
Section: Background and Literature Review (mentioning)
confidence: 99%
“…However, the subspace in which each cluster is embedded is not explicitly known from the algorithm. The FINDIT (fast and intelligent subspace clustering algorithm using dimension voting) [31] algorithm uses a dimension-voting technique to find subspace clusters. A dimension-oriented distance is defined to measure the distance between points based not only on the value information but also on the dimension information.…”
Section: Related Work (mentioning)
confidence: 99%
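The dimension-voting idea quoted above can be sketched roughly as follows. This is a minimal illustration, not FINDIT's exact definition: the epsilon threshold, the vote counting over a medoid's neighbors, and the `min_votes` cutoff are assumptions chosen to show the mechanism, in which dimensions where two points lie close together "vote" for membership in a cluster's subspace.

```python
from collections import Counter

def dimension_oriented_distance(p, q, epsilon):
    # Dimensions in which the two points are within epsilon of each other.
    close = [i for i in range(len(p)) if abs(p[i] - q[i]) <= epsilon]
    # The fewer close dimensions, the larger the distance: distance is the
    # count of dimensions that differ by more than epsilon.
    return len(p) - len(close), close

def vote_dimensions(medoid, neighbors, epsilon, min_votes):
    # Each neighbor votes for the dimensions in which it is close to the
    # medoid; dimensions gathering at least min_votes votes are taken as
    # the cluster's relevant subspace (hypothetical cutoff for illustration).
    votes = Counter()
    for nb in neighbors:
        _, close = dimension_oriented_distance(medoid, nb, epsilon)
        votes.update(close)
    return sorted(d for d, v in votes.items() if v >= min_votes)
```

For example, with medoid `[0, 0, 0]` and neighbors `[0.1, 5, 0.1]`, `[0.2, 6, 0.0]`, and `[0.0, 0.1, 9]` at `epsilon=0.5` and `min_votes=2`, dimensions 0 and 2 accumulate enough votes and are selected, while dimension 1 (noisy in most neighbors) is discarded.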