2017
DOI: 10.1007/s12530-017-9195-7

A new type of distance metric and its use for clustering

Abstract: In order to address high-dimensional problems, a new 'direction-aware' metric is introduced in this paper. This new distance is a combination of two components: i) the traditional Euclidean distance and ii) an angular/directional divergence derived from the cosine similarity. The newly introduced metric combines the advantages of the Euclidean metric and cosine similarity, and is defined over the Euclidean space domain. Thus, it is able to take advantage of both spaces, while preserving the Euc…
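The abstract describes the new metric only at a high level (and is truncated above). A minimal sketch of the idea, assuming the direction-aware distance is a weighted sum of the plain Euclidean distance and a cosine-derived angular term; the weight names `w_euclidean` and `w_angular` and the exact form of the angular term are assumptions for illustration, not the paper's published definition:

```python
import numpy as np

def direction_aware_distance(x, y, w_euclidean=1.0, w_angular=1.0, eps=1e-12):
    """Sketch of a 'direction-aware' distance: a weighted combination of the
    traditional Euclidean distance and an angular divergence derived from the
    cosine similarity. Weights and the angular term are illustrative assumptions."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    euclidean = np.linalg.norm(x - y)
    cosine_sim = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + eps)
    # sqrt(1 - cos) is one common angular divergence; up to a constant it equals
    # the Euclidean distance between the two vectors after norm-normalisation.
    angular = np.sqrt(max(0.0, 1.0 - cosine_sim))
    return w_euclidean * euclidean + w_angular * angular

# Two points with a similar direction but different magnitudes: the Euclidean
# term dominates, while the angular term stays close to zero.
print(direction_aware_distance([1.0, 2.0, 3.0], [10.0, 20.0, 30.5]))
```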

Citations: Cited by 28 publications (16 citation statements)
References: 25 publications
“…Thus, the next step is to measure the distance between vectors. Then, 2) To obtain the proximity between objects, we combined two measures, namely, Euclidean and cosine, as these combined measures were proven to outperform other measures in clustering problems [28] and empirically proven also to obtain better results in this study. Because the cosine measure represents similarity and the Euclidean measure represents the distance between objects, we turn the Euclidean distance measure into a similarity measure by using the following adequacy:…”
Section: A Partitional Clustering Representation (mentioning)
confidence: 87%
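The transformation referred to as "the following adequacy" is truncated in the quote above, so the exact formula used by the citing authors is not shown here. As a purely hypothetical illustration of the general approach (converting a Euclidean distance into a similarity and mixing it with cosine similarity), one common choice is `1 / (1 + d)`; the mixing weight `alpha` is likewise an assumption:

```python
import numpy as np

def euclidean_similarity(x, y):
    # Hypothetical conversion of a Euclidean distance into a similarity in (0, 1];
    # the citing paper's actual transformation is truncated in the quote above.
    return 1.0 / (1.0 + np.linalg.norm(np.asarray(x, float) - np.asarray(y, float)))

def cosine_similarity(x, y, eps=1e-12):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + eps)

def combined_similarity(x, y, alpha=0.5):
    # Convex combination of the two similarities; alpha is an assumed weight.
    return alpha * euclidean_similarity(x, y) + (1.0 - alpha) * cosine_similarity(x, y)

print(combined_similarity([1.0, 0.0, 1.0], [2.0, 0.0, 2.0]))
```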
“…We have used the Euclidean distance because it is currently the most frequently used metric space for the established clustering algorithms [30].…”
Section: Incremental DBSCAN Clustering (mentioning)
confidence: 99%
“…Based on these prototypes, the corresponding fuzzy rules are generated. Because of the very high dimensionality of the feature vectors, we use cosine dissimilarity as the distance measure, which is given below [30]: It has been demonstrated in [30] that the cosine dissimilarity between the original vectors of the global features is equivalent to the Euclidean distance between the vectors normalised by their norms (x/||x||) as described in equation 3. This is important because it facilitates the computational efficiency by allowing recursive calculation.…”
Section: Algorithm (mentioning)
confidence: 99%
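The equivalence asserted in the last statement can be checked numerically: the squared Euclidean distance between the norm-normalised vectors equals 2(1 - cos(x, y)), so cosine dissimilarity and that Euclidean distance carry the same information. Equation 3 itself is not reproduced in the quote, so the exact scaling used there is not shown; the check below only verifies the underlying identity:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=512)  # synthetic stand-ins for high-dimensional feature vectors
y = rng.normal(size=512)

cos_sim = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Euclidean distance between the vectors normalised by their norms (x/||x||).
d_normalised = np.linalg.norm(x / np.linalg.norm(x) - y / np.linalg.norm(y))

# Identity: ||x/||x|| - y/||y||||^2 == 2 * (1 - cos(x, y))
print(d_normalised**2, 2.0 * (1.0 - cos_sim))  # the two values agree
```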