2012 11th International Conference on Information Science, Signal Processing and Their Applications (ISSPA)
DOI: 10.1109/isspa.2012.6310472

CMUNE: A clustering using mutual nearest neighbors algorithm

Cited by 14 publications (14 citation statements)
References 4 publications
“…Determining the value of k is a key issue in this algorithm, and a wrong estimate of k leads to errors in defining the neighborhood. The kNN algorithm is a method for locating the mutual decision region among the points, and the relation between them is defined based on a similarity criterion (Abbas & Shoukry, 2012; Brito et al., 1997; Hu & Bhatnagar, 2012; Sardana & Bhatnagar, 2014).…”
Section: Review of Related Literature
confidence: 99%
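The excerpt above describes mutual neighborhood under kNN. As a rough illustration only (a sketch, not the procedure of the cited papers), the snippet below tests whether two points are mutual k-nearest neighbours; Euclidean distance and the value of k are assumptions made for the example.

# Hypothetical sketch: points i and j are mutual k-nearest neighbours
# only if each appears in the other's k-nearest-neighbour list.
import numpy as np

def knn_indices(X, k):
    """For each point, return the indices of its k nearest neighbours
    (excluding the point itself) under Euclidean distance."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a point is never its own neighbour
    return np.argsort(d, axis=1)[:, :k]

def mutual_neighbor_pairs(X, k):
    """Return the index pairs (i, j), i < j, that are mutual k-nearest neighbours."""
    nn = [set(row) for row in knn_indices(X, k)]
    return [(i, int(j)) for i in range(len(X)) for j in sorted(nn[i])
            if i < j and i in nn[j]]

# Toy example: two tight groups of three points each, with k = 3.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
print(mutual_neighbor_pairs(X, k=3))

On this toy data every reported pair stays inside one of the two tight groups, which is the property that makes mutual neighborhoods attractive for clustering.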
“…Therefore, to reduce the effect of non-text candidates, we propose Mutual Nearest Neighbor (MNN) clustering for grouping the text candidates that share common properties [32]. It is noted that character components in a text line usually share uniform color, size, and distance.…”
Section: Mutual Nearest Neighbor Clustering for Seed Cluster Detection
confidence: 99%
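As a hedged illustration of the grouping idea in that excerpt (a sketch under assumed features, not the method of the citing paper or of CMUNE), the snippet below represents each character candidate by a small feature vector, links mutual nearest neighbours, and takes connected components of that graph as seed clusters; the features and the value of k are invented for the example.

import numpy as np

def mutual_nn_pairs(F, k):
    """Pairs (i, j) whose feature vectors are mutual k-nearest neighbours."""
    d = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = [set(row[:k]) for row in np.argsort(d, axis=1)]
    return [(i, int(j)) for i in range(len(F)) for j in nn[i] if i < j and i in nn[j]]

def seed_clusters(F, k=2):
    """Connected components of the mutual-nearest-neighbour graph (union-find)."""
    parent = list(range(len(F)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for i, j in mutual_nn_pairs(F, k):
        parent[find(j)] = find(i)
    groups = {}
    for i in range(len(F)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Hypothetical candidate features: [mean grey level, height in px, x-centre in px].
F = np.array([[200.0, 30.0, 10.0], [205.0, 31.0, 40.0], [198.0, 29.0, 70.0],
              [90.0, 80.0, 300.0]])
print(seed_clusters(F, k=2))  # the three similar candidates group; the outlier stays alone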
“…Typical methods, such as CURE [6], Canopies [5], and CMune [7], divide the input dataset into small subsets to improve performance. But the clustering performance is greatly dependent on the choice of different parameters.…”
Section: A. Clustering for Large Datasets
confidence: 99%
“…To further reduce the time complexity of the hierarchical clustering, many methods partition the large input dataset into small subsets [5], [6], [7]. Parallel and distributed computing frameworks have also been used to resolve such expensive computation of clustering algorithms for large datasets [8], [9].…”
Section: Introduction
confidence: 99%