2002
DOI: 10.1109/72.991422

Self-splitting competitive learning: a new on-line clustering paradigm

Abstract: Clustering in the neural-network literature is generally based on the competitive learning paradigm. This paper addresses two major issues associated with conventional competitive learning, namely, sensitivity to initialization and difficulty in determining the number of prototypes. In general, selecting the appropriate number of prototypes is a difficult task, as we do not usually know the number of clusters in the input data a priori. It is therefore desirable to develop an algorithm that has no dep…

Cited by 84 publications (3 citation statements)
References 42 publications (56 reference statements)
“…Many clustering models have been proposed in the literature, such as partitioning [4], hierarchical [5] and spectral-based models [6], which have applications in many fields including image segmentation [7], [8], social network analysis and community discovery [9], [10], recommender systems [11]-[13] and so on [14], [15]. Clustering in the artificial neural networks (ANNs) literature is usually based on a competitive learning (CL) paradigm [16]-[18], where codebook weight vectors (prototypes) compete in order to elect the best matching unit (BMU), i.e., the neuron unit whose weight vector has the minimum distance to an input vector. Afterward, the selected prototype is updated to get closer to the input vector.…”
Section: Introduction (mentioning)
confidence: 99%
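The statement above summarizes the standard competitive learning loop: prototypes compete for each input, the best matching unit (the prototype with minimum distance to the input) is elected, and only that winner is pulled toward the input. Below is a minimal Python sketch of a single winner-take-all step; the function name, learning rate, and toy data are illustrative assumptions, not details taken from the cited papers.

```python
import numpy as np

def competitive_learning_step(prototypes, x, lr=0.05):
    """One winner-take-all update (sketch of the CL scheme described above).
    The function name and learning rate are illustrative assumptions."""
    # Elect the best matching unit (BMU): the prototype closest to input x.
    distances = np.linalg.norm(prototypes - x, axis=1)
    bmu = int(np.argmin(distances))
    # Move only the winning prototype toward the input vector.
    prototypes[bmu] += lr * (x - prototypes[bmu])
    return prototypes, bmu

# Toy usage: three 2-D prototypes and a single input sample.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(3, 2))
x = np.array([1.0, 0.5])
prototypes, winner = competitive_learning_step(prototypes, x)
```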
“…To address the limitations related to (1) sensitivity to cluster center initialization and selection of the number of clusters k (i.e., requiring multiple restarts) and (2) optimization of non-convex clustering quality measures, we represent the problem of clustering multiple transaction databases as a quasi-convex optimization problem solvable without specifying the number of clusters beforehand. In contrast to the competitive learning paradigm [16]-[18], we have adopted a gradient-based learning approach [27] with back-propagation [28] to minimize a clustering quasi-convex loss function L(θ), which guarantees convergence to the global minimum. We also discover the number of clusters (denoted by f_θ(D)) in the input space by incorporating f_θ(D) into our objective.…”
Section: Introduction (mentioning)
confidence: 99%
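This passage contrasts winner-take-all prototype updates with gradient-based minimization of a clustering loss L(θ) via back-propagation, where the number of clusters f_θ(D) is folded into the objective. The excerpt does not spell out L(θ) or f_θ(D), so the sketch below substitutes a generic k-means-style loss with a hand-derived gradient and plain gradient descent, purely to illustrate the gradient-based alternative; it is not the cited authors' actual objective or architecture.

```python
import numpy as np

def clustering_loss_and_grad(theta, X):
    """Illustrative loss L(theta) = mean_i min_k ||x_i - theta_k||^2 and its
    gradient w.r.t. the prototype matrix theta (K x d). This stand-in objective
    is NOT the quasi-convex loss of the cited work; it only shows the mechanics
    of gradient-based clustering."""
    diffs = X[:, None, :] - theta[None, :, :]      # (N, K, d) pairwise differences
    sq_dists = np.sum(diffs ** 2, axis=2)          # (N, K) squared distances
    assign = np.argmin(sq_dists, axis=1)           # nearest prototype per sample
    loss = sq_dists[np.arange(len(X)), assign].mean()
    grad = np.zeros_like(theta)
    for k in range(theta.shape[0]):
        members = X[assign == k]
        if len(members):
            # Each sample pulls only its nearest prototype: derivative of the mean.
            grad[k] = -2.0 * (members - theta[k]).sum(axis=0) / len(X)
    return loss, grad

# Plain gradient descent on the prototypes (step size chosen arbitrarily).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
theta = rng.normal(size=(2, 2))
for _ in range(200):
    loss, grad = clustering_loss_and_grad(theta, X)
    theta -= 0.5 * grad
```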
“…To tackle the latter problems, it is suggested to keep the transactional data stored locally and only forward the local patterns mined at each branch database to a central site, where they will be clustered into disjoint cohesive pattern-base groups for knowledge discovery. In fact, analyzing the local patterns present in each individual cluster of the multiple databases (MDB) enhances the quality of aggregating novel relevant patterns, and also facilitates the parallel maintenance of the obtained database clusters. Various clustering algorithms and models have been introduced in the literature, namely spectral-based models [2], hierarchical [3], partitioning [4], competitive learning-based models [5,6,7] and artificial neural networks (ANNs) based clustering [8,9,10]. Additionally, clustering could be applied in many domains [11,12], including community discovery in social networks [13,14], image segmentation [15,16] and recommendation systems [17,18,19].…”
Section: Introduction (mentioning)
confidence: 99%