1993
DOI: 10.1109/72.238318

Rival penalized competitive learning for clustering analysis, RBF net, and curve detection

Abstract: It is shown that frequency sensitive competitive learning (FSCL), one version of the recently improved competitive learning (CL) algorithms, significantly deteriorates in performance when the number of units is inappropriately selected. An algorithm called rival penalized competitive learning (RPCL) is proposed. In this algorithm, for each input, not only is the winner unit modified to adapt to the input, but its rival (the 2nd winner) is also delearned by a smaller learning rate. RPCL can be regarded as an unsupervised…
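The update the abstract describes can be captured in a few lines. Below is a minimal sketch of one RPCL step, assuming plain Euclidean distance for winner selection (the paper's frequency-sensitive selection is sketched further down); the learning rates `alpha_w` and `alpha_r` are illustrative values, not the paper's settings.

```python
import numpy as np

def rpcl_step(x, W, alpha_w=0.05, alpha_r=0.002):
    """One RPCL update: move the winner toward x, push the rival away.

    Sketch only; alpha_r << alpha_w is the "smaller learning rate"
    used to delearn the rival (the 2nd winner).
    """
    d = np.linalg.norm(W - x, axis=1)       # distance of each unit to the input
    winner, rival = np.argsort(d)[:2]       # 1st and 2nd closest units
    W[winner] += alpha_w * (x - W[winner])  # winner adapts toward the input
    W[rival]  -= alpha_r * (x - W[rival])   # rival is delearned (pushed away)
    return W
```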

Cited by 552 publications (252 citation statements) | References 22 publications
“…Also, it will include the existing RPCL [9] and its Type A variant [8] as special cases, while providing theoretical guidance for choosing their awkward de-learning rate. We will go into the details elsewhere because of the space limitation.…”
Section: Rival Penalized EM Algorithm
confidence: 99%
“…Since both FSCL and FFSCL use a non-Euclidean distance to determine the winner, they may lead to the problem of shared clusters, in the sense that several prototypes may be updated into the same cluster during the learning process. This problem was considered by Xu et al. in their rival penalized competitive learning (RPCL) algorithm [29]. The basic idea in RPCL is that, for each input pattern, not only is the weight of the frequency-sensitive winner shifted toward the input pattern, but the weight of its rival (the 2nd winner) is also delearned by a smaller learning rate.…”
Section: Introduction
confidence: 99%
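The frequency-sensitive winner selection mentioned in this statement can be sketched as follows, assuming the common conscience weighting gamma_j = n_j / sum_i n_i, where n_j counts how often unit j has won; the weighting form and both learning rates are illustrative assumptions, not necessarily the paper's exact settings.

```python
import numpy as np

def rpcl_update(x, W, wins, alpha_w=0.05, alpha_r=0.002):
    """RPCL step with FSCL-style frequency-sensitive winner selection.

    `wins[j]` counts unit j's past wins (initialize to ones to avoid
    zero frequencies); frequently winning units are penalized in the
    distance, which discourages prototypes from sharing one cluster.
    """
    gamma = wins / wins.sum()                 # relative win frequencies
    d = gamma * np.sum((W - x) ** 2, axis=1)  # frequency-weighted distances
    winner, rival = np.argsort(d)[:2]         # frequency-sensitive winner + rival
    wins[winner] += 1                         # record the winner's victory
    W[winner] += alpha_w * (x - W[winner])    # learn: winner moves toward x
    W[rival]  -= alpha_r * (x - W[rival])     # delearn: rival moves away
    return W, wins
```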
“…Although RPCL is applicable to cases where the number of prototypes is larger than the number of clusters, it is unable to deal with the situation where the number of prototypes is less than the actual number of clusters. To avoid this problem, Xu et al. suggested using a large number of prototypes initially [29]. However, in most cases it is difficult to choose a reasonably large number because of the lack of prior knowledge about the data set.…”
Section: Introduction
confidence: 99%
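As a toy illustration of the "start with many prototypes" suggestion, the run below deliberately over-provisions units on synthetic data and lets the rival penalty drive the surplus units away; the cluster locations, unit count, and rates are all invented for the demo, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three synthetic 2-D Gaussian clusters (illustrative data only).
centers = np.array([[0.0, 0.0], [4.0, 4.0], [0.0, 4.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((200, 2)) for c in centers])

k = 8                                             # more units than clusters
W = rng.uniform(X.min(0), X.max(0), size=(k, 2))  # random initial prototypes
wins = np.ones(k)                                 # ones avoid zero frequencies

for _ in range(5):                                # a few passes over the data
    for x in rng.permutation(X):
        gamma = wins / wins.sum()
        d = gamma * np.sum((W - x) ** 2, axis=1)
        winner, rival = np.argsort(d)[:2]
        wins[winner] += 1
        W[winner] += 0.05 * (x - W[winner])       # winner learns
        W[rival]  -= 0.002 * (x - W[rival])       # rival is delearned

# Units that stayed near the data approximate the cluster centers; the
# surplus units tend to be pushed outward by the rival penalty.
near = [w for w in W if np.min(np.linalg.norm(centers - w, axis=1)) < 1.0]
print(len(near), "of", k, "units settled on clusters")
```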
“…(27) at $D_p(X) = D_q(X)$, i.e., the following Bayesian $\Delta\pi(X, Y)$ updating reverses the direction of the EM learning and actually becomes de-learning. In other words, the BYY harmony learning shares a mechanism similar to RPCL learning [64][65][66].…”
Section: Ying-Yang Best Harmony Principle
confidence: 99%