This paper presents a two-stage approach for fast clustering. First, a competitive neural network (CNN) that harmonizes the mean-squared-error and information-entropy criteria is employed to exploit the substructure of the input data by identifying local density centers. A gravitation neural network (GNN) then takes the locations of these centers as initial weight vectors and undergoes an unsupervised update process to group the centers into clusters. Each node in the GNN (called a gravi-node) is associated with a finite attraction radius, and all gravi-nodes are simultaneously attracted toward nearby centroids during the update process, creating gravitation-like behavior without incurring complicated computations. The update process iterates until convergence, and each converged centroid corresponds to a cluster. Compared with other clustering methods, the proposed scheme is free of the initialization problem and does not require the number of clusters to be specified in advance. The two-stage approach is computationally efficient, offers great flexibility in implementation, and lends itself to a fully parallel hardware implementation.
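The gravitation-like update described above can be illustrated with a minimal sketch. This is not the paper's exact update rule; the movement rule (each gravi-node moves toward the mean of the nodes inside its attraction radius), the radius value, and the convergence tolerances below are all illustrative assumptions.

```python
import math

def gravitational_cluster(centers, radius, max_iter=100, tol=1e-6):
    """Toy sketch of the GNN stage: gravi-nodes start at the local
    density centers found by the first (CNN) stage, each node is
    attracted toward the centroid of nodes within its finite
    attraction radius, and nodes that converge to the same point
    form one cluster.  Illustrative only, not the paper's rule."""
    nodes = [list(c) for c in centers]
    for _ in range(max_iter):
        moved = 0.0
        new_nodes = []
        for p in nodes:
            # neighbours within the finite attraction radius (self included)
            nbrs = [q for q in nodes if math.dist(p, q) <= radius]
            # all nodes are updated simultaneously from the same snapshot
            centroid = [sum(coord) / len(nbrs) for coord in zip(*nbrs)]
            moved = max(moved, math.dist(p, centroid))
            new_nodes.append(centroid)
        nodes = new_nodes
        if moved < tol:
            break
    # converged positions that coincide correspond to one cluster each
    clusters = []
    for p in nodes:
        for cl in clusters:
            if math.dist(p, cl[0]) < 1e-3:
                cl.append(p)
                break
        else:
            clusters.append([p])
    return [cl[0] for cl in clusters]
```

Note that the number of clusters emerges from the converged centroids rather than being specified in advance, mirroring the property claimed in the abstract.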
Abstract-This paper optimizes the performance of the GCS model [1] in learning topology and vector quantization. Each node in a GCS is attached to a resource counter. During the competitive-learning process, the counter of the best-matching node is increased by a defined resource measure after each input presentation, and all resource counters are then decayed by a fixed factor. We show that the summation of all resource counters is conserved. This conservation principle provides useful clues for exploring important characteristics of GCS, which in turn offer insight into how GCS can be optimized. In the context of information entropy, we show that the performance of GCS in learning topology and vector quantization can be optimized by using a zero decay factor incorporated with a threshold-free node-removal scheme, regardless of whether the input data are stationary or nonstationary. The meaning of optimization is twofold: 1) for learning topology, the information entropy is maximized in terms of the equiprobable criterion, and 2) for learning vector quantization, the mean squared error (MSE) is minimized in terms of the equi-error criterion.
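The counter bookkeeping can be sketched as follows. This is one plausible reading of the increment-then-decay scheme, not the paper's derivation: the winner sequence, the decay factor `alpha`, and the resource measure `resource` are hypothetical placeholders. Under these assumptions the total of all counters converges geometrically to the fixed value `resource * (1 - alpha) / alpha`, which is one sense in which the summation "conserves".

```python
def run_resource_counters(winners, n_nodes, alpha, resource=1.0):
    """Toy sketch of GCS resource-counter bookkeeping: after each
    input presentation the best-matching node's counter is increased
    by `resource`, then every counter is decayed by the factor `alpha`.
    Returns the final counters and the running total after each step."""
    counters = [0.0] * n_nodes
    totals = []
    for w in winners:
        counters[w] += resource                          # credit the winner
        counters = [c * (1.0 - alpha) for c in counters]  # global decay
        totals.append(sum(counters))
    return counters, totals
```

The running total satisfies T' = (T + resource) * (1 - alpha), whose fixed point is resource * (1 - alpha) / alpha for 0 < alpha < 1, independent of which nodes win.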