Proceedings of International Conference on Neural Networks (ICNN'96)
DOI: 10.1109/icnn.1996.549022
On modifications of Kohonen's feature map algorithm for an efficient parallel implementation

Abstract: Two new variants of Kohonen's self-organizing feature maps based on batch processing are presented in this work. The purpose is to make available a finer grain of parallelism to be used in massively parallel systems. Ordering and convergence to asymptotic values for 1-D maps and 1-D continuous input and weight spaces are proved for both variants. Simulations on uniform 2-D data using 1-D and 2-D maps as well as simulations on speech 12-D data using 2-D maps are also presented to back …
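The batch-processing idea behind the paper's variants can be illustrated with a minimal 1-D batch SOM update in Python. This is a hedged sketch, not the paper's algorithm: the function name, the Gaussian neighborhood form, and the schedule below are illustrative assumptions.

```python
import numpy as np

def batch_som_epoch(weights, data, sigma):
    """One batch update of a 1-D SOM over 1-D inputs: all inputs are assigned
    to their best-matching unit first, then every weight is recomputed at once
    as a neighborhood-weighted mean (no sequential per-sample updates)."""
    n_units = len(weights)
    # Best-matching unit for every input (the batch assignment step).
    bmu = np.argmin(np.abs(data[:, None] - weights[None, :]), axis=1)
    # Gaussian neighborhood between each unit and each input's BMU.
    units = np.arange(n_units)
    h = np.exp(-((units[None, :] - bmu[:, None]) ** 2) / (2.0 * sigma ** 2))
    # Batch update: neighborhood-weighted mean of the data per unit.
    return (h * data[:, None]).sum(axis=0) / h.sum(axis=0)

# Illustrative run on uniform 1-D data with a shrinking neighborhood width.
rng = np.random.default_rng(0)
data = rng.uniform(0.0, 1.0, size=1000)
weights = rng.uniform(0.0, 1.0, size=8)
for sigma in (2.0, 1.0, 0.5, 0.1):
    weights = batch_som_epoch(weights, data, sigma)
print(np.sort(weights))
```

Because the whole update is a ratio of sums over the data, every per-input term can be computed independently, which is the finer grain of parallelism the abstract alludes to.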

Cited by 5 publications (4 citation statements)
References 3 publications
“…gain satisfying (7), then M(kT) → M* almost surely in the Kushner and Clark sense as k → +∞, with M* given by (17). Moreover, in the particular case of a uniform input probability distribution, with p_T given by (21).…”
Section: Theorem 4: For the batch algorithm, if a(kT) is a decreas…
confidence: 95%
“…c(kT), …, c(kT + T − 1). Inserting the expectation inside the summation, (19) yields …, and since a(kT)T > 0 and independent of j, we are led back to the same system of equations as that for the original algorithm. Thus we have …, and the batch equilibrium configurations are given by (17). We have therefore proved the following theorem for the batch algorithm. Theorem 3: For the batch algorithm using a Gaussian neighborhood function with a width decreasing to zero with time, the asymptotic weight density is proportional to p^{2/3}(ξ).…”
confidence: 91%
“…No adjustment of α is necessary when the number of rules stabilizes. If x(t) cannot be clustered, a new rule, which is set as w_new = x(t), will be created (Vassilas et al., 1996; Kohonen, 1998). Step 1: Determine the closest distance…”
Section: Self-Learning of Counterpropagation Fuzzy-Neural Network
confidence: 99%
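The quoted rule-creation step can be sketched as follows. This is an assumption-laden illustration of the general pattern (nearest rule within a vigilance radius, else create w_new = x), not the cited authors' code; the names `rules` and `rho` are hypothetical.

```python
import numpy as np

def update_rules(rules, x, rho):
    """Cluster input x under the nearest existing rule center if it lies
    within distance rho; otherwise create a new rule with center w_new = x.
    Returns the (possibly grown) rule list and the index of the matched rule."""
    if rules:
        # Step 1 (as quoted): determine the closest distance to a rule center.
        dists = [np.linalg.norm(x - w) for w in rules]
        j = int(np.argmin(dists))
        if dists[j] <= rho:
            return rules, j          # x is clustered under existing rule j
    rules.append(np.asarray(x, dtype=float).copy())  # new rule: w_new = x(t)
    return rules, len(rules) - 1
```

Fed with a stream of inputs, the number of rules grows only while novel regions of the input space appear; once it stabilizes, no further adjustment is needed, matching the behavior described in the excerpt.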
“…One type of parallel algorithm exploits the characteristics of the architecture. Many parallel SOM algorithms have been proposed for different architectures, such as transputers [1][23], an eight-neighbor processor array [22], the Connection Machine [20], a parallel coprocessor [lq], single-instruction multiple-data machines [11], multiple-instruction multiple-data machines [26], systolic arrays [10][25], and recently PVM [2][7]. Another type of parallel SOM algorithm partitions the data into several chunks such that these chunks can be executed by different processors in parallel.…”
Section: Introduction
confidence: 99%
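The data-partitioning scheme mentioned at the end of that excerpt lends itself to a map-reduce sketch: each worker computes partial neighborhood-weighted sums over its chunk, and the sums are combined for one global batch update. The sketch below (all names are assumptions, and the chunks are processed serially here for clarity; in a real system each call to `partial_sums` would run on a different processor) shows why the decomposition is exact.

```python
import numpy as np

def partial_sums(weights, chunk, sigma):
    """Map step: per-chunk numerator and denominator of the 1-D batch SOM
    update, computed against the current global weights."""
    units = np.arange(len(weights))
    bmu = np.argmin(np.abs(chunk[:, None] - weights[None, :]), axis=1)
    h = np.exp(-((units[None, :] - bmu[:, None]) ** 2) / (2.0 * sigma ** 2))
    return (h * chunk[:, None]).sum(axis=0), h.sum(axis=0)

def parallel_batch_update(weights, chunks, sigma):
    """Reduce step: combine the chunk sums. The result is numerically
    identical to the serial batch update, because that update is a ratio
    of sums over all inputs and sums split freely across chunks."""
    nums, dens = zip(*(partial_sums(weights, c, sigma) for c in chunks))
    return sum(nums) / sum(dens)
```

Since only the small partial-sum vectors travel between workers per epoch, communication cost is independent of the data size, which is what makes this style of partitioning attractive on message-passing platforms such as the PVM systems cited above.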