2002
DOI: 10.1016/s0925-2312(02)00427-7
A parallel general implementation of Kohonen's self-organizing map algorithm: performance and scalability

Cited by 9 publications (2 citation statements)
References 8 publications
“…Ozdzynski et al implemented a network-partitioned algorithm [28] on the CPU and tested three different update kernels for the maps, using one to eight processor cores on maps of varying size: small (2 × 4), medium (20 × 30), and large (30 × 40). They found that the time taken to train the smaller maps increased as the number of parallel computing threads grew; the larger the map, the shorter the parallel training time [29].…”
Section: SOM Parallelization
confidence: 99%
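To make the network-partitioning scheme described above concrete, the sketch below splits the map's neurons across a pool of worker threads: each worker finds the best match within its own slice, the per-slice results are reduced to a global best-matching unit, and each worker then updates only the neurons it owns. The function names, map layout, and decay schedules are illustrative assumptions, not taken from the cited implementation; the per-input coordination the scheme requires also hints at why very small maps can train more slowly as the thread count grows.

```python
# Minimal sketch of network (map) partitioning for online SOM training,
# assuming a rectangular map stored as a (rows*cols, dim) weight matrix.
# Names and schedules are illustrative, not from the cited papers.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def train_network_partitioned(data, rows, cols, dim, n_workers=4,
                              epochs=10, lr0=0.5, sigma0=None, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.random((rows * cols, dim))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    sigma0 = sigma0 or max(rows, cols) / 2.0
    # Each worker owns a contiguous slice of the map's neurons.
    slices = np.array_split(np.arange(rows * cols), n_workers)
    t_max = epochs * len(data)

    def local_best(idx, x):
        # Distance from x to this worker's neurons; return (distance, neuron id).
        d = np.linalg.norm(weights[idx] - x, axis=1)
        k = np.argmin(d)
        return d[k], idx[k]

    def local_update(idx, x, bmu, lr, sigma):
        # Update only the neurons owned by this worker.
        h = np.exp(-np.sum((grid[idx] - grid[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
        weights[idx] += lr * h[:, None] * (x - weights[idx])

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        t = 0
        for _ in range(epochs):
            for x in data:
                frac = t / t_max
                lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
                # Reduce the per-slice minima to the global best-matching unit.
                bmu = min(pool.map(lambda s: local_best(s, x), slices))[1]
                list(pool.map(lambda s: local_update(s, x, bmu, lr, sigma), slices))
                t += 1
    return weights
```

Note that the two synchronization points per input vector (the BMU reduction and the slice updates) are fixed costs; when each slice holds only a handful of neurons, as with a 2 × 4 map, that overhead can outweigh the work being parallelized.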
“…(a) the network partitioning schemes are more suitable for a multiprocessor environment, where the communication overhead may be greatly reduced thanks to the high speed of the common bus shared by the processors, as shown in [16,30,31]; (b) the data partitioning schemes become appropriate when the parallel SOM is executed in loosely coupled systems, as shown in [32] and [33], although the proposed implementations exploit only 60%-80% of the overall speed-up because the communication overhead grows as the number of CEs increases [34].…”
Section: Parallel SOMs
confidence: 99%
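The contrast with data partitioning can be sketched with the batch SOM rule: each worker scans only its chunk of the dataset against the full map, accumulates neighbourhood-weighted partial sums, and the sums are combined once per epoch. The names and schedules below are again illustrative assumptions; the cited implementations exchange these partial results over message passing in loosely coupled systems, whereas this sketch only mimics that exchange with local processes.

```python
# Minimal sketch of data partitioning with the batch SOM update rule,
# using the same (rows*cols, dim) weight layout as above. Names are
# illustrative; real loosely coupled implementations would exchange the
# partial sums via message passing rather than a local process pool.
import numpy as np
from multiprocessing import Pool

def _partial_sums(args):
    chunk, weights, grid, sigma = args
    num = np.zeros_like(weights)
    den = np.zeros(len(weights))
    for x in chunk:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        h = np.exp(-np.sum((grid - grid[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
        num += h[:, None] * x          # neighbourhood-weighted input sum
        den += h                       # neighbourhood weight sum
    return num, den

def train_data_partitioned(data, rows, cols, dim, n_workers=4,
                           epochs=10, sigma0=None, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.random((rows * cols, dim))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    sigma0 = sigma0 or max(rows, cols) / 2.0
    # Each worker receives a fixed chunk of the input data.
    chunks = np.array_split(np.asarray(data), n_workers)

    with Pool(n_workers) as pool:
        for e in range(epochs):
            sigma = sigma0 * (1 - e / epochs) + 1e-3
            # Workers scan their chunks against the full map in parallel ...
            parts = pool.map(_partial_sums,
                             [(c, weights, grid, sigma) for c in chunks])
            # ... and the partial sums are combined into one batch update.
            num = sum(p[0] for p in parts)
            den = sum(p[1] for p in parts)
            weights = num / np.maximum(den[:, None], 1e-12)
    return weights
```

Because communication happens only once per epoch rather than once per input, this layout tolerates slower interconnects, which is consistent with the partial speed-ups reported for loosely coupled systems as the number of CEs grows.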