2015
DOI: 10.1016/j.advengsoft.2015.05.003

Extending parallelization of the self-organizing map by combining data and network partitioned methods

Cited by 9 publications (3 citation statements)
References 26 publications
“…In [29], a SOM implementation that combines both data and model parallelism is described. This implementation pushes the parallelization capability of the batch training algorithm to the extreme: not only is the training set split into chunks to be processed independently by copies of the map (data parallelism), but each copy is also partitioned at the unit-weights level, so that a separate GPU thread handles the updates of a single dimension of each neuron.…”
Section: Related Work
confidence: 99%
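A minimal sketch of what such unit-weights-level partitioning can look like in CUDA, assuming a batch-SOM update of the form w_n = numer_n / denom_n with neighborhood-weighted sums accumulated over an epoch. All names here (updateWeights, numer, denom) are illustrative assumptions, not taken from [29]; on the data-parallel side, each map copy would accumulate numer/denom over its own chunk, and the accumulators would be merged before this kernel runs.

#include <cuda_runtime.h>

// Hypothetical batch-SOM update kernel (names are illustrative).
// Thread (n, d) owns exactly one dimension d of one neuron n, i.e. the
// unit-weights-level partitioning described in the excerpt above.
__global__ void updateWeights(float *weights,      // [numNeurons * dim], updated in place
                              const float *numer,  // [numNeurons * dim], sum of h(n, bmu(x)) * x_d
                              const float *denom,  // [numNeurons],       sum of h(n, bmu(x))
                              int numNeurons, int dim)
{
    int n = blockIdx.y;                              // one grid row per neuron
    int d = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per weight dimension
    if (n < numNeurons && d < dim && denom[n] > 0.0f)
        weights[n * dim + d] = numer[n * dim + d] / denom[n];
}

// Launch sketch: dim3 grid((dim + 255) / 256, numNeurons);
//                updateWeights<<<grid, 256>>>(weights, numer, denom, numNeurons, dim);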
“…Another parallel SOM architecture [10,11] was implemented by dividing the computation into two kernels: the first kernel calculates the Euclidean distances and finds the BMU, and the second kernel updates the neighbor weights. Yet another architecture used a single kernel to calculate the Euclidean distances and find the BMU, while the neighbor weight update was performed on the CPU [12].…”
Section: Previous Research
confidence: 99%
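As a rough illustration of the two-kernel split quoted above, the sketch below assumes one CUDA thread per neuron for the distance pass and one thread per weight component for the update pass. Function and variable names are our own assumptions, not the cited implementation; the BMU itself would be found by an argmin over dist[] between the two kernel launches.

#include <cuda_runtime.h>

// Kernel 1 (hypothetical): one thread per neuron computes the squared
// Euclidean distance between its weight vector and the current input.
__global__ void distanceKernel(const float *weights, const float *input,
                               float *dist, int numNeurons, int dim)
{
    int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (n >= numNeurons) return;
    float acc = 0.0f;
    for (int d = 0; d < dim; ++d) {
        float diff = weights[n * dim + d] - input[d];
        acc += diff * diff;
    }
    dist[n] = acc;  // the BMU is the index of the minimum entry of dist[]
}

// Kernel 2 (hypothetical): one thread per (neuron, dimension) applies
// w <- w + alpha * h[n] * (x - w), with the neighborhood factors h[n]
// precomputed from the BMU found after kernel 1.
__global__ void updateKernel(float *weights, const float *input,
                             const float *h, float alpha,
                             int numNeurons, int dim)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numNeurons * dim) return;
    int n = i / dim, d = i % dim;
    weights[i] += alpha * h[n] * (input[d] - weights[i]);
}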
“…Yet another architecture used a single kernel to calculate the Euclidean distances and find the BMU, while the neighbor weight update was performed on the CPU [12]. Experiments to reduce computation time have also been conducted by combining methods commonly used in SOM, such as the network partition and data partition methods run in parallel [11]. Another technique employed to reduce computation time is parallel reduction [13].…”
Section: Previous Research
confidence: 99%
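In SOM training, parallel reduction typically targets the BMU search, i.e. an argmin over the per-neuron distance array. The following single-block CUDA sketch is one common way to do that; it is an assumed illustration, not the implementation from [13], and a production version would reduce across multiple blocks for large maps.

#include <cfloat>
#include <cuda_runtime.h>

// Hypothetical argmin reduction: launched with exactly 256 threads in one
// block; each thread scans a strided slice of dist[], then a shared-memory
// tree reduction halves the number of active threads at each step.
__global__ void argminReduce(const float *dist, int numNeurons, int *bmu)
{
    __shared__ float sVal[256];
    __shared__ int   sIdx[256];
    int tid = threadIdx.x;

    // Sequential phase: each thread finds the minimum over its slice.
    float best = FLT_MAX;
    int bestIdx = 0;
    for (int n = tid; n < numNeurons; n += blockDim.x) {
        if (dist[n] < best) { best = dist[n]; bestIdx = n; }
    }
    sVal[tid] = best;
    sIdx[tid] = bestIdx;
    __syncthreads();

    // Tree phase: pairwise minima in shared memory.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s && sVal[tid + s] < sVal[tid]) {
            sVal[tid] = sVal[tid + s];
            sIdx[tid] = sIdx[tid + s];
        }
        __syncthreads();
    }
    if (tid == 0) *bmu = sIdx[0];  // index of the winning neuron
}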