2014 IEEE International Parallel & Distributed Processing Symposium Workshops
DOI: 10.1109/ipdpsw.2014.192

SOM Clustering Using Spark-MapReduce

Cited by 16 publications (4 citation statements)
References 21 publications
“…In [70], the authors designed clustering algorithms that can be used in MapReduce on the Spark platform. In particular, they focus on the practical and popular serial Self-Organizing Map (SOM) clustering algorithm.…”
Section: B2 Machine Learning Based Methods
Classification: mentioning (confidence: 99%)
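The statement above describes recasting SOM training as MapReduce-style operations on Spark. The following is a minimal PySpark sketch of one batch-SOM iteration; it is not the cited authors' implementation, and the grid size, data, and helper names (bmu, neighborhood) are illustrative assumptions.

```python
# Minimal sketch (not the cited authors' code) of one batch-SOM training
# step expressed as Spark map/reduce operations. Grid size, data, and the
# helpers bmu() and neighborhood() are illustrative assumptions.
import numpy as np
from pyspark import SparkContext

sc = SparkContext(appName="som-mapreduce-sketch")

grid = [(i, j) for i in range(5) for j in range(5)]  # 5x5 SOM lattice
codebook = np.random.rand(len(grid), 3)              # one 3-d weight vector per unit
data = sc.parallelize(np.random.rand(10000, 3).tolist()).cache()

def bmu(x, cb):
    # Best-matching unit: index of the closest codebook vector.
    x = np.asarray(x)
    return int(np.argmin(((cb - x) ** 2).sum(axis=1)))

def neighborhood(u, v, sigma=1.0):
    # Gaussian neighborhood between lattice positions of units u and v.
    d = np.subtract(grid[u], grid[v])
    return float(np.exp(-(d @ d) / (2 * sigma ** 2)))

for _ in range(10):  # a few batch iterations
    cb = sc.broadcast(codebook)
    # Map: assign each point to its BMU. Reduce: per-unit sums and counts.
    sums = (data.map(lambda x: (bmu(x, cb.value), (np.asarray(x), 1)))
                .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
                .collectAsMap())
    # Batch update: each unit moves to the neighborhood-weighted mean.
    new_codebook = np.zeros_like(codebook)
    for v in range(len(grid)):
        num = sum(neighborhood(u, v) * s for u, (s, n) in sums.items())
        den = sum(neighborhood(u, v) * n for u, (s, n) in sums.items())
        new_codebook[v] = num / den if den > 0 else codebook[v]
    codebook = new_codebook
```

The reduceByKey step is what makes this a MapReduce formulation: partial sums are combined per grid unit across partitions before the driver applies the batch-SOM weight update.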
“…The new set is then used as an input to the algorithm for clustering. […] has also been proposed [90]. To tackle high-dimensional data, subspace clustering was proposed by [91].…”
Section: C4 Scalable Methods
Classification: mentioning (confidence: 99%)
“…In [16], [24], attempts at distributing the SOM training algorithm using the MapReduce framework are described. Whereas [24] relies on a pure Spark implementation to scale effectively on massive datasets, [16] also allows map and reduce jobs to be accelerated on GPUs by leveraging the MapReduce-MPI [25] framework.…”
Section: Related Work
Classification: mentioning (confidence: 99%)
“…The RDD abstraction [3] of Spark [4] allows developers to perform large-scale in-memory computations on a cluster and provides fault tolerance. To enable clustering of massive data and to improve processing efficiency, many researchers have studied Spark-based clustering algorithms that improve the time efficiency of the AP algorithm [5][6][7].…”
Section: Introduction
Classification: mentioning (confidence: 99%)
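As background for the quoted claim, the sketch below shows the two RDD properties it refers to: in-memory caching across a cluster and lazy, lineage-based computation that lets lost partitions be recomputed. The dataset and operations are illustrative, not taken from the cited works [3]-[7].

```python
# Minimal sketch of the RDD properties referenced above: partitioned,
# in-memory data with lazy transformations recorded as lineage, so a lost
# partition can be recomputed rather than restored from replicas.
from pyspark import SparkContext

sc = SparkContext(appName="rdd-sketch")

points = sc.parallelize(range(1000000), numSlices=8).cache()  # kept in memory

squares = points.map(lambda x: x * x)        # lazy transformation (lineage only)
total = squares.reduce(lambda a, b: a + b)   # action triggers distributed execution
print(total)
```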