2019
DOI: 10.1007/978-3-030-12981-1_13
GPU-accelerated Large-Scale Non-negative Matrix Factorization Using Spark

Cited by 3 publications (2 citation statements)
References 10 publications
“…Sun et al. realized large-scale NMF based on MapReduce in [32], and Liu et al. also proposed a distributed NMF based on MapReduce for processing large-scale web data using the Hadoop streaming method [19]. In our previous work [33], we proposed a parallel NMF algorithm on the Spark platform, which makes full use of the advantages of the in-memory computation model.…”
Section: Scientific Programming (mentioning)
confidence: 99%
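The distributed NMF methods discussed in that citation statement parallelize variants of the classic multiplicative-update iteration of Lee and Seung. As a point of reference only, here is a minimal single-node NumPy sketch of those updates; it is not the Spark/GPU implementation described in the cited paper, and the function name and parameters are illustrative.

```python
import numpy as np

def nmf_multiplicative(V, rank, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~= W @ H with V, W, H >= 0.

    V: (m, n) non-negative matrix; returns W (m, rank) and H (rank, n).
    eps avoids division by zero in the update ratios.
    """
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H), elementwise
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # W <- W * (V H^T) / (W H H^T), elementwise
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

if __name__ == "__main__":
    V = np.abs(np.random.default_rng(1).random((100, 80)))
    W, H = nmf_multiplicative(V, rank=10)
    # relative reconstruction error of the low-rank approximation
    print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```

The distributed variants cited above keep the same update structure but partition the rows of V (and the corresponding factor blocks) across MapReduce or Spark workers, with Spark's in-memory RDDs avoiding the repeated disk I/O of Hadoop-based implementations.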
“…Recently, there has been growing interest in scaling tensor operations to bigger data and more processors in both the data mining/machine learning and the high performance computing communities. For sparse tensors, there have been parallelization efforts to compute CP decompositions on shared-memory platforms [34,51], distributed-memory platforms [24,26,38,50] and GPUs [40,41,52], and these approaches can be generalized to constrained problems [49].…”
Section: Related Work (mentioning)
confidence: 99%
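For reference, the CP (CANDECOMP/PARAFAC) decomposition mentioned in that citing work expresses a tensor as a sum of rank-one terms. In standard notation (not tied to any of the cited implementations), a rank-R CP decomposition of a third-order tensor is:

```latex
% Rank-R CP decomposition of a third-order tensor X in R^{I x J x K}
\mathcal{X} \;\approx\; \sum_{r=1}^{R} a_r \circ b_r \circ c_r,
\qquad
x_{ijk} \;\approx\; \sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr},
```

where the columns a_r, b_r, c_r form the factor matrices A, B, C. Requiring the factors to be non-negative is one example of the constrained problems referred to above, which is how the CP literature connects back to NMF.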