2021
DOI: 10.1155/2021/8841133
Collaborative Filtering Recommendation Using Nonnegative Matrix Factorization in GPU-Accelerated Spark Platform

Abstract: Nonnegative matrix factorization (NMF) has been introduced as an efficient way to reduce the complexity of data compression, owing to its ability to extract highly interpretable parts from data sets, and it has been applied in various fields such as recommendation, image analysis, and text clustering. However, as the size of the matrix grows, the processing speed of NMF becomes very slow. To address this problem, this paper proposes a GPU-based parallel algorithm for NMF in …
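The truncated abstract does not show which update rule the paper parallelizes, so the following is only a minimal single-node sketch of NMF with the classic Lee-Seung multiplicative updates in NumPy; the function name, rank k, and iteration count are illustrative assumptions, not the paper's implementation.

import numpy as np

def nmf_multiplicative(V, k, n_iters=200, eps=1e-9, seed=0):
    # Factor a nonnegative matrix V (m x n) into W (m x k) and H (k x n),
    # so that V is approximately W @ H, using Lee-Seung multiplicative updates.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iters):
        # H <- H * (W^T V) / (W^T W H); eps avoids division by zero
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # W <- W * (V H^T) / (W H H^T)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Example: factor a small random nonnegative "rating" matrix
V = np.random.default_rng(1).random((100, 80))
W, H = nmf_multiplicative(V, k=10)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative reconstruction error

Each update multiplies only nonnegative quantities, so W and H stay nonnegative throughout; a GPU implementation would typically accelerate the dense matrix products that dominate these updates.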

Cited by 7 publications (2 citation statements). References 43 publications (50 reference statements).
“…Non-negative matrix factorization (NMF) is a matrix factorization method that decomposes a given positive matrix into two positive matrices [20], [21]. It is able to represent objects as non-negative linear combinations where partial information is extracted from a large number of objects.…”
Section: NMF-Based Text Topic Decomposition (mentioning)
confidence: 99%
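Written out explicitly (the symbols below are the conventional ones and are an assumption, since the quoted text names no notation), the decomposition this statement describes is

V \approx W H, \qquad V \in \mathbb{R}_{\ge 0}^{m \times n},\; W \in \mathbb{R}_{\ge 0}^{m \times k},\; H \in \mathbb{R}_{\ge 0}^{k \times n},\; k \ll \min(m, n),

so each column of V is approximated as a nonnegative linear combination of the k columns (parts) of W:

v_j \approx \sum_{r=1}^{k} H_{rj}\, w_r .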
“…Fig. 9: Results of out-of-memory NMF benchmarks on Chicoma showing (a) NMF peak memory vs. queue sizes for different k, and (b) NMF execution time vs. queue sizes for different k [32, 64, 128, 256, 512, 1024]. The smaller array H is cached in GPU memory, and the large arrays A and W are stored on the host and batched to the GPU as needed.…”
mentioning
confidence: 99%
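The batching scheme quoted above (H resident on the GPU, A and W streamed from the host) fits naturally with the multiplicative update for W, since each row of W depends only on the corresponding row of A and on H. The sketch below uses CuPy as a stand-in GPU array library; the library choice, function name, and batch size are assumptions, not the cited benchmark's code.

import cupy as cp

def batched_w_update(A_host, W_host, H_gpu, batch_rows=4096, eps=1e-9):
    # One multiplicative-update sweep over W. A (m x n) and W (m x k) live in
    # host memory; only a row batch of each is on the GPU at a time, while the
    # small factor H (k x n) stays resident in GPU memory.
    HHt = H_gpu @ H_gpu.T                      # k x k, computed once per sweep
    m = A_host.shape[0]
    for start in range(0, m, batch_rows):
        stop = min(start + batch_rows, m)
        A_b = cp.asarray(A_host[start:stop])   # copy batch of A host -> GPU
        W_b = cp.asarray(W_host[start:stop])   # copy batch of W host -> GPU
        # Row-block Lee-Seung update: W_b <- W_b * (A_b H^T) / (W_b (H H^T))
        W_b *= (A_b @ H_gpu.T) / (W_b @ HHt + eps)
        W_host[start:stop] = cp.asnumpy(W_b)   # write updated rows back to host
    return W_host

With this layout, peak GPU memory scales with the batch size and k rather than with the full m x n problem, which is the kind of memory/time trade-off the quoted Fig. 9 caption appears to be measuring.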