2015
DOI: 10.1145/2668133

A Fast Parallel Stochastic Gradient Method for Matrix Factorization in Shared Memory Systems

Abstract: Matrix factorization is known to be an effective method for recommender systems that are given only the ratings from users to items. Currently, the stochastic gradient (SG) method is one of the most popular algorithms for matrix factorization. However, as a sequential approach, SG is difficult to parallelize for handling web-scale problems. In this article, we develop a fast parallel SG method, FPSG, for shared memory systems. By dramatically reducing the cache-miss rate and carefully addressing the load balance…
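For readers unfamiliar with the method the abstract builds on, the sketch below shows the basic SG update for matrix factorization in Python. It is a minimal single-threaded illustration with assumed default hyperparameters; it is not FPSG itself, whose contribution is the cache-friendly, load-balanced parallel scheduling of these updates.

import random
import numpy as np

def sgd_mf(ratings, num_users, num_items, k=8, lr=0.01, reg=0.02, epochs=20, seed=0):
    # Plain (single-threaded) stochastic gradient descent for matrix factorization.
    # ratings is a list of (user, item, rating) triples for the observed entries;
    # the result satisfies R ~= P @ Q.T on those entries.
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((num_users, k))
    Q = 0.1 * rng.standard_normal((num_items, k))
    order = list(ratings)
    random.seed(seed)
    for _ in range(epochs):
        random.shuffle(order)              # visit observed entries in random order
        for u, i, r in order:
            pu, qi = P[u].copy(), Q[i].copy()
            err = r - pu @ qi              # prediction error on this single entry
            # one SG step with L2 regularization on both factor vectors
            P[u] += lr * (err * qi - reg * pu)
            Q[i] += lr * (err * pu - reg * qi)
    return P, Q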

Cited by 84 publications (62 citation statements). References 13 publications.

“…We consider observed entries with ratings ≥ 4 as positive. For some problems, training and test sets are available, but the test sets are too small. Therefore, other than delicious, we merge training and test sets of every problem first, and then do a 9-to-1 split to obtain training/test sets for our experiments.…”
Section: Results
confidence: 99%
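A minimal sketch of the merge-then-split protocol described above, assuming the ratings are held as lists of (user, item, rating) triples; the 9-to-1 ratio comes from the quote, while the function name and seed handling are illustrative assumptions.

import random

def merge_and_split(train_triples, test_triples, seed=0):
    # Merge the given train/test triples, then draw a fresh random 9-to-1 split.
    data = list(train_triples) + list(test_triples)
    random.Random(seed).shuffle(data)
    cut = int(0.9 * len(data))
    return data[:cut], data[cut:]          # (new training set, new test set)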
“…However, such settings do not guarantee the complexity reduction like what we achieved for ALS and CD. Further, existing methods to parallelize SG for MF such as [4,8] may become not applicable because A is split into blocks. In contrast, ALS and CD under our new settings for one-class MF can be easily parallelized.…”
Section: Stochastic Gradient (SG)
confidence: 99%
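The remark about splitting A into blocks refers to block-based parallel SG schemes such as the one in this paper: the rating matrix is partitioned into a grid of blocks, and blocks that share no users or items are updated concurrently. Below is a rough Python sketch of that idea with a static grid and round-robin block assignment; FPSG's actual scheduler is dynamic and runs on native threads, and the grid size g, thread pool, and helper names here are assumptions for illustration. P and Q are numpy factor matrices as in the earlier sketch.

from concurrent.futures import ThreadPoolExecutor

def make_blocks(ratings, num_users, num_items, g):
    # Partition observed (user, item, rating) triples into a g x g grid of blocks.
    blocks = [[[] for _ in range(g)] for _ in range(g)]
    for u, i, r in ratings:
        blocks[u * g // num_users][i * g // num_items].append((u, i, r))
    return blocks

def parallel_sg_epoch(blocks, P, Q, g, lr=0.01, reg=0.02):
    # One epoch of block-wise SG. In round d, block (b, (b + d) % g) goes to worker b;
    # these g blocks share no rows of P or Q, so their updates are independent.
    # (Python's GIL makes this a structural sketch only; FPSG uses native threads.)
    def run_block(entries):
        for u, i, r in entries:
            pu, qi = P[u].copy(), Q[i].copy()
            err = r - pu @ qi
            P[u] += lr * (err * qi - reg * pu)
            Q[i] += lr * (err * pu - reg * qi)
    with ThreadPoolExecutor(max_workers=g) as pool:
        for d in range(g):
            jobs = [pool.submit(run_block, blocks[b][(b + d) % g]) for b in range(g)]
            for job in jobs:
                job.result()               # wait for the round to finish before the next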
“…Various ranges of the entries in R and bi are specified in our experiments (see Table 1). The number of steps T is set to 10 and the rating logs are then generated according to equations (7) and (11).…”
Section: Experiments on the Synthetic Dataset
confidence: 99%
“…In Table 1, we report the RMSE for both the original MF (implemented by the LIBMF library [7,8]) and our method for various parameter settings. The parameters that we choose for MF are the learning rate α= 0.01, the regulator parameter λ = 0.02, and the number of factors D = 30.…”
Section: Experiments on the Synthetic Dataset
confidence: 99%
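The quoted settings (learning rate α = 0.01, regularization λ = 0.02, D = 30 factors) can be reproduced with any plain SG trainer such as the sketch given after the abstract; the RMSE being reported is then computed over held-out entries roughly as below. This is a hedged sketch, not the LIBMF implementation.

import numpy as np

def rmse(test_triples, P, Q):
    # Root-mean-square error of the factorization P @ Q.T on held-out (u, i, r) triples.
    errors = np.array([r - P[u] @ Q[i] for u, i, r in test_triples])
    return float(np.sqrt(np.mean(errors ** 2)))

# Illustrative use with the quoted settings (function and variable names are assumptions):
# P, Q = sgd_mf(train_triples, num_users, num_items, k=30, lr=0.01, reg=0.02)
# print(rmse(test_triples, P, Q))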
“…Matrix Factorization and its application to personalized recommendation demonstrated the effectiveness of directly modelling all the dimensions simultaneously in a unified framework. These among other works presupposes that, tensor decomposition models performed well in terms of prediction efficiency and effectiveness compared to the various matrix factorization algorithms, in particular application to massive data processing [24]- [26]. However, the numerous literature concerning the subject.…”
Section: Introduction
confidence: 99%