2014
DOI: 10.1007/978-3-319-08786-3_41

Fast Incremental Matrix Factorization for Recommendation with Positive-Only Feedback

Abstract: Traditional Collaborative Filtering algorithms for recommendation are designed for stationary data. Likewise, conventional evaluation methodologies are only applicable in offline experiments, where data and models are static. However, in real world systems, user feedback is continuously being generated, at unpredictable rates. One way to deal with this data stream is to perform online model updates as new data points become available. This requires algorithms able to process data at least as fast as it is gene…
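For context, below is a minimal sketch of the kind of online model update the abstract describes: each incoming positive-only (user, item) event is treated as a target of 1 and triggers a single SGD step that touches only the affected factor vectors. The class name, hyperparameters, and the exact update rule are illustrative assumptions, not the authors' reference ISGD implementation.

```python
import numpy as np

class IncrementalMF:
    """Illustrative incremental matrix factorization for positive-only
    feedback: each observed (user, item) pair is treated as a target of 1
    and triggers one SGD step that touches only the two affected vectors."""

    def __init__(self, n_factors=20, lr=0.05, reg=0.01, seed=0):
        self.k, self.lr, self.reg = n_factors, lr, reg
        self.rng = np.random.default_rng(seed)
        self.user_f = {}  # user id -> latent factor vector
        self.item_f = {}  # item id -> latent factor vector

    def _vec(self, table, key):
        # Lazily create a factor vector the first time an id is seen.
        if key not in table:
            table[key] = self.rng.normal(0.0, 0.1, self.k)
        return table[key]

    def update(self, user, item):
        # One online SGD step on the new positive observation (target = 1).
        p, q = self._vec(self.user_f, user), self._vec(self.item_f, item)
        err = 1.0 - p @ q
        p_old = p.copy()
        p += self.lr * (err * q - self.reg * p)
        q += self.lr * (err * p_old - self.reg * q)

    def recommend(self, user, n=10):
        # Rank all known items by predicted score for this user.
        if user not in self.user_f:
            return []
        p = self.user_f[user]
        scores = {i: float(p @ q) for i, q in self.item_f.items()}
        return sorted(scores, key=scores.get, reverse=True)[:n]
```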

Cited by 71 publications (70 citation statements)
References 15 publications
“…The results show that our algorithm outperforms the off-line matrix factorization algorithm proposed by Barragáns et al (2010). Regarding the on-line incremental matrix factorization algorithm of Vinagre et al (2014), our algorithm shows improved results in the positive feedback scenario and in the rating feedback scenario with the ML 100k; whereas in the case of the rating feedback scenario with the ML 1M, our individual model updating displays a 3 % increase in prediction errors and a 5 % decrease in T Recall@10.…”
Section: Discussion
confidence: 86%
“…The matrix factorization and the learning techniques are SVD and SGD, respectively. The prequential protocol verifies, every time a new rating event occurs, whether the rated item would have been recommended to that viewer and, if so, counts it as a hit [11]. In 2015, these authors included a rating-and-recency-based scheme to perform negative preference imputation [12].…”
Section: On-line
confidence: 99%
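A minimal sketch of the prequential (test-then-learn) loop described in the citation above: every incoming event is first used to check whether the rated item would have appeared in that user's top-n recommendations (a hit), and only afterwards to update the model. The function name and hit-rate bookkeeping are illustrative, and it assumes the recommend/update interface of the IncrementalMF sketch shown earlier.

```python
def prequential_eval(model, event_stream, n=10):
    """Illustrative prequential loop: test on each event, then learn from it.
    Returns the hit rate (a stream-based Recall@n) over the whole stream."""
    hits, total = 0, 0
    for user, item in event_stream:
        if item in model.recommend(user, n=n):
            hits += 1                    # the item would have been recommended
        total += 1
        model.update(user, item)         # learn from the event only after testing
    return hits / total if total else 0.0
```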
“…GRAPE parallelizes CF by adopting SGD [23] as PEval, and the incremental algorithm ISGD of [34] as IncEval, using master P0 to synchronize the shared factor vectors u.f and p.f.…”
Section: Collaborative Filtering (CF)
confidence: 99%
“…(2) IncEval is the incremental algorithm ISGD of [34]. Upon receiving message Mi at worker Pi, it computes Fi ⊕ Mi by treating Mi as updates to factor vectors of nodes in Fi.I, and only modifies affected factor vectors as in PEval based solely on new observations.…”
Section: Collaborative Filtering (CF)
confidence: 99%
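A hedged sketch of the incremental step these citations describe: incoming messages are applied as updates to the factor vectors of shared nodes, after which only the vectors touched by new observations are refined with ISGD-style updates. This is not GRAPE's actual PEval/IncEval API; the function name, message layout, and reuse of the IncrementalMF sketch above are assumptions for illustration.

```python
def incremental_step(local_model, messages, new_events):
    """Illustrative incremental step: apply synchronized factor vectors for
    shared nodes, then refine only the vectors affected by new observations."""
    # Apply synchronized copies of shared user/item factor vectors.
    for user_id, vector in messages.get("users", {}).items():
        local_model.user_f[user_id] = vector.copy()
    for item_id, vector in messages.get("items", {}).items():
        local_model.item_f[item_id] = vector.copy()
    # Refine only the factor vectors touched by the new observations.
    for user, item in new_events:
        local_model.update(user, item)
```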