Proceedings of the 7th ACM Conference on Recommender Systems 2013
DOI: 10.1145/2507157.2507189

Efficient top-n recommendation for very large scale binary rated datasets

Abstract: We present a simple and scalable algorithm for top-N recommendation able to deal with very large datasets and (binary rated) implicit feedback. We focus on memory-based collaborative filtering algorithms similar to the well-known neighbor-based technique for explicit feedback. The major difference, which makes the algorithm particularly scalable, is that it uses positive feedback only, and no explicit computation of the complete (user-by-user or item-by-item) similarity matrix needs …
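The abstract's key idea can be illustrated with a minimal sketch (not the authors' implementation; the function name, data layout, and scoring rule here are assumptions): a user-to-user inverted index lets similarities be computed only for users who share at least one positively rated item, so the complete user-by-user similarity matrix is never materialized.

```python
# Minimal sketch of memory-based top-N recommendation from binary
# positive feedback. Similarities are computed on the fly through an
# item -> users inverted index; the full user-by-user similarity
# matrix is never built.
from collections import defaultdict
from math import sqrt

def top_n(user_items, target, n=3, k=10):
    """user_items: dict mapping user -> set of positively rated items."""
    # Inverted index: item -> users who liked it.
    item_users = defaultdict(set)
    for u, items in user_items.items():
        for i in items:
            item_users[i].add(u)

    # Co-rating counts with the target: only users sharing an item appear.
    overlap = defaultdict(int)
    for i in user_items[target]:
        for v in item_users[i]:
            if v != target:
                overlap[v] += 1

    # Cosine similarity on binary vectors: len(A & B) / sqrt(len(A) * len(B)).
    sims = {v: c / sqrt(len(user_items[target]) * len(user_items[v]))
            for v, c in overlap.items()}
    neighbors = sorted(sims, key=sims.get, reverse=True)[:k]

    # Score items the target has not seen by summed neighbor similarity.
    scores = defaultdict(float)
    for v in neighbors:
        for i in user_items[v] - user_items[target]:
            scores[i] += sims[v]
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

Because only users with a non-empty item overlap are ever touched, the cost per query is proportional to the sizes of the relevant inverted-index lists rather than to the total number of users.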

Cited by 89 publications (70 citation statements)
References 11 publications
“…It can be seen that UserKNN and ARM outperform MF-based methods on all metrics, which is especially prominent in case of D2, where BPRMF has performance comparable to that of the most popular books recommender. This finding conforms to those reported in [4], [3].…”
Section: Results (supporting)
confidence: 94%
“…For a given customer, recommendations can be computed either from the neighborhood of similar customers (user-based KNN), or from the neighborhood of products similar to those he or she has bought (item-based KNN). Although the latter approach has been favored in RS literature, we found user-based KNN to be more effective on both our datasets, in line with the results of [4], [16].…”
Section: Memory-based CF (K-Nearest Neighbors) (supporting)
confidence: 89%
“…It has been reported that the average order size for Amazon.com is 1.5 and 2.3 for Walmart.com. 1 Using the transaction data of Walmart.com, we found over 1/3 of orders contain two or more distinct items as shown in Figure 1.…”
Section: Introduction (mentioning)
confidence: 98%
“…All Pairs Similarity Search (APSS) [9], which identifies similar objects in a dataset, is used in many applications including collaborative filtering based on user interests or item similarity [1], search query suggestions [22], web mirrors and plagiarism recognition [23], coalition detection for advertisement frauds [20], spam detection [11,17], clustering [7], and near duplicates detection [16]. The complexity of naïve APSS can be quadratic to the dataset size.…”
Section: Introduction (mentioning)
confidence: 99%
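The APSS idea quoted above can be sketched briefly (a hypothetical example, not code from any of the cited papers): an inverted index restricts similarity computation to pairs of objects that share at least one feature, instead of comparing all O(n²) pairs as the naïve approach would.

```python
# Sketch of all-pairs similarity search (APSS) over sparse binary
# feature sets. Candidate pairs come from an inverted index, so objects
# with no feature in common are never compared.
from collections import defaultdict
from itertools import combinations
from math import sqrt

def all_pairs(objects, threshold=0.5):
    """objects: dict id -> set of features. Returns {(a, b): sim >= threshold}."""
    index = defaultdict(set)              # feature -> ids of objects having it
    for oid, feats in objects.items():
        for f in feats:
            index[f].add(oid)

    candidates = set()                    # only pairs sharing some feature
    for oids in index.values():
        candidates.update(combinations(sorted(oids), 2))

    result = {}
    for a, b in candidates:
        # Cosine similarity on binary sets.
        sim = len(objects[a] & objects[b]) / sqrt(len(objects[a]) * len(objects[b]))
        if sim >= threshold:
            result[(a, b)] = sim
    return result
```

On sparse data the candidate set is far smaller than all pairs, which is how practical APSS systems escape the quadratic worst case mentioned in the quote.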