Advances in Web Mining and Web Usage Analysis
DOI: 10.1007/978-3-540-77485-3_9
Towards a Scalable kNN CF Algorithm: Exploring Effective Applications of Clustering

Cited by 13 publications (10 citation statements). References 16 publications.
“…x = (weekday?, speed, flow, occupancy, visibility) (5), and the target value y is the speed data at the next moment. The feature weekday?…”
Section: A Gaussian Process Regression
confidence: 99%
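A minimal sketch of the setup the quoted passage describes: building the five-element input vector and regressing the next-step speed with a Gaussian process. The synthetic data, the RBF kernel, and the use of scikit-learn's GaussianProcessRegressor are illustrative assumptions, not the cited paper's implementation.

    # Sketch: GP regression on x = (weekday?, speed, flow, occupancy, visibility),
    # target y = speed at the next moment. Data and kernel are assumed for illustration.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    X = np.column_stack([
        rng.integers(0, 2, 200),      # weekday? (1 = weekday, 0 = weekend)
        rng.uniform(20, 70, 200),     # current speed
        rng.uniform(100, 900, 200),   # flow
        rng.uniform(0.0, 1.0, 200),   # occupancy
        rng.uniform(0.1, 10.0, 200),  # visibility
    ])
    y = X[:, 1] + rng.normal(0, 2, 200)  # toy target: next-moment speed

    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gpr.fit(X, y)
    mean, std = gpr.predict(np.array([[1, 55.0, 420.0, 0.3, 8.0]]), return_std=True)
    print(f"predicted next-step speed: {mean[0]:.1f} +/- {std[0]:.1f}")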
“…Large-scale Learning via kNN and Gaussian Process: k-nearest neighbors is a simple non-parametric algorithm that can solve both classification and regression problems. Apart from classification and regression, searching for nearest neighbors is the key procedure in many useful algorithms such as recommendation [5], dimensionality reduction [6], computer networking [7], and so on. In the most naïve implementation, kNN, as a lazy approach, needs O(nD) time to search for the closest neighbors of a new data point at prediction time, where n is the number of data points and D is the dimensionality.…”
Section: Past Work
confidence: 99%
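The O(nD) figure comes from scanning all n stored points and touching every one of their D coordinates for each query. A minimal NumPy sketch of that brute-force search (the function name and Euclidean metric are illustrative choices):

    import numpy as np

    def knn_brute_force(X, query, k):
        """Indices of the k nearest neighbors of `query` among the rows of X.

        Computing all n distances over D dimensions is the O(nD) step that
        makes naive kNN slow for large n, motivating clustering-based speedups.
        """
        dists = np.linalg.norm(X - query, axis=1)  # n distances, each O(D)
        return np.argsort(dists)[:k]               # sorting adds O(n log n)

    X = np.random.rand(10_000, 16)  # n = 10,000 points, D = 16
    q = np.random.rand(16)
    print(knn_brute_force(X, q, k=5))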
“…The basic user-based collaborative filtering (uCF) formula, as described in [2], [4], [5], uses PCC to predict the rating of user u on item m as follows:…”
Section: Related Work
confidence: 99%
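The quoted passage truncates before the formula itself. In its standard textbook form, the prediction is the active user's mean rating plus the PCC-weighted, mean-centered deviations of the other users who rated the item; whether [2], [4], [5] use exactly this variant is an assumption here. A minimal sketch of that standard form, using a dense ratings matrix with NaN for missing entries:

    import numpy as np

    def pearson(a, b):
        """PCC between two users' rating vectors over their co-rated items."""
        mask = ~np.isnan(a) & ~np.isnan(b)
        if mask.sum() < 2:
            return 0.0
        x, y = a[mask] - a[mask].mean(), b[mask] - b[mask].mean()
        denom = np.sqrt((x ** 2).sum() * (y ** 2).sum())
        return float((x * y).sum() / denom) if denom > 0 else 0.0

    def predict(R, u, m):
        """Standard uCF prediction: mean rating of user u plus the
        PCC-weighted, mean-centered ratings of item m by other users."""
        num = den = 0.0
        for v in range(R.shape[0]):
            if v == u or np.isnan(R[v, m]):
                continue
            w = pearson(R[u], R[v])
            num += w * (R[v, m] - np.nanmean(R[v]))
            den += abs(w)
        base = np.nanmean(R[u])
        return base + num / den if den > 0 else base

    # Toy 4-user x 5-item ratings matrix; NaN marks an unrated item.
    R = np.array([[5, 3, np.nan, 1, 4],
                  [4, np.nan, 4, 1, 5],
                  [1, 1, np.nan, 5, 2],
                  [np.nan, 2, 3, 4, 1]], dtype=float)
    print(f"predicted rating of user 0 on item 2: {predict(R, 0, 2):.2f}")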
“…Ideally, partitioning will improve the quality of collaborative filtering predictions and increase the scalability of collaborative filtering systems. For example, Rashid et al. [13] present ClustKNN, which is well suited to large datasets. They first use a clustering model to compress the data, and then use a simple nearest-neighbor-based approach to generate recommendations.…”
Section: Introduction
confidence: 98%
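The two-stage idea described above can be sketched as follows: an offline k-means pass compresses the n user profiles into c << n centroid "surrogate users", and the online phase searches for neighbors among those centroids only, cutting the per-query cost from O(nD) to O(cD). The use of scikit-learn's KMeans and the simple mean-of-neighbors ranking below are assumptions for illustration, not the exact ClustKNN procedure of [13].

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    R = rng.uniform(1, 5, size=(5_000, 50))  # toy dense user-item ratings

    # Offline: compress 5,000 user profiles into 20 centroid surrogates.
    km = KMeans(n_clusters=20, n_init=10, random_state=1).fit(R)
    centroids = km.cluster_centers_

    def recommend(user, k=3, top=5):
        """Online: find the k nearest centroids (O(cD) rather than O(nD))
        and rank items by the mean of those surrogate profiles."""
        d = np.linalg.norm(centroids - user, axis=1)
        nearest = centroids[np.argsort(d)[:k]]
        return np.argsort(nearest.mean(axis=0))[::-1][:top]

    print("top items for user 0:", recommend(R[0]))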