2011
DOI: 10.7763/ijmlc.2011.v1.25
RSVD-based Dimensionality Reduction for Recommender Systems

Abstract: We investigate dimensionality reduction methods from the perspective of their ability to produce a low-rank customer-product matrix representation. We analyze the results of using collaborative filtering based on SVD, RI, Reflective Random Indexing (RRI) and Randomized Singular Value Decomposition (RSVD) from the perspective of selected algebraic (i.e. application-independent) properties. We show that the Frobenius-norm optimality of SVD does not correspond to the optimal recommendation accuracy, when measured…
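As a concrete illustration of the low-rank customer-product representation discussed in the abstract, the following is a minimal sketch of a randomized SVD based on Gaussian random projection. The toy matrix R, the rank k and the oversampling parameter p are illustrative assumptions, not the paper's exact RSVD procedure (which builds on Random Indexing).

```python
import numpy as np

def randomized_svd(R, k, p=10, seed=0):
    """Rank-k approximation of R via random projection (sketch, not the paper's exact RSVD)."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = R.shape
    # Project the columns of R onto a random low-dimensional subspace.
    Omega = rng.standard_normal((n_cols, k + p))
    Y = R @ Omega                      # n_rows x (k+p) sample of the range of R
    Q, _ = np.linalg.qr(Y)             # orthonormal basis of the sampled range
    # Solve the small SVD in the reduced space and lift it back.
    B = Q.T @ R                        # (k+p) x n_cols
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_hat
    return U[:, :k], s[:k], Vt[:k, :]

# Toy customer-product rating matrix (values are made up for illustration).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)
U, s, Vt = randomized_svd(R, k=2, p=2)
R_k = U @ np.diag(s) @ Vt              # low-rank customer-product representation
print(np.round(R_k, 2))
```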

Cited by 7 publications (5 citation statements)
References 14 publications

“…Figure 7 presents the performance of the recommender algorithms compared in the investigated scenarios (i.e., scenarios S1-4). It may be concluded that, as already shown in [11], the RSVD method outperforms the other methods (i.e., PureSVD and RRI) when the standard input data representation is used. As far as the S1 scenario is concerned, that is, the one with the standard data representation based on a single actor-object coincidence matrix relation, it may be seen that the decomposition-based methods (i.e., PureSVD and RSVD) in general achieve comparable recommendation quality and perform better than RRI (for various values of the training ratio).…”
Section: Methods (supporting)
confidence: 56%
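As a rough illustration of the training-ratio comparison quoted above, the sketch below hides a fraction of the known ratings of a toy matrix and measures how well a rank-k SVD of the remaining data predicts them. The split helper, the RMSE metric and the data are assumptions and do not reproduce the cited study's scenarios or evaluation protocol.

```python
import numpy as np

def split_by_ratio(R, train_ratio, seed=0):
    """Randomly keep train_ratio of the known ratings for training and hide the rest for testing."""
    rng = np.random.default_rng(seed)
    rows, cols = np.nonzero(R)
    keep = rng.random(rows.size) < train_ratio
    R_train = np.zeros_like(R)
    R_train[rows[keep], cols[keep]] = R[rows[keep], cols[keep]]
    return R_train, (rows[~keep], cols[~keep])

def rank_k_rmse(R_train, test_idx, R_true, k=2):
    """Predict the held-out ratings from a rank-k SVD of the training matrix (illustrative baseline)."""
    U, s, Vt = np.linalg.svd(R_train, full_matrices=False)
    R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    err = R_hat[test_idx] - R_true[test_idx]
    return np.sqrt(np.mean(err ** 2))

# Toy rating matrix; zeros denote unknown ratings (values are made up).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)

for ratio in (0.5, 0.7, 0.9):
    R_train, test_idx = split_by_ratio(R, ratio, seed=42)
    if test_idx[0].size == 0:          # tiny toy data: the test set may come out empty
        continue
    print("training ratio", ratio, "RMSE", round(rank_k_rmse(R_train, test_idx, R), 3))
```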
“…Firstly, we have used the algorithm based on reflective random indexing [10]. Secondly, we have used two types of algorithms based on singular value decomposition: a traditional implementation of the method (PureSVD), in which actor vectors are represented as combinations of object vectors without any specific parameterization, and an implementation of the randomized singular value decomposition (RSVD) [11], which combines reflective random indexing with SVD. We have chosen these methods since SVD-based methods have long been considered the most efficient recommender engines in real-world settings [12][13][14][15].…”
Section: Related Work (mentioning)
confidence: 99%
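A minimal sketch of the PureSVD-style scoring mentioned above, in which an actor (user) vector is expressed as a combination of object (item) factor vectors and items are ranked by similarity to that profile. The toy matrix and the rank k are assumptions; the cited implementation may differ in detail.

```python
import numpy as np

# Toy user-item matrix; zeros mark unseen items (illustrative values only).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)

k = 2
_, _, Vt = np.linalg.svd(R, full_matrices=False)
V = Vt[:k, :].T                        # item factors: each item as a k-dimensional vector

# PureSVD-style scoring: represent the user by the combination of the factors
# of the items she rated, then rank the remaining items against that profile.
user = 0
scores = R[user] @ V @ V.T             # fold the user's rating row onto the item factors
unrated = np.where(R[user] == 0)[0]    # only rank items the user has not rated yet
ranking = unrated[np.argsort(scores[unrated])[::-1]]
print("ranked recommendations for user 0:", ranking)
```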
“…HPMPP has been compared with three widely referenced methods, namely pure Singular Value Decomposition (SVD) [13], Reflective Random Indexing (RRI) [4], and Randomized SVD-RI (RSVD-RI) [3]. For both RRI and RSVD-RI, the same configuration of the random indexing function has been used, i.e., the seed number s = 2 and the dimension number d = 500, which were chosen as the optimized settings for the MovieLens dataset according to the test results described in [17].…”
Section: Evaluation Methodology (mentioning)
confidence: 99%
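For context, index vectors in random indexing are commonly built as sparse ternary vectors of dimension d with a small number s of randomly placed +1/-1 entries. The sketch below follows that reading of the quoted s = 2, d = 500 configuration; the exact construction used in the cited work may differ.

```python
import numpy as np

def index_vectors(n_items, d=500, s=2, seed=0):
    """Sparse ternary index vectors: d dimensions, s randomly placed +/-1 entries per item.
    This follows the common random-indexing construction; the cited work may differ in detail."""
    rng = np.random.default_rng(seed)
    vectors = np.zeros((n_items, d))
    for i in range(n_items):
        positions = rng.choice(d, size=s, replace=False)
        vectors[i, positions] = rng.choice([-1.0, 1.0], size=s)
    return vectors

E = index_vectors(n_items=4, d=500, s=2)
print(E.shape, int(np.count_nonzero(E[0])))   # (4, 500) 2
```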
“…The most time-consuming operation is the full eigenvalue decomposition of matrix A, which is a square matrix whose size equals the total number of elements and facts, n + m. It has to be stressed that A is a much bigger matrix than the user-item matrices decomposed when the SVD method is used. Thus, the computational complexity of the HPMPP method depends mostly on the complexity of the eigendecomposition of matrix A, which is in the worst case equal to O((n + m)^3). Despite the fact that we use the optimized matrix eigendecomposition algorithm from the Intel Math Kernel Library (Intel MKL), the complexity of this operation is the main reason why the proposed method requires much more time than the other compared methods.…”
Section: Computational Complexity (mentioning)
confidence: 99%
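To illustrate the cubic cost quoted above, the following sketch times a full symmetric eigendecomposition (NumPy dispatches this to LAPACK, which may itself be backed by Intel MKL) for a few matrix sizes standing in for n + m. The sizes and the random matrices are arbitrary and do not reproduce the structure of the HPMPP matrix A.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
for size in (200, 400, 800):           # stands in for n + m; values are arbitrary
    M = rng.standard_normal((size, size))
    A = (M + M.T) / 2                  # symmetric matrix, as required by eigh
    t0 = time.perf_counter()
    np.linalg.eigh(A)                  # full eigendecomposition, O(size^3) in the worst case
    print(size, round(time.perf_counter() - t0, 3), "s")
```

Doubling the size should roughly multiply the runtime by eight, which is the behaviour the quoted complexity argument relies on.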
“…Dimensionality reduction techniques like Singular Value Decomposition have been explored in the past to derive a lower-rank approximation of the ratings matrix and consequently reduce processing time. Sarwar et al. [10] report that the SVD-based approach to recommendation produces results better than a traditional collaborative filtering algorithm in the case of a reasonably dense dataset. In [11] the authors investigate the utility of techniques such as SVD, Random Indexing (RI), Reflective Random Indexing (RRI) and Randomized Singular Value Decomposition (RSVD) for aiding collaborative filtering. They conclude that a combination of RRI and SVD delivers better recommendation diversity, though SVD results in lower prediction errors.…”
Section: Literature Review 2.1 Collaborative Filtering (mentioning)
confidence: 99%