2016
DOI: 10.1007/978-3-319-48740-3_20
Optimizing Factorization Machines for Top-N Context-Aware Recommendations

Abstract: Context-aware Collaborative Filtering (CF) techniques such as Factorization Machines (FM) have been proven to yield high precision for rating prediction. However, the goal of recommender systems is often framed as a top-N item recommendation task, and item ranking is a better formulation of the recommendation problem. In this paper, we present two collaborative rankers, namely Ranking Factorization Machines (RankingFM) and Lambda Factorization Machines (LambdaFM), which optimize the FM model f…

Cited by 13 publications (14 citation statements)
References 25 publications
“…Researchers have attempted to fill the gap where relations between users and items in context-aware recommender systems are missing. In other words, some context-aware recommender systems require supervision or manual tuning, rely on very strict assumptions [148], [149], or assume that different contexts carry the same weights [150]. Yuan et al. [148] proposed to optimize factorization machines (FM) for the item recommendation task based on implicit feedback (e.g., purchases, clicks, played songs).…”
Section: B. Contextual Awareness in Recommender Systems (mentioning)
confidence: 99%
“…In other words, some context-aware recommender systems require supervision or manual tuning, rely on very strict assumptions [148], [149], or assume that different contexts carry the same weights [150]. Yuan et al. [148] proposed to optimize factorization machines (FM) for the item recommendation task based on implicit feedback (e.g., purchases, clicks, played songs). This was done by (1) adopting an FM as the ranking function to model the interactions among context information (e.g., the duration a song is played) and applying it in a learning-to-rank (LtR) method with a pairwise cross-entropy (CE) loss, a procedure called RankingFM; and (2) comparing the top-N recommendation performance of RankingFM after adapting the original lambda weighting function (LambdaRank) with two alternative sampling schemes, a process called LambdaFM.…”
Section: B. Contextual Awareness in Recommender Systems (mentioning)
confidence: 99%
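The RankingFM procedure described in the statement above — an FM scoring function trained under a pairwise cross-entropy loss — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function names and the RankNet-style form of the loss are assumptions.

```python
import numpy as np

def fm_score(x, w0, w, V):
    """2-way Factorization Machine score for one feature vector x.

    x  : (d,) feature vector (user, item, and context features concatenated)
    w0 : global bias; w : (d,) linear weights; V : (d, k) factor matrix.
    """
    linear = w0 + w @ x
    # Pairwise interactions in O(d*k) via the standard FM identity:
    # sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f [(V^T x)_f^2 - ((V^2)^T x^2)_f]
    interactions = 0.5 * np.sum((V.T @ x) ** 2 - (V ** 2).T @ (x ** 2))
    return linear + interactions

def pairwise_ce_loss(score_pos, score_neg):
    """Pairwise cross-entropy on the score difference of a (positive,
    negative) item pair, RankNet-style; near zero when the positive
    item is scored far above the negative one."""
    return np.log1p(np.exp(-(score_pos - score_neg)))
```

Training would then sample (positive, negative) item pairs from the implicit feedback and follow the gradient of `pairwise_ce_loss` with respect to `w0`, `w`, and `V`.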
“…For instance, TFMAP [27] employs TF to capture the 3-way user-item-context relations and learns it by maximizing Mean Average Precision (MAP); in the same vein, CARS explores multiple objective functions to train a novel TF model. Ranking FM [18,31,32,10], on the other hand, exploits FM as the rating function to model pairwise feature interactions and builds the ranking algorithm by maximizing various ranking measures such as the Area Under the ROC Curve (AUC) and the Normalized Discounted Cumulative Gain (NDCG).…”
Section: Feature-Based Recommendation (mentioning)
confidence: 99%
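The two ranking measures named above, AUC and NDCG, can be computed for a scored item list as in this sketch (binary relevance assumed; the helper names are illustrative, not from the paper):

```python
import numpy as np

def auc(scores, labels):
    """AUC for binary relevance: the fraction of (positive, negative)
    item pairs that the scores rank in the correct order."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    return np.mean(pos[:, None] > neg[None, :])

def ndcg_at_k(scores, labels, k):
    """NDCG@k with binary gains and log2 position discounts."""
    order = np.argsort(-scores)[:k]          # indices of the top-k scored items
    gains = labels[order]
    discounts = 1.0 / np.log2(np.arange(2, len(order) + 2))
    dcg = np.sum(gains * discounts)
    ideal = np.sort(labels)[::-1][:k]        # best achievable gain ordering
    idcg = np.sum(ideal * discounts[:len(ideal)])
    return dcg / idcg if idcg > 0 else 0.0
```

Both measures are flat or discontinuous in the model parameters, which is why ranking methods in this line of work optimize smoothed surrogates of them rather than the measures directly.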
“…Examples of implicit feedback are clicks, purchases, watched videos, or played songs, which are much cheaper to obtain since a user does not have to express their taste explicitly. Implicit feedback is often one-class [22], i.e., only the positive class is available, and thus algorithms optimized for multiple classes cannot be directly applied to implicit feedback data [31,32]. In this paper, we study the problem of optimizing item ranking with implicit feedback and feature information.…”
Section: Introduction (mentioning)
confidence: 99%
“…The other side of the intuition is that low-ranked positive items contribute similarly to high-ranked non-positive items during the training process; more details are provided later. Due to space limitations, we leave out further details and refer interested readers to [29,19] for the technical part.…”
Section: Lambda Strategies (mentioning)
confidence: 99%
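The lambda intuition above — that a positive item buried deep in the current ranking should trigger a larger update — can be illustrated with a simple rank-aware weight. The harmonic form below is a common choice in the lambda-based ranking literature; it is used here purely as an assumed example, not the paper's exact weighting function:

```python
import math

def lambda_weight(rank_pos, n_items):
    """Rank-aware pair weight: the lower a positive item currently ranks
    (larger 0-based rank_pos), the larger the weight of its update.

    The harmonic numerator H(rank_pos + 1) and the log(n_items + 1)
    normalizer are illustrative assumptions, not the paper's definition.
    """
    harmonic = sum(1.0 / r for r in range(1, rank_pos + 2))
    return harmonic / math.log(n_items + 1.0)
```

Weighting pairs this way concentrates learning effort on mistakes near the top of the list, which is what top-N metrics such as NDCG reward.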