2020
DOI: 10.1109/tkde.2020.3016732

Collaborative List-and-Pairwise Filtering from Implicit Feedback

Cited by 21 publications (8 citation statements)
References 51 publications
“…This is because CLiMF focuses on the first relevant item of recommendation lists. This situation also appears in [18,50]. Meanwhile, Deep-SetRank outperforms the pairwise learning and listwise learning methods, which is also consistent with the results in Deep-SetRank [38].…”
Section: Experiments, 5.1 Experimental Setup (supporting)
confidence: 85%
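The contrast this citation draws is easy to see on a toy example. Below is a minimal sketch (not taken from any of the cited papers; the item lists and relevance sets are made up) comparing reciprocal rank, which only rewards the first relevant item, with recall@k, which credits every relevant item in the list.

```python
def reciprocal_rank(ranked, relevant):
    """1 / position of the first relevant item (0 if none appears)."""
    for pos, item in enumerate(ranked, start=1):
        if item in relevant:
            return 1.0 / pos
    return 0.0

def recall_at_k(ranked, relevant, k):
    """Fraction of relevant items that appear in the top-k."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant)

# Two hypothetical recommendation lists for the same user.
relevant = {"a", "b", "c"}
list_rr_focused = ["a", "x", "y", "z", "w"]   # nails only the first hit
list_balanced   = ["x", "a", "b", "c", "y"]   # first hit later, more hits overall

print(reciprocal_rank(list_rr_focused, relevant))  # 1.0  (first item is relevant)
print(reciprocal_rank(list_balanced, relevant))    # 0.5  (first hit at position 2)
print(recall_at_k(list_rr_focused, relevant, 5))   # ~0.33 (one of three relevant items)
print(recall_at_k(list_balanced, relevant, 5))     # 1.0  (all relevant items retrieved)
```

A model that optimizes only the first relevant position can dominate on reciprocal rank while trailing on metrics that reward the whole list, which is the behavior the citing authors observe for CLiMF.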
“…SetRank provides a new research perspective for listwise learning for implicit feedback, and achieves state-of-the-art ranking results. Another direction is directly maximizing the ranking metrics [1,21,33,50]. Since most of the ranking metrics are not differentiable, existing models approximate the ranking metrics and optimize a smoothed version, such as deriving the lower bound.…”
Section: Related Work (mentioning)
confidence: 99%
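As a rough illustration of the smoothing this citation refers to, here is a minimal sketch (our own construction, not code from the cited works) that replaces the non-differentiable rank indicator with a sigmoid, giving a differentiable stand-in for reciprocal rank; the function name, temperature parameter, and scores are assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smooth_reciprocal_rank(scores, rel_idx, temperature=1.0):
    """Differentiable surrogate for 1/rank of the item at rel_idx.

    The hard rank counts items scored above the relevant one:
        rank = 1 + sum_j 1[s_j > s_rel]
    The indicator is replaced by a sigmoid so gradients can flow.
    """
    s_rel = scores[rel_idx]
    others = np.delete(scores, rel_idx)
    soft_rank = 1.0 + np.sum(sigmoid((others - s_rel) / temperature))
    return 1.0 / soft_rank

scores = np.array([2.1, 0.3, 1.7, -0.5])          # model scores for 4 candidate items
print(smooth_reciprocal_rank(scores, rel_idx=2))  # smooth stand-in for 1/rank of item 2
```

Optimizing a surrogate like this (or a lower bound of it, as in CLiMF-style objectives) is what makes metric-maximizing training tractable, at the cost of only approximating the true ranking metric.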
“…From a technological perspective, collaborative filtering has been a widely used technique in recommender systems. It generates recommendations by leveraging the user-item interactions derived from historical data [44]. At the same time, the collaborative activity requires a structured and open environment where individuals can actively exchange and elaborate their ideas to achieve a high-quality problem-solving solution.…”
Section: Reciprocal Preference Evaluation Methods (mentioning)
confidence: 99%
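For the mechanism this citation describes, a minimal item-based collaborative-filtering sketch over a toy binary interaction matrix (hypothetical data, not from the cited work) might look like the following.

```python
import numpy as np

# Toy binary user-item interaction matrix (rows: users, cols: items);
# 1 means the user interacted with the item (implicit feedback).
R = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
], dtype=float)

# Item-item cosine similarity computed from co-occurring interactions.
norms = np.linalg.norm(R, axis=0, keepdims=True) + 1e-8
sim = (R.T @ R) / (norms.T @ norms)

# Score unseen items for user 0 by similarity to items they interacted with.
user = 0
scores = R[user] @ sim
scores[R[user] > 0] = -np.inf          # mask already-seen items
print(np.argsort(-scores)[:2])         # top-2 recommended item indices for user 0
```

The point of the sketch is only that recommendations come entirely from historical user-item interactions, with no item content or user profile features involved.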
“…We use AUC (Area Under ROC Curve) and Logloss as the evaluation metrics [2,12,17,22,26], which are widely used to assess the performance of a CTR model.…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
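For reference, here is a minimal sketch of computing these two metrics with scikit-learn; the labels and predicted probabilities are made up, and the choice of scikit-learn is our assumption rather than the cited papers' actual setup.

```python
from sklearn.metrics import roc_auc_score, log_loss

y_true = [0, 1, 1, 0, 1, 0]              # observed clicks (1) / non-clicks (0)
y_prob = [0.1, 0.8, 0.6, 0.3, 0.4, 0.2]  # predicted click-through probabilities

print("AUC    :", roc_auc_score(y_true, y_prob))  # ranking quality of the scores
print("Logloss:", log_loss(y_true, y_prob))       # calibration of the probabilities
```

AUC measures how well the model ranks positives above negatives, while Logloss penalizes miscalibrated probabilities, which is why CTR work typically reports both.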