Proceedings of the 13th ACM Conference on Recommender Systems 2019
DOI: 10.1145/3298689.3346999
Attribute-aware non-linear co-embeddings of graph features

Cited by 31 publications (23 citation statements)
References 13 publications
“…Other works, taking a different approach, i.e. [24]- [26] propose the use of deep learning models for learning latent features from side information of users and items. Recommendation models using deep learning techniques have become a particularly popular area of research in recent years, following their success in multiple application domains such as computer vision and natural language processing [27].…”
Section: Related Work, A. Collaborative Filtering Enriched With Si…
confidence: 99%
“…In addition to the three baselines, we also compared UserReg against multiple state-of-the-art rating prediction models. GraphRec [18] and IGMC [17] are the two most recently proposed CF models for rating prediction task using ML-100K and ML-1M. Based on [18,17], we found a richer set of rating prediction models that present state-of-the-art results, including (1) graph neural networks (GNNs) based ones: GRALS [23], GC-MC [24], sRGCNN [25] and…”
Section: On MovieLens
confidence: 99%
“…For ML-1M, we used λ = 0.1 and β = 8 for the 90/10 train/test split, and λ = 0.1, β = 5 for the 50/50 split. The results are depicted in Table 2, where the best performance figures are shown in bold (results of recent models were taken from [14,15,17,18]).…”
Section: On MovieLens
confidence: 99%
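The evaluation protocol quoted above (a 90/10 train/test split of rating triples, an L2 regularization weight λ, and test RMSE as the comparison metric) can be sketched minimally as follows. This is an illustrative plain matrix-factorization baseline on synthetic data, not UserReg itself; the meaning of the paper's β parameter is model-specific and is not reproduced here, and all sizes and learning rates below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic (user, item, rating) triples standing in for a MovieLens-style dataset.
n_users, n_items, n_ratings, k = 50, 40, 2000, 8
users = rng.integers(0, n_users, n_ratings)
items = rng.integers(0, n_items, n_ratings)
ratings = rng.integers(1, 6, n_ratings).astype(float)

# 90/10 train/test split, as in the quoted protocol.
idx = rng.permutation(n_ratings)
cut = int(0.9 * n_ratings)
train, test = idx[:cut], idx[cut:]

# Plain matrix factorization trained by SGD, with L2 weight lam
# playing the role of the λ hyperparameter in the quote.
P = 0.1 * rng.standard_normal((n_users, k))
Q = 0.1 * rng.standard_normal((n_items, k))
lam, lr = 0.1, 0.01
for _ in range(30):
    for t in train:
        u, i, r = users[t], items[t], ratings[t]
        err = r - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - lam * P[u])
        Q[i] += lr * (err * P[u] - lam * Q[i])

# Test RMSE, the metric used in the MovieLens comparisons.
pred = np.einsum("ij,ij->i", P[users[test]], Q[items[test]])
rmse = np.sqrt(np.mean((ratings[test] - pred) ** 2))
print(rmse)
```

Because the ratings here are uniform noise, the resulting RMSE is only a sanity check of the pipeline, not a number comparable to the figures reported in Table 2 of the citing paper.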