Proceedings of the 24th ACM International Conference on Information and Knowledge Management, 2015
DOI: 10.1145/2806416.2806527
Deep Collaborative Filtering via Marginalized Denoising Auto-encoder

Cited by 361 publications (221 citation statements); references 25 publications.
“…2) and a deep autoencoder is that the latter has multiple layers of encoders and decoders. However, there is no significant gain in going deeper, as was shown in [23,24]. In recent years, with the success of deep learning in almost all areas of applied machine learning, such techniques have been leveraged for collaborative filtering as well; see for instance [23-26].…”
Section: Fig 2 Autoencoder
confidence: 99%
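To ground the distinction this excerpt draws, here is a minimal sketch of a single-hidden-layer autoencoder applied to collaborative filtering (in the spirit of AutoRec-style models); the rating matrix, shapes, and hyperparameters are all illustrative assumptions, not taken from the cited papers.

```python
# Minimal single-hidden-layer autoencoder for collaborative filtering.
# R holds ratings with 0 marking unobserved entries; the loss is computed
# only over observed entries. All sizes and rates are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_hidden = 100, 50, 16
R = rng.integers(0, 6, size=(n_users, n_items)).astype(float)  # 0 = unobserved
mask = (R > 0).astype(float)                                   # observed entries

W1 = rng.normal(0, 0.1, (n_items, n_hidden))  # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, n_items))  # decoder weights
lr = 0.01

for epoch in range(200):
    H = np.tanh(R @ W1)           # encode each user's rating vector
    R_hat = H @ W2                # decode (reconstruct ratings)
    err = (R_hat - R) * mask      # error only on observed ratings
    # Backpropagation of the squared reconstruction error
    gW2 = H.T @ err
    gH = (err @ W2.T) * (1 - H**2)
    gW1 = R.T @ gH
    W1 -= lr * gW1 / n_users
    W2 -= lr * gW2 / n_users
```

A "deep" variant would simply stack additional encode/decode layers between `W1` and `W2`, which, per the excerpt, yields little additional gain.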
“…However, these models are typically simple neural network models and do not incorporate any content information. Recent work integrating deep learning with collaborative filtering mostly focuses on extracting content features from a single modality such as text [35-37] or images [38-40].…”
Section: Related Work
confidence: 99%
“…We first obtain the high-level feature representations from the CAE, and then integrate them into the AutoSVD and AutoSVD++ models. An alternative optimization approach, which optimizes the CAE and AutoSVD (AutoSVD++) simultaneously, could also be applied [6]. However, the latter approach needs to recompute all the item content feature vectors when a new item arrives, while in the sequential setting, item feature representations only need to be computed once and stored for reuse.…”
Section: Optimization
confidence: 99%
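A hedged sketch of the sequential strategy this excerpt describes: item content features are computed once by a pretrained encoder and cached, then reused inside an AutoSVD-style predictor. `pretrained_cae_encode` and all shapes below are hypothetical stand-ins, not the authors' code.

```python
# Sequential optimization: encode item content once, cache, then reuse the
# cached features in an AutoSVD-style rating predictor. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_items, content_dim, k = 50, 300, 16
eps = 0.1                                  # content-feature scaling, as in AutoSVD
item_content = rng.random((n_items, content_dim))

def pretrained_cae_encode(x):
    # Placeholder for the encoder of a trained contractive auto-encoder (CAE).
    W = np.ones((content_dim, k)) * 0.01   # frozen-weight stand-in
    return np.tanh(x @ W)

# Stage 1: compute each item's feature vector once and cache it for reuse.
cached_features = np.stack([pretrained_cae_encode(c) for c in item_content])

# Stage 2: plug the cached features into the predictor; only the latent
# factors and biases below would be learned, the cached features stay fixed.
mu = 3.5                                   # global rating mean (illustrative)
b_u, b_i = rng.normal(0, 0.01, 1), rng.normal(0, 0.01, n_items)
p_u = rng.normal(0, 0.1, k)                # one user's latent factor
q = rng.normal(0, 0.1, (n_items, k))       # item latent factors

def predict(user_factor, user_bias, item_id):
    # AutoSVD-style item representation: latent factor plus scaled content feature
    q_i = q[item_id] + eps * cached_features[item_id]
    return mu + user_bias + b_i[item_id] + user_factor @ q_i

print(predict(p_u, b_u[0], item_id=3))
```

The design point from the excerpt is visible in stage 1: when a new item arrives, a single encoder pass extends the cache, whereas joint optimization would force recomputing every item's feature vector.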
“…• mSDA-CF [6]: a model that combines PMF with marginalized stacked denoising auto-encoders.…”
Section: Overall Comparison
confidence: 99%
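For context on this baseline, below is a hedged sketch of a single marginalized denoising auto-encoder (mDA) layer with the closed-form weights of Chen et al., the building block that mSDA-CF stacks and combines with PMF; the data matrix and corruption level are illustrative assumptions.

```python
# One mDA layer: instead of sampling corrupted copies, the expected
# reconstruction weights are solved in closed form, W = P Q^{-1}.
import numpy as np

def mda_layer(X, p, reg=1e-5):
    """X: d x n data matrix (columns are examples), p: corruption probability."""
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])             # append bias row
    q = np.concatenate([np.full(d, 1 - p), [1.0]])   # feature-survival probabilities
    S = Xb @ Xb.T                                    # scatter matrix
    Q = S * np.outer(q, q)                           # E[corrupted scatter], off-diagonal
    np.fill_diagonal(Q, q * np.diag(S))              # diagonal uses q_i, not q_i^2
    P = S[:d, :] * q                                 # E[input-times-corrupted scatter]
    W = np.linalg.solve(Q + reg * np.eye(d + 1), P.T).T  # closed-form mapping
    return np.tanh(W @ Xb)                           # nonlinear hidden representation

rng = np.random.default_rng(0)
X = rng.random((20, 100))                            # 20-dim features, 100 items
H = mda_layer(X, p=0.3)                              # one layer; stack by feeding H back in
print(H.shape)  # (20, 100)
```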