Proceedings of the 2018 ACM SIGIR International Conference on Theory of Information Retrieval
DOI: 10.1145/3234944.3234956
Attentive Contextual Denoising Autoencoder for Recommendation

Cited by 43 publications (24 citation statements)
References 23 publications
“…This is a different strategy compared with the interaction-shared layer commonly used by a considerable number of DNN-based recommenders. Jhamb et al (2018) propose a DAE architecture that encodes contextual information via an attention mechanism. The Attentive Contextual Denoising Autoencoder (ACDA) applies context weights to users and items and incorporates them into the user's rating history to learn a unified latent representation suited to recommendation ranking tasks.…”
Section: Synthesis Of Main Primary Studies
confidence: 99%
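The ACDA pipeline summarized in the statement above can be sketched in a few lines of NumPy. This is a minimal illustration under assumed shapes, not the paper's implementation: the corruption rate, the fusion of attended context with the corrupted ratings by addition, and all parameter names (`W_enc`, `W_dec`, `attn`) are hypothetical, and training is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_ctx, hidden = 6, 3, 4

# Hypothetical toy data: one user's implicit-feedback vector and context features.
ratings = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0])
context = rng.normal(size=(n_ctx, n_items))  # e.g. time / location / companion signals

# Randomly initialised parameters (a real model would learn these).
W_enc = rng.normal(scale=0.1, size=(n_items, hidden))
W_dec = rng.normal(scale=0.1, size=(hidden, n_items))
attn = rng.normal(scale=0.1, size=(n_ctx,))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def acda_forward(ratings, context, p_drop=0.3):
    # 1. Denoising: randomly mask entries of the rating vector.
    mask = rng.random(n_items) > p_drop
    corrupted = ratings * mask
    # 2. Attention: softmax weights over the context sources.
    alpha = np.exp(attn) / np.exp(attn).sum()
    ctx_mix = alpha @ context            # weighted combination of contexts
    # 3. Encode the corrupted ratings fused with the attended context.
    h = sigmoid((corrupted + ctx_mix) @ W_enc)
    # 4. Decode back to a score per item, usable for ranking.
    return sigmoid(h @ W_dec)

scores = acda_forward(ratings, context)
```

The key point the statement makes is step 2: context is not concatenated uniformly but weighted by attention before being folded into the rating reconstruction.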
“…First, the sigmoid function constrains the output of each neuron to (0,1), which may reduce the expressiveness of the model; it also saturates, so neurons stop learning when their output approaches either 0 or 1. Second, tanh is a better candidate and has been used extensively [12,48,49]. However, it only mitigates the sigmoid's problems to a degree, since it is a rescaled variant of the sigmoid (tanh(x/2) = 2σ(x) − 1).…”
Section: B Multi-layer Perceptron
confidence: 99%
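The identity quoted above, tanh(x/2) = 2σ(x) − 1, follows from writing both sides over e^x: tanh(x/2) = (e^x − 1)/(e^x + 1) and 2σ(x) − 1 = (1 − e^(−x))/(1 + e^(−x)), which are equal. A quick numeric check:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# tanh is a shifted-and-scaled sigmoid: tanh(x/2) = 2*sigmoid(x) - 1.
for x in [-4.0, -0.5, 0.0, 1.2, 3.0]:
    assert abs(math.tanh(x / 2) - (2 * sigmoid(x) - 1)) < 1e-12
```

Because the two functions differ only by an affine transform of the output, tanh inherits the sigmoid's saturation for large |x|, which is exactly the limitation the statement describes.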
“…DNN techniques have already been applied effectively and have achieved favorable results in several research areas, such as Computer Vision (CV), speech recognition, and Natural Language Processing (NLP) [10], [11]. Research findings show that artificial neural networks have a strong ability to learn latent, N-dimensional characteristics from both homogeneous and heterogeneous information and to achieve accurate results [12], [13]. Among these, Embedding-based (EB) methods and the Multi-Layer Perceptron (MLP) have recently been applied extensively in recommender systems.…”
Section: Introduction
confidence: 99%
“…Attention has also been applied successfully in various recommendation tasks [38]-[43]. For example, MPCN [40] is a multi-pointer co-attention network that takes user and item reviews as input and extracts the most informative reviews, which contribute most to the predictions.…”
Section: Attention Mechanism
confidence: 99%
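The review-selection idea described above reduces, at its core, to attention pooling: score each review against a query, normalize the scores with a softmax, and form a weighted average. A minimal sketch with assumed shapes; the query-as-user-representation and dot-product scoring are simplifying assumptions, not MPCN's actual pointer mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy setup: 5 review embeddings of dimension 8 for one item,
# and a user representation used as the attention query.
reviews = rng.normal(size=(5, 8))
query = rng.normal(size=(8,))

def attention_pool(reviews, query):
    # Score each review against the query, softmax-normalise (with the usual
    # max-subtraction for numerical stability), then pool.
    scores = reviews @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights, weights @ reviews

weights, pooled = attention_pool(reviews, query)
```

The attention weights make the model interpretable: the largest entry of `weights` identifies which review dominated the pooled representation.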