Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/600

Feature-level Deeper Self-Attention Network for Sequential Recommendation

Abstract: Sequential recommendation, which aims to recommend the next item that a user is likely to interact with in the near future, has become essential in various Internet applications. Existing methods usually consider the transition patterns between items but ignore the transition patterns between the features of items. We argue that item-level sequences alone cannot reveal the full sequential patterns, while explicit and implicit feature-level sequences can help extract the full sequential patterns. In this paper, we propose…
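The two-branch idea the abstract describes is easiest to see in code. Below is a minimal sketch, assuming a PyTorch framing: one self-attention block over the item sequence, another over the feature sequence, and the two final states concatenated to score the next item. All class and variable names, dimensions, and hyperparameters are illustrative assumptions, and the sketch omits positional embeddings, causal masking, and the paper's exact attention details.

```python
import torch
import torch.nn as nn

class FeatureLevelSelfAttention(nn.Module):
    """Toy two-branch model: one self-attention stack over the item
    sequence, another over the (categorical) feature sequence; the two
    final states are concatenated to score the next item."""

    def __init__(self, n_items, n_features, dim=64, heads=2, layers=1):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim)
        self.feat_emb = nn.Embedding(n_features, dim)
        make_encoder = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True),
            num_layers=layers)
        self.item_attn = make_encoder()  # item-level transition patterns
        self.feat_attn = make_encoder()  # feature-level transition patterns
        self.out = nn.Linear(2 * dim, n_items)

    def forward(self, item_seq, feat_seq):
        # item_seq, feat_seq: (batch, seq_len) integer ids
        h_item = self.item_attn(self.item_emb(item_seq))
        h_feat = self.feat_attn(self.feat_emb(feat_seq))
        # use the representation at the last position for next-item scoring
        h = torch.cat([h_item[:, -1], h_feat[:, -1]], dim=-1)
        return self.out(h)  # (batch, n_items) scores

model = FeatureLevelSelfAttention(n_items=1000, n_features=50)
scores = model(torch.randint(0, 1000, (4, 10)),
               torch.randint(0, 50, (4, 10)))
```

The key design point is that the feature sequence gets its own attention stack rather than being folded into the item embeddings, so feature-to-feature transitions are modeled explicitly.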

Cited by 218 publications (95 citation statements) | References 8 publications
“…Attribute-aware sequential models such as TransFM [16] and FDSA [29] leverage contextual features to improve sequential recommender models, in which these features are treated as auxiliary information to enhance the representation of items or sequences. In our S³-Rec, the L_AAP and L_MAP losses aim to fuse attributes with items or sequential contexts, which achieves the same effect as previous methods [16,29]. Besides, the pre-trained data representations can also be applied to improve existing methods.…”
Section: Discussion
confidence: 99%
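As a rough illustration of what "fusing attributes with items through a loss" can look like, here is a hedged sketch of an associated-attribute-prediction style objective. The bilinear scoring form and all names are assumptions chosen for brevity, not the exact S³-Rec formulation.

```python
import torch
import torch.nn.functional as F

def attribute_fusion_loss(item_repr, attr_emb, attr_targets, W):
    """item_repr:    (batch, d) contextual item representations
    attr_emb:     (n_attrs, d) attribute embedding table
    attr_targets: (batch, n_attrs) multi-hot ground-truth attributes
    W:            (d, d) learned bilinear interaction matrix"""
    logits = item_repr @ W @ attr_emb.t()  # (batch, n_attrs) scores
    return F.binary_cross_entropy_with_logits(logits, attr_targets)

loss = attribute_fusion_loss(torch.randn(8, 64),
                             torch.randn(30, 64),
                             torch.randint(0, 2, (8, 30)).float(),
                             torch.randn(64, 64))
```

Optimizing such a loss pushes an item's contextual representation to be predictive of its attributes, which is the fusion effect the statement above refers to.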
“…(10) SASRec_F is our extension of SASRec, which concatenates the representations of item and attribute as the input to the model. (11) FDSA [29] constructs a feature sequence and uses a feature-level self-attention block to model the feature transition patterns. This is the state-of-the-art model in sequential recommendation.…”
Section: Baseline Models
confidence: 99%
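For contrast with the two-branch sketch above, the SASRec_F-style input can be sketched as a plain concatenation of item and attribute embeddings projected back to the model dimension; the exact fusion in the cited work may differ, and all names here are hypothetical.

```python
import torch
import torch.nn as nn

dim = 64
item_emb = nn.Embedding(1000, dim)
attr_emb = nn.Embedding(50, dim)
fuse = nn.Linear(2 * dim, dim)  # project the concatenation back to dim

item_seq = torch.randint(0, 1000, (4, 10))
attr_seq = torch.randint(0, 50, (4, 10))
x = fuse(torch.cat([item_emb(item_seq), attr_emb(attr_seq)], dim=-1))
# x: (4, 10, 64), fed to a single item-level self-attention stack,
# unlike FDSA, which keeps a separate feature-level attention block.
```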
“…For instance, Shani et al. [17] examine the relation between a pair of items and then predict the probability of the next set of items for interaction. The attention mechanism is also applied in SRSs; FDSA, for instance, uses this technique to capture transition patterns between the features of items [18].…”
Section: A. General Recommender Systems
confidence: 99%
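The pairwise-transition idea can be illustrated with first-order Markov counts over interaction sequences. This toy sketch is a simplification of the MDP-based approach in [17]; function and variable names are hypothetical.

```python
from collections import Counter, defaultdict

def next_item_probs(sequences):
    """Estimate P(next | current) from observed interaction sequences."""
    trans = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):  # consecutive item pairs
            trans[cur][nxt] += 1
    return {cur: {nxt: c / sum(cnt.values()) for nxt, c in cnt.items()}
            for cur, cnt in trans.items()}

probs = next_item_probs([["a", "b", "c"], ["a", "b", "d"]])
# probs["b"] -> {"c": 0.5, "d": 0.5}
```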
“…Unlike the original LSTM model, we modified the LSTM by introducing the attentive embedding of the item's image, I_uv, into the generation of words. Suppose the word list of user u commenting on item v is w_uv = {w_uv^1, w_uv^2, …}.…”
Section: Review-driven Visual Attention Model
confidence: 99%
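A minimal sketch of the modification this statement describes: at each decoding step, attention weights over image region features, conditioned on the hidden state, produce an attended image vector that is concatenated with the word embedding as the LSTM input. Shapes, names, and the additive attention form are assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class AttentiveWordDecoder(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.attn = nn.Linear(2 * dim, 1)      # scores each image region
        self.cell = nn.LSTMCell(2 * dim, dim)  # input: word + image vector
        self.out = nn.Linear(dim, vocab_size)

    def step(self, word, regions, h, c):
        # word: (batch,) token ids; regions: (batch, n_regions, dim)
        q = h.unsqueeze(1).expand(-1, regions.size(1), -1)
        a = torch.softmax(self.attn(torch.cat([q, regions], -1)), dim=1)
        img = (a * regions).sum(1)             # attended image embedding
        h, c = self.cell(torch.cat([self.word_emb(word), img], -1), (h, c))
        return self.out(h), h, c               # logits for the next word

dec = AttentiveWordDecoder(vocab_size=5000)
h = c = torch.zeros(4, 64)
logits, h, c = dec.step(torch.randint(0, 5000, (4,)),
                        torch.randn(4, 49, 64), h, c)
```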
“…Nowadays, when buying fashion products online, users' decisions are primarily affected by the appearance of the products [1]. However, invisible features that cannot be observed from the image, such as the material and quality of the clothes, also affect users' decisions.…”
Section: Introduction
confidence: 99%