2020
DOI: 10.1109/tkde.2019.2913394

A Hierarchical Attention Model for Social Contextual Image Recommendation

Abstract: Image-based social networks are among the most popular social networking services in recent years. With a tremendous number of images uploaded every day, understanding users' preferences on user-generated images and making recommendations have become an urgent need. In fact, many hybrid models have been proposed to fuse various kinds of side information (e.g., image visual representation, social network) and user-item historical behavior for enhancing recommendation performance. However, due to the unique characteristics o…
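The fusion the abstract describes is typically realized as attention-weighted pooling of several side-information embeddings. The following is only a minimal sketch of such softmax-attention pooling; the function name, dimensions, and toy data are assumptions for illustration, not the paper's actual hierarchical attention model.

```python
# Minimal illustrative sketch (not the paper's architecture): softmax-attention
# pooling that fuses several side-information embeddings (e.g., visual, social,
# behavioral) for one user-item pair. Names and dimensions are assumptions.
import numpy as np

def attention_fuse(query, aspect_vectors):
    """Weight each aspect embedding by its softmax-normalized affinity
    with the user query vector, then return the weighted sum."""
    scores = aspect_vectors @ query            # one score per aspect
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over aspects
    return weights @ aspect_vectors            # fused representation

# Toy usage: three 4-dimensional aspect embeddings for one user.
rng = np.random.default_rng(0)
user_query = rng.normal(size=4)
aspects = rng.normal(size=(3, 4))
fused = attention_fuse(user_query, aspects)
```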


Cited by 76 publications (27 citation statements)
References 50 publications (68 reference statements)
“…Along this line, methods for modeling text pairs [23,42] can be applied to learn the similarity of video-exercise pairs based on their textual materials. However, visual information can be used to enhance sentence semantic understanding [46] and recommendation performance [38]. Therefore, in VENet, we exploit the visual data as supplementary information to the closed captions to accurately understand and represent the video.…”
Section: Similarity Measurement In Educationmentioning
confidence: 99%
“…For example, YouEDU [1] automatically recommends related video snippets for forum posts based on the cosine similarity between the closed captions and the post description. However, visual information can be used to enhance sentence semantic understanding [46] and recommendation performance [38]. Besides, most of them depend heavily on scarce segment-level labeled data.…”
Section: Introductionmentioning
confidence: 99%
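For context, the text-only caption-to-post matching this statement attributes to YouEDU can be approximated with TF-IDF vectors and cosine similarity. The sketch below is a toy example under that assumption; the sample texts and variable names are invented, and it is not the cited system's actual pipeline.

```python
# Toy sketch of text-only caption/post matching via cosine similarity.
# Sample texts are invented; this is not YouEDU's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

captions = [
    "gradient descent updates the weights step by step",
    "backpropagation computes gradients layer by layer",
]
post = ["how do I compute gradients for each layer of my network?"]

vectorizer = TfidfVectorizer(stop_words="english")
caption_vecs = vectorizer.fit_transform(captions)       # fit on snippet captions
post_vec = vectorizer.transform(post)                   # embed the forum post

scores = cosine_similarity(post_vec, caption_vecs)[0]   # one score per snippet
recommended = scores.argmax()                            # highest-scoring snippet
```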
“…Given the rating matrix R and the corresponding images of items, we identify that a user's preferences are influenced by three aspects. Specifically, the rating matrix R from each user's feedback is well recognized as a latent and important factor in image recommendation (He and McAuley 2016; Wu et al. 2019). When a user sees a new image, it's natural to pay attention to the style and content of the image.…”
Section: Prediction Functionmentioning
confidence: 99%
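For reference, the "rating matrix R as a latent factor" view mentioned above, combined with an item's visual feature (in the spirit of He and McAuley 2016), reduces to an inner-product prediction. The sketch below is an illustrative approximation with made-up names and dimensions, not the model of the cited papers.

```python
# Illustrative latent-factor prediction with a visual term, in the spirit of
# VBPR-style models (He and McAuley 2016). Names, dimensions, and the random
# initialization are assumptions; in practice these parameters are learned
# from the rating matrix R.
import numpy as np

rng = np.random.default_rng(1)
k = 8                                   # latent dimension
p_u = rng.normal(size=k)                # user latent factors
q_i = rng.normal(size=k)                # item latent factors
x_i = rng.normal(size=128)              # visual feature of the item's image
E = rng.normal(size=(k, 128)) * 0.01    # projection from visual to latent space

r_hat = p_u @ (q_i + E @ x_i)           # predicted preference of user u for item i
```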
“…Visual signals are playing increasingly important roles in users' daily lives (Wu et al. 2019). Especially in today's fast-paced society, users are reluctant to spend much time reading textual content, but prefer to browse visual images (Gelli et al. 2018).…”
Section: Introductionmentioning
confidence: 99%