2019
DOI: 10.1109/tkde.2018.2866041

Online Interactive Collaborative Filtering Using Multi-Armed Bandit with Dependent Arms

Abstract: Online interactive recommender systems strive to promptly suggest appropriate items (e.g., movies, news articles) to consumers according to the current context, including both the consumer and item content information. However, such context information is often unavailable in practice, and only the users' interaction data on items can be utilized. Moreover, the lack of interaction records, especially for new users and items, further worsens recommendation performance. To address t…
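As background for the abstract, the following is a minimal sketch of the interactive-recommendation loop that bandit-based recommenders run. It uses independent Beta-Bernoulli Thompson sampling and deliberately omits the latent factors, particle filtering, and dependent-arm modeling that the paper itself proposes; all names and the simulated click rates are illustrative assumptions, not the paper's ICTR algorithm.

```python
import numpy as np

# Minimal interactive-recommendation loop with Thompson sampling over
# Bernoulli rewards (clicks). Arms (items) are treated as independent,
# unlike the dependent-arm model of the paper; this only illustrates the
# recommend -> observe feedback -> update cycle.

class BernoulliTS:
    def __init__(self, n_items, alpha0=1.0, beta0=1.0):
        # Beta(alpha, beta) posterior over each item's click probability.
        self.alpha = np.full(n_items, alpha0)
        self.beta = np.full(n_items, beta0)

    def recommend(self):
        # Sample a click probability for every item, recommend the argmax.
        samples = np.random.beta(self.alpha, self.beta)
        return int(np.argmax(samples))

    def update(self, item, clicked):
        # Conjugate Beta-Bernoulli update from the observed feedback.
        self.alpha[item] += clicked
        self.beta[item] += 1 - clicked

# Simulated interaction loop with hypothetical click-through rates.
true_ctr = np.array([0.05, 0.10, 0.30, 0.15])
bandit = BernoulliTS(n_items=len(true_ctr))
for t in range(1000):
    item = bandit.recommend()
    clicked = np.random.rand() < true_ctr[item]
    bandit.update(item, clicked)
print(bandit.alpha / (bandit.alpha + bandit.beta))  # posterior mean CTR estimates
```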

Cited by 65 publications (34 citation statements)
References 39 publications
“…On the other hand, CoLin [50] assumes that the reward in the bandit is generated through an additive model, indicating that friends' feedback on their recommendations can be passed via the network to explain the target user's feedback. Recently, researchers have also tried to capture arm dependency by organizing different arms into different clusters [36, 48], which is very similar to our studied problem. However, simply performing clustering on attributed networks may ignore the inherent node dependencies and may lead to suboptimal results in interactive anomaly discovery.…”
Section: Multi-armed Bandit Algorithms
confidence: 96%
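The additive reward assumption attributed to CoLin [50] in the excerpt above can be made concrete with a small, hedged sketch: a user's expected reward is the arm feature vector dotted with a network-weighted combination of per-user parameters. The influence matrix W, the parameter matrix Theta, and the function name are illustrative assumptions; this shows only the reward model, not CoLin's estimation or exploration procedure.

```python
import numpy as np

# Schematic of the additive reward model: feedback propagates over the
# social network, so neighbours' bandit parameters contribute additively
# to the target user's expected reward. Illustration only.

def expected_reward(x_arm, W, Theta, user):
    """x_arm: (d,) arm feature vector
    W: (n_users, n_users) influence weights, rows sum to 1
    Theta: (n_users, d) per-user bandit parameters
    user: index of the target user"""
    blended_theta = W[user] @ Theta  # network-weighted parameter vector, shape (d,)
    return float(x_arm @ blended_theta)

# Tiny example with 3 users and 2-dimensional arm features (made-up numbers).
W = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])
Theta = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.5, 0.5]])
print(expected_reward(np.array([0.8, 0.2]), W, Theta, user=0))
```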
“…In the early studies of CRSs, it was common to ask for users' opinions on an item itself (Zhao et al., 2013; Wang et al., 2017). These approaches usually combine the features of static recommender systems, such as CF, with user interaction in real time.…”
Section: Item Elicitation
confidence: 99%
“…In [28], the authors studied online collaborative filtering with dependent arms, where they combined particle filtering with the upper confidence bound (UCB) algorithm and Thompson sampling. The limitation of the work is that it does not consider adversarial effects.…”
Section: Related Work
confidence: 99%
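To illustrate the combination of particle filtering with Thompson sampling that the excerpt for [28] describes, here is a hedged per-arm sketch: each arm's unknown click probability is represented by weighted particles, a Thompson draw picks one particle per arm, and the observed feedback reweights and resamples the chosen arm's particles. The actual method in [28] filters over latent user and item factors jointly; the class and variable names here are assumptions for illustration only.

```python
import numpy as np

# Per-arm particle filter combined with Thompson sampling (illustrative).
# Particles are candidate click probabilities; weights approximate the
# posterior given the clicks observed so far.

class ParticleTSArm:
    def __init__(self, n_particles=200, rng=None):
        self.rng = rng or np.random.default_rng()
        self.particles = self.rng.uniform(0, 1, n_particles)  # candidate CTRs
        self.weights = np.full(n_particles, 1.0 / n_particles)

    def sample(self):
        # Thompson step: draw one particle according to its posterior weight.
        return self.rng.choice(self.particles, p=self.weights)

    def update(self, clicked):
        # Reweight by the Bernoulli likelihood of the observed feedback...
        lik = self.particles if clicked else (1.0 - self.particles)
        self.weights *= lik
        self.weights /= self.weights.sum()
        # ...then resample to avoid weight degeneracy.
        idx = self.rng.choice(len(self.particles), len(self.particles), p=self.weights)
        self.particles = self.particles[idx]
        self.weights = np.full(len(self.particles), 1.0 / len(self.particles))

# Simulated loop with hypothetical click-through rates.
rng = np.random.default_rng(0)
true_ctr = [0.1, 0.25, 0.4]
arms = [ParticleTSArm(rng=rng) for _ in true_ctr]
for t in range(500):
    choice = int(np.argmax([a.sample() for a in arms]))
    arms[choice].update(rng.random() < true_ctr[choice])
print([float(np.mean(a.particles)) for a in arms])  # posterior mean CTR per arm
```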