Multi-Armed Bandits in Recommendation Systems: A survey of the state-of-the-art and future directions
2022 | DOI: 10.1016/j.eswa.2022.116669

Cited by 37 publications (26 citation statements) | References 69 publications
“…In [10], a framework of recommendation from the point of view of explainable recommendations is described. The interesting formulation of RSs as systems trying to solve a Multi-Armed Bandit problem is surveyed in [11]. Both [12] and [13] cover the usage of knowledge graphs in RSs.…”
Section: Related Work | Citation type: mentioning | Confidence: 99%
“…In contrast, our survey is organized from a more generic point of view, attempting to collate much of this information into a single, foundational overview. We attempt to highlight how the field has evolved over the years, such as to give a realistic and up-to-date view of the recommendation landscape.…”

Table of prior surveys referenced in this statement:

Survey  Year  Main focus
[6]     2018  Traditional and hybrid collaborative filtering
[8]     2018  Sequence-aware recommenders
[15]    2018  Ontology-based RSs for e-learning services
[7]     2019  Neural recommenders
[16]    2019  Research article recommendation systems
[17]    2019  Location recommendation for tourism
[10]    2020  Explainable recommendations
[18]    2020  Deep learning-based citation recommendation
[19]    2020  Tourism destination recommenders
[9]     2021  Session-based recommenders
[3]     2021  Item recommendation in implicit settings
[5]     2021  Neural recommenders with the accuracy goal
[14]    2021  Graph learning based recommenders
[20]    2021  Social tag-based recommenders
[11]    2022  Multi-Armed Bandit based recommenders
[21]    2022  Taxonomy of recommenders for research material

Section: Contributions Of This Survey | Citation type: mentioning | Confidence: 99%
“…Our method also builds on a large literature applying online experiments to learn about uncertainties in marketing (Schwartz et al, 2017;Misra et al, 2019;Liberali and Ferecatu, Forthcoming;Ye et al, 2020;Gardete and Santos, 2020;Aramayo et al, Forthcoming), operations management (Bertsimas and Mersereau, 2007;Bernstein et al, 2019;Johari et al, 2021), and computer science (Li et al, 2010;Gomez-Uribe and Hunt, 2015;Wu, 2018;Zhou, 2015;Silva et al, 2022;Gangan et al, 2021). Specifically, we focus on Multiarmed Bandit, a classic reinforcement learning problem (Katehakis and Veinott, 1987).…”
Section: Multi-Armed Bandits | Citation type: mentioning | Confidence: 99%
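For readers unfamiliar with the Multi-Armed Bandit problem that statement refers to, here is a minimal epsilon-greedy sketch in Python. The arm reward probabilities, the epsilon value, and the function name are illustrative assumptions, not details from any of the cited papers.

```python
import random

def epsilon_greedy_bandit(true_means, n_rounds=10_000, epsilon=0.1, seed=0):
    """Epsilon-greedy play over simulated Bernoulli arms (illustrative only)."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms    # number of pulls per arm
    values = [0.0] * n_arms  # running mean reward per arm
    total_reward = 0.0
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore: pick a random arm
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total_reward += reward
    return values, counts, total_reward

if __name__ == "__main__":
    est, pulls, total = epsilon_greedy_bandit([0.2, 0.5, 0.7])
    print("estimated means:", [round(v, 3) for v in est])
    print("pulls per arm:", pulls)
```

With these settings the best arm accumulates most of the pulls while a small, fixed fraction of rounds keeps exploring the others, which is the exploration-exploitation trade-off the cited literature studies in far more refined forms (UCB, Thompson sampling, contextual bandits).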
“…We call this step the collaborative filtering with attributes (CF-A), and it solves problems i) and iii). Second, given the preferences inferred from CF-A, bandit algorithms are used to balance recommendations predicated upon what is known about user preferences in the current period with novel recommendations intended to explore their preferences to improve recommendations in future periods (Gangan et al, 2021;Misra et al, 2019;Schwartz et al, 2017;Silva et al, 2022). This bandit learning step solves problem ii).…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
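The two-step pattern that statement describes, preference estimates from a collaborative filtering step feeding an exploration-aware recommender, can be sketched with Thompson sampling. The prior-seeding scheme, the names, and the simulated feedback below are assumptions for illustration only, not the cited authors' CF-A method.

```python
import random

def thompson_recommend(cf_scores, n_rounds=1000, prior_strength=5.0, seed=0):
    """Thompson sampling over items, with Beta priors seeded from CF estimates.

    cf_scores: dict of item -> estimated preference in [0, 1] from a CF step.
    Illustrative sketch; the cited papers' actual algorithms differ.
    """
    rng = random.Random(seed)
    # Seed Beta(alpha, beta) priors so each prior mean matches the CF estimate.
    alpha = {i: 1.0 + prior_strength * s for i, s in cf_scores.items()}
    beta = {i: 1.0 + prior_strength * (1.0 - s) for i, s in cf_scores.items()}
    for _ in range(n_rounds):
        # Sample a plausible preference per item; recommend the argmax.
        item = max(cf_scores, key=lambda i: rng.betavariate(alpha[i], beta[i]))
        # Stand-in for real user feedback in this simulation.
        clicked = rng.random() < cf_scores[item]
        if clicked:
            alpha[item] += 1.0
        else:
            beta[item] += 1.0
    return alpha, beta

if __name__ == "__main__":
    a, b = thompson_recommend({"item_a": 0.3, "item_b": 0.6, "item_c": 0.5})
    print({i: round(a[i] / (a[i] + b[i]), 3) for i in a})
```

Seeding the priors from the CF estimates lets the bandit exploit what the first step already learned, while the posterior sampling still explores items whose estimates remain uncertain, which is the role the quoted passage assigns to its bandit learning step.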