2021
DOI: 10.48550/arxiv.2108.04782
Preprint

Bandit Algorithms for Precision Medicine

Yangyi Lu, Ziping Xu, Ambuj Tewari
Cited by 3 publications (5 citation statements). References 74 publications (97 reference statements).
“…This setting has been extended to CBs (Kazerouni et al, 2017; Garcelon et al, 2020b; Lin et al, 2022) and reinforcement learning (Garcelon et al, 2020a). For a more careful discussion of the above settings we refer the reader to Lu et al (2021).…”
Section: A Related Work
Mentioning confidence: 99%
“…Literature shows that the lack of time is the main reason for not performing physical activities [24]. Studies indicate that even 30 minutes of movement per week can improve health [24]. Hence, since we had a healthy, younger population cohort, we chose 15 and 20 minutes as the baselines for the level 1 and level 2 groups, respectively.…”
Section: Exercise Organization
Mentioning confidence: 99%
“…Sequential decision-making is one of the essential components of modern artificial intelligence that considers the dynamics of the real world. By maintaining the trade-off between exploration and exploitation based on historical information, bandit algorithms aim to maximize the cumulative outcome of interest and are therefore popular for dynamic decision optimization in a wide variety of applications, such as precision medicine (Lu et al, 2021) and dynamic pricing (Turvey, 2017). A large literature on bandit optimization has been established over recent decades (see the reviews in Sutton and Barto, 2018; Lattimore and Szepesvári, 2020).…”
Section: Introduction
Mentioning confidence: 99%
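
The statement above describes the exploration-exploitation trade-off that bandit algorithms manage. As a minimal illustration of that idea, not the method of the cited preprint, the sketch below implements the classic UCB1 rule for a multi-armed bandit; the arm means, horizon, and `pull` callback are hypothetical placeholders.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Minimal UCB1 sketch: pull(arm) returns a reward in [0, 1].

    Plays each arm once, then repeatedly picks the arm with the
    highest upper confidence bound on its empirical mean reward.
    """
    counts = [0] * n_arms   # times each arm has been pulled
    sums = [0.0] * n_arms   # cumulative reward observed per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1     # initialization: try every arm once
        else:
            # UCB index = empirical mean + exploration bonus
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return sums, counts

# Hypothetical usage: three Bernoulli arms with unknown means.
means = [0.2, 0.5, 0.8]
rewards, pulls = ucb1(lambda a: float(random.random() < means[a]), 3, 1000)
print("pulls per arm:", pulls)  # the best arm should dominate over time
```

The exploration bonus shrinks as an arm accumulates pulls, so the algorithm exploits arms that look good while still occasionally sampling under-explored ones, which is exactly the trade-off the quoted passage refers to.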