Proceedings of the ACM Web Conference 2023
DOI: 10.1145/3543507.3583298
Learning Cooperative Oversubscription for Cloud by Chance-Constrained Multi-Agent Reinforcement Learning

Cited by 2 publications (1 citation statement)
References 17 publications
“…What is the post-convergence stability of COIN in the presence of uncertainty, as opposed to vanilla IL? Baselines: In this paper, we compare our approach against the following baselines: (1) grid search with different oversubscription probabilities, where all subscribers have the same oversubscription rate; (2) vanilla IL such as Behavior Cloning (BC); (3) policy-gradient reinforcement learning such as DDPG (Lillicrap et al. 2015); (4) multi-agent reinforcement learning (MA) (Sheng et al. 2022); and (5) IL with hard constraints. Most existing resource management or oversubscription frameworks in practice (Salahuddin, Al-Fuqaha, and Guizani 2016; Gosavi, Bandla, and Das 2002; Kumbhare et al. 2021; Shihab et al. 2019; Lawhead and Gosavi 2019) exploit traditional reinforcement learning as the optimization strategy.…”
Section: Methods
Confidence: 99%