2020
DOI: 10.1109/tccn.2020.2980529

Dynamic Scheduler Management Using Deep Learning

Abstract: The ability to manage the distributed functionality of large multi-vendor networks will be an important step towards ultra-dense 5G networks. Managing distributed scheduling functionality is particularly important, due to its influence over inter-cell interference and the lack of standardization for schedulers. In this paper, we formulate a method of managing distributed scheduling methods across a small cluster of cells by dynamically selecting schedulers to be implemented at each cell. We use deep reinforcem…
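The abstract describes selecting a scheduler per cell with deep reinforcement learning. As an illustrative simplification only (not the paper's implementation), the sketch below uses a tabular Q-learning agent standing in for the deep network: states are discretized channel-quality buckets, actions are scheduler choices. All names (`SCHEDULERS`, `SchedulerSelector`) and the reward shape are assumptions for illustration.

```python
import random

# Hypothetical stand-in for the paper's deep RL scheduler manager.
# A tabular Q-learning agent replaces the deep network; states are
# discretized channel-quality buckets, actions are scheduler choices.

SCHEDULERS = ["round_robin", "proportional_fair", "max_cqi"]

class SchedulerSelector:
    def __init__(self, n_states, alpha=0.1, gamma=0.9, eps=0.1):
        # Q-table over (state, action) pairs, initialized to zero.
        self.q = {(s, a): 0.0
                  for s in range(n_states)
                  for a in range(len(SCHEDULERS))}
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def select(self, state):
        # Epsilon-greedy: explore occasionally, otherwise exploit.
        if random.random() < self.eps:
            return random.randrange(len(SCHEDULERS))
        return max(range(len(SCHEDULERS)), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup.
        best_next = max(self.q[(next_state, b)] for b in range(len(SCHEDULERS)))
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

In the paper's setting the reward would come from measured throughput and QoS after running the chosen scheduler for an interval; here any scalar reward signal works.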

Cited by 4 publications (3 citation statements)
References 23 publications
“…Methodology – Parameters:
[17] RL to make a scheduling model for cloud computing – Makespan, total power cost in datacenters, energy consumption, migration time
[18] RL task scheduling with Q-learning – Makespan, total power cost, energy consumption
[19] Task scheduling optimization algorithm called MPSO – Makespan, energy consumption, packet delivery ratio, trust value
[20] FELARE – On-time task completion rate, energy saving
[21] RLFTWS – Makespan, resource usage
[22] WBCNF – Computation time of tasks
[23] BRCH-GWO – Makespan
[24] RELIEF – Communication delay, reliability
[25] PSO-based multipurpose algorithm – Turnaround time, makespan
[26] Regressive WO algorithm – Processing cost, load balancing tasks
[27] GO – Makespan, resource utilization
[28] Dynamic task scheduling algorithm based on an improved GA – Total execution time and resource utilization ratio
[29] PSO based on an AC algorithm – Task completion time, makespan
[30] DRL-based task scheduling – Makespan, computation time
[31] MRLCC, an approach for organizing tasks based on Meta RL – Energy consumption, total cost, makespan
[32] A novel DRL-based framework – Cost and throughput, makespan
[33] RLFTWS – Execution time, degree of imbalance
[15] DRL model – Response time, makespan, CPU utilization
[34] Deep reinforcement learning with PPSO – SLA violation, makespan
[35] DRL-Cloud, an NDR-learning-based RP and TS system – Estimated completion time, resource utilization
[36] Deep Q-network model – Degree of imbalance, cost, makespan
[37] SDM reinforcement learning – Energy consumption, resource utilization
[38] DRLHCE – Response time, degree of imbalance
[39] DQN – Makespan, total cost
[40] Reinforcement learning – Makespan
[41] DDDQN-TS – Task response time
[31] Q-learning – Makespan…”
Section: Author
Mentioning confidence: 99%
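Most rows in the table above optimize makespan. As a point of reference for that objective (not one of the cited methods), here is a minimal greedy longest-processing-time-first baseline of the kind RL schedulers are typically compared against; the function and variable names are illustrative.

```python
# Illustrative greedy baseline for the makespan objective: sort tasks
# by duration (longest first) and place each on the least-loaded machine.
# Not any cited method; a common non-RL comparison point.

def schedule_min_makespan(task_durations, n_machines):
    loads = [0.0] * n_machines   # accumulated work per machine
    assignment = []              # (duration, machine) pairs
    for d in sorted(task_durations, reverse=True):
        m = loads.index(min(loads))  # least-loaded machine
        assignment.append((d, m))
        loads[m] += d
    return max(loads), assignment
```

Makespan is simply the load of the busiest machine after all tasks are placed; the RL methods in the table learn placement policies that aim to beat heuristics like this one.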
“…Thus, these works did not address, or include, the O-RAN architecture in their studies. In addition, a wide range of DL-based studies have also been proposed to deal with the main RAN challenges in 4G/5G networks [26], [27], [28], [29]. However, these studies also did not consider the emerging O-RAN architecture, and hence need to be mapped onto, or integrated into, this architecture.…”
Section: A Review Of Related Work
Mentioning confidence: 99%
“…In addition, inside each PW, RL based on the asynchronous advantage actor-critic (A3C) algorithm is used to perform online resource scheduling. In [29], the authors addressed the distributed scheduling challenge in order to deal with inter-cell interference and with the lack of standardization for schedulers. They proposed a deep reinforcement learning (DRL) approach to dynamically select a suitable scheduler for each cluster of small cells, based on the channel quality and QoS constraints of the users.…”
Section: Literature Review
Mentioning confidence: 99%