2018
DOI: 10.1007/978-3-030-00916-8_12
A Reinforcement Learning Based Workflow Application Scheduling Approach in Dynamic Cloud Environment

Cited by 5 publications (2 citation statements)
References 10 publications
“…Methodology | Parameters

[17] RL-based scheduling model for cloud computing | Makespan, total power cost in datacenters, energy consumption, migration time
[18] RL task scheduling with Q-learning | Makespan, total power cost, energy consumption
[19] Task scheduling optimization algorithm called MPSO | Makespan, energy consumption, packet delivery ratio, trust value
[20] FELARE | On-time task completion rate, energy saving
[21] RLFTWS | Makespan, resource usage
[22] WBCNF | Computation time of tasks
[23] BRCH-GWO | Makespan
[24] RELIEF | Communication delay, reliability
[25] PSO-based multipurpose algorithm | Turnaround time, makespan
[26] Regressive WO algorithm | Processing cost, load balancing of tasks
[27] GO | Makespan, resource utilization
[28] Dynamic task scheduling algorithm based on an improved GA | Total execution time, resource utilization ratio
[29] PSO based on an AC algorithm | Task completion time, makespan
[30] DRL-based task scheduling | Makespan, computation time
[31] MRLCC, a task-scheduling approach based on meta-RL | Energy consumption, total cost, makespan
[32] A novel DRL-based framework | Cost, throughput, makespan
[33] RLFTWS | Execution time, degree of imbalance
[15] DRL model | Response time, makespan, CPU utilization
[34] Deep reinforcement learning with PPSO | SLA violation, makespan
[35] DRL-Cloud, a DRL-based resource provisioning (RP) and task scheduling (TS) system | Estimated completion time, resource utilization
[36] Deep Q-network model | Degree of imbalance, cost, makespan
[37] SDM reinforcement learning | Energy consumption, resource utilization
[38] DRLHCE | Response time, degree of imbalance
[39] DQN | Makespan, total cost
[40] Reinforcement learning | Makespan
[41] DDDQN-TS | Task response time
[31] Q-learning | Makespan…”
Section: Author
mentioning
confidence: 99%
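
Most entries in the table above are evaluated on makespan, the finish time of the last task in the schedule. As a point of reference, here is a minimal sketch of how makespan can be computed for a task-to-VM assignment; the task and VM names are hypothetical, and the independent-task simplification (no precedence constraints) is an assumption of this example, not of the cited works.

```python
from collections import defaultdict

def makespan(assignment, runtimes):
    # assignment: task -> vm; runtimes: (task, vm) -> seconds.
    # Assumes tasks on the same VM run back to back with no
    # precedence constraints (a simplification for real workflows).
    finish = defaultdict(float)
    for task, vm in assignment.items():
        finish[vm] += runtimes[(task, vm)]
    return max(finish.values(), default=0.0)

# Hypothetical example: three tasks on two VMs.
assignment = {"t1": "vm1", "t2": "vm1", "t3": "vm2"}
runtimes = {("t1", "vm1"): 4.0, ("t2", "vm1"): 2.0, ("t3", "vm2"): 5.0}
print(makespan(assignment, runtimes))  # vm1 finishes at 6.0 -> makespan 6.0
```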
“…Moreover, their reward is defined only from the runtime of the current task, whereas we also take the waiting times into account. The method presented in Reference 66 likewise leverages reinforcement learning to generate a workflow scheduling plan for execution in the cloud, but it first requires classifying the application tasks into different execution levels. These execution levels are then included in the state of the MDP formulation.…”
Section: Related Work
mentioning
confidence: 99%
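
The reward-design contrast drawn in this last statement (runtime-only versus runtime plus waiting time) can be made concrete with a small sketch. The following tabular Q-learning fragment is purely illustrative and assumes its own state encoding, VM actions, and reward terms; it is not the formulation of either cited method.

```python
import random
from collections import defaultdict

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate

# Q[(state, vm)] -> estimated return of placing the current task on
# that VM in that scheduler state (the state encoding is assumed here).
Q = defaultdict(float)

def reward(runtime, waiting_time):
    # Negative cost penalizing both execution time and queueing delay.
    # A runtime-only design, like the one the statement contrasts with,
    # would simply return -runtime.
    return -(runtime + waiting_time)

def choose_vm(state, vms):
    # Epsilon-greedy selection over the candidate VMs.
    if random.random() < EPSILON:
        return random.choice(vms)
    return max(vms, key=lambda vm: Q[(state, vm)])

def update(state, vm, r, next_state, vms):
    # Standard one-step Q-learning backup.
    best_next = max(Q[(next_state, v)] for v in vms)
    Q[(state, vm)] += ALPHA * (r + GAMMA * best_next - Q[(state, vm)])
```

Including waiting time in the reward steers the learned policy away from VMs with long queues even when their per-task runtimes look attractive.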