2021 IEEE 20th International Symposium on Network Computing and Applications (NCA)
DOI: 10.1109/nca53618.2021.9685413
Leveraging Reinforcement Learning for online scheduling of real-time tasks in the Edge/Fog-to-Cloud computing continuum

Abstract: The computing continuum model is a widely accepted and used approach that makes possible applications with very demanding requirements in terms of low latency and high computing power. In this three-layered model, the Fog or Edge layer can be considered the weak link in the chain: the computing nodes that compose it are generally heterogeneous, and their uptime cannot be compared with that offered by the Cloud. Taking into account these inexorable characteristics of the continuum, in this p…

Cited by 7 publications (1 citation statement) · References 15 publications
“…Other recent studies raise concerns about the dynamic variation of the metrics used by allocation policies, such as execution time or CPU utilization, which can vary significantly over time or across different nodes of the continuum; for this reason, they employ ML-based solutions to learn workload patterns [11]. In the computing continuum environment proposed in [12], each edge cluster contains a scheduler node in charge of receiving requests from clients and deciding whether to execute each task locally, in the cloud, or to reject it; the decision is made by a Reinforcement Learning (RL) engine which receives a positive reward for every task completed within its deadline. All the previous studies assume a global IoT workload balance.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
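
The scheduling decision described for [12] (choose local execution, cloud offloading, or rejection, with a positive reward for every task that meets its deadline) can be sketched as a small tabular Q-learning loop. The toy environment, state discretisation, and all parameter values below are illustrative assumptions for exposition only, not the implementation evaluated in the cited paper.

# Minimal sketch of the RL scheduling decision attributed to [12]: per incoming task,
# pick local / cloud / reject and get reward +1 only if the task finishes by its deadline.
# Environment model, state discretisation and parameters are hypothetical assumptions.
import random
from collections import defaultdict

ACTIONS = ("local", "cloud", "reject")

def simulate(action, task_load, local_queue):
    """Toy environment: returns (reward, completion_time). Purely illustrative."""
    deadline = 1.0
    if action == "reject":
        return 0.0, None                       # no reward, task not executed
    if action == "local":
        t = 0.2 * task_load + 0.1 * local_queue
    else:                                      # cloud: faster CPU, extra network delay
        t = 0.05 * task_load + 0.4
    t += random.uniform(0.0, 0.2)              # runtime variability across the continuum
    return (1.0 if t <= deadline else 0.0), t  # +1 only if the deadline is met

def state_of(task_load, local_queue):
    """Discretise the observation into a small state space."""
    return (min(int(task_load), 5), min(local_queue, 5))

q = defaultdict(float)                         # Q[(state, action)], defaults to 0.0
alpha, epsilon = 0.1, 0.1

for _ in range(20_000):
    task_load = random.uniform(0.5, 5.0)       # abstract "work units" of the task
    local_queue = random.randint(0, 5)         # tasks already queued on the edge node
    s = state_of(task_load, local_queue)

    # epsilon-greedy choice among the three scheduling actions
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: q[(s, act)])

    reward, _t = simulate(a, task_load, local_queue)
    # single-step (contextual-bandit style) update: no successor state is chained here
    q[(s, a)] += alpha * (reward - q[(s, a)])

# After training, the greedy policy maps each (load, queue) state to local/cloud/reject.
print(max(ACTIONS, key=lambda act: q[(state_of(1.0, 0), act)]))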