2022
DOI: 10.3390/app12147137
Solving Task Scheduling Problems in Dew Computing via Deep Reinforcement Learning

Abstract: Due to mobile and IoT devices’ ubiquity and their ever-growing processing potential, Dew computing environments have been emerging topics for researchers. These environments allow resource-constrained devices to contribute computing power to others in a local network. One major challenge in these environments is task scheduling: that is, how to distribute jobs across devices available in the network. In this paper, we propose to distribute jobs in Dew environments using artificial intelligence (AI). Specifical…

Cited by 8 publications (7 citation statements)
References 51 publications
“…Finally, in the context of job offloading in dew computing environments, prior research demonstrates that deep RL agents can offload jobs more effectively than traditional state-of-the-art heuristic methods, even when faced with previously unseen scenarios. The study conducted by [9] empirically proves that the agent learns to generalize in network environments that lack dynamic components when continuously exposed to new situations. This means that the agent can appropriately distribute sequences of jobs that arrive in patterns and sizes not encountered during training.…”
Section: RL Methods in Edge and Dew Computing
confidence: 86%
“…Furthermore, the agent can learn to effectively distribute jobs in fixed dew environments, significantly outperforming state-of-the-art heuristics regarding the number of instructions executed per second. This highlights the potential of RL in enhancing the efficiency and adaptability of job scheduling in dew computing environments, paving the way for more responsive and robust computing systems [9].…”
Section: RL Methods in Edge and Dew Computing
confidence: 96%
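The citation statements above describe an RL agent that learns which device should receive each job, eventually beating fixed heuristics on instructions executed per second. The following is a minimal, hedged sketch of that idea, not the paper's actual architecture (the paper uses deep RL; this uses a tabular, bandit-style Q-update for brevity). The device speeds, job sizes, and hyperparameters are invented for illustration.

```python
import random

# Hypothetical device speeds in millions of instructions per second (MIPS);
# these values are illustrative, not taken from the paper.
DEVICE_MIPS = [50.0, 120.0, 80.0]

def schedule_jobs(jobs, episodes=2000, epsilon=0.1, alpha=0.1, seed=0):
    """Learn which device to assign jobs to via a bandit-style Q-update.

    Each episode, a job (size in millions of instructions) arrives and the
    agent picks a device epsilon-greedily. The reward is the negative
    execution time, so faster devices accumulate higher action values.
    """
    rng = random.Random(seed)
    q = [0.0] * len(DEVICE_MIPS)  # one action value per device
    for _ in range(episodes):
        job = rng.choice(jobs)
        if rng.random() < epsilon:
            a = rng.randrange(len(DEVICE_MIPS))  # explore a random device
        else:
            a = max(range(len(DEVICE_MIPS)), key=lambda i: q[i])  # exploit
        reward = -job / DEVICE_MIPS[a]  # less negative = finished sooner
        q[a] += alpha * (reward - q[a])  # incremental Q-value update
    return q

q_values = schedule_jobs(jobs=[100.0, 500.0, 1000.0])
best = max(range(len(q_values)), key=lambda i: q_values[i])  # fastest device wins
```

After training, `best` points at the 120-MIPS device, mirroring the cited result in miniature: the agent discovers the throughput-maximizing assignment without being given the device speeds explicitly. The paper's deep RL agent generalizes further, handling job sequences and sizes unseen during training.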