2018
DOI: 10.3390/fi10070060

A Novel Two-Layered Reinforcement Learning for Task Offloading with Tradeoff between Physical Machine Utilization Rate and Delay

Abstract: Mobile devices can augment their capabilities with cloud resources in mobile cloud computing environments. This paper develops a novel two-layered reinforcement learning (TLRL) algorithm for task offloading on resource-constrained mobile devices. In contrast to the existing literature, the utilization rate of the physical machine and the delay of offloaded tasks are taken into account simultaneously by introducing a weighted reward. The high dimensionality of the state space and action space might a…
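The weighted reward mentioned in the abstract is not spelled out in the visible excerpt. A minimal sketch of one plausible form, assuming a single weight w that trades off physical machine utilization against normalized task delay (the weight, the normalization bound, and the function name are illustrative assumptions, not the authors' definition):

def weighted_reward(utilization: float, delay: float, w: float = 0.5,
                    max_delay: float = 1.0) -> float:
    """Hypothetical weighted reward trading PM utilization off against delay.

    utilization is assumed to lie in [0, 1]; delay is scaled by an assumed
    upper bound max_delay so both terms share the same range.
    """
    normalized_delay = min(delay / max_delay, 1.0)
    # Reward higher utilization, penalize longer delay.
    return w * utilization - (1.0 - w) * normalized_delay

With w = 1 the agent cares only about utilization; with w = 0 it minimizes delay alone.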

Cited by 20 publications (14 citation statements) | References 26 publications | Citing years: 2019–2023
“…8(d). Each mobile user has a first-in-first-out queue with limited buffer size to store tasks arriving as a Poisson process. (Figure panels: (a) Offloading cellular traffic to WLAN [93], (b) Offloading to a single MEC-enabled BS [94], (c) Offloading to one shared MEC server via multiple BSs [95], [96], [100], (d) Offloading to multiple MEC-enabled BSs [104], [105] and mobile cloudlets [106], [107].)…”
Section: B. Data and Computation Offloading (mentioning)
Confidence: 99%
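The cited system model gives each mobile user a first-in-first-out queue with a limited buffer and Poisson task arrivals. A minimal simulation sketch of such a bounded FIFO queue; the arrival rate, service rate, buffer size, and horizon below are illustrative assumptions rather than values from the cited works:

import random
from collections import deque

def simulate_fifo_queue(arrival_rate=2.0, service_rate=3.0,
                        buffer_size=10, horizon=1000.0, seed=0):
    """Bounded FIFO queue with Poisson arrivals; returns the drop fraction.

    Inter-arrival and service times are exponential; tasks that arrive
    when the buffer is full are dropped, matching the limited-buffer model.
    """
    rng = random.Random(seed)
    queue = deque()
    t, arrived, dropped = 0.0, 0, 0
    next_arrival = rng.expovariate(arrival_rate)
    next_departure = float("inf")
    while t < horizon:
        if next_arrival <= next_departure:          # next event: task arrival
            t = next_arrival
            arrived += 1
            if len(queue) < buffer_size:
                queue.append(t)
                if len(queue) == 1:                 # server was idle
                    next_departure = t + rng.expovariate(service_rate)
            else:
                dropped += 1                        # buffer full: task lost
            next_arrival = t + rng.expovariate(arrival_rate)
        else:                                       # next event: task departure
            t = next_departure
            queue.popleft()
            next_departure = (t + rng.expovariate(service_rate)
                              if queue else float("inf"))
    return dropped / arrived if arrived else 0.0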
“…In [128], a deep reinforcement learning based resource allocation (DRLRA) scheme is proposed to allocate computing and network resources adaptively, in order to reduce delay and balance resource use under a varying MEC environment. In [134], several RL methods, e.g., Q-learning, SARSA, Expected SARSA, and Monte Carlo, are applied to solve Fog-RAN resource allocation issues. The performance and applicability of the methods are verified.…”
Section: E. AIoT Application Layer - IoT Edge/Fog/Cloud Computing Systems (mentioning)
Confidence: 99%
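The RL methods named in [134] differ mainly in the bootstrap target of their value updates. A generic tabular sketch of those targets (state and action encodings, learning rate, discount factor, and exploration rate are assumptions, not details from the cited work):

from collections import defaultdict

Q = defaultdict(float)        # tabular action values keyed by (state, action)
alpha, gamma = 0.1, 0.9       # assumed learning rate and discount factor

def q_learning_update(s, a, r, s_next, actions):
    """Off-policy: bootstrap on the greedy next action."""
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def sarsa_update(s, a, r, s_next, a_next):
    """On-policy: bootstrap on the action actually taken next."""
    target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def expected_sarsa_update(s, a, r, s_next, actions, epsilon=0.1):
    """Bootstrap on the expectation under an epsilon-greedy policy."""
    values = [Q[(s_next, a2)] for a2 in actions]
    expectation = (1 - epsilon) * max(values) + epsilon * sum(values) / len(values)
    Q[(s, a)] += alpha * (r + gamma * expectation - Q[(s, a)])

Monte Carlo methods instead wait for the episode to end and update toward the full observed return rather than a bootstrapped target.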
“…3) LBS for Task Offloading: Quan et al. [164] proposed a two-layered RL algorithm for task offloading with a tradeoff between physical machine utilization rate and delay in M-CC. The k-nearest neighbors algorithm is used to divide the physical machines into clusters.…”
Section: LBS in CC and EC (mentioning)
Confidence: 99%
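The summary above implies a two-stage decision: k-nearest neighbors groups the physical machines into clusters, and the RL agent then chooses where to offload. A minimal two-layered epsilon-greedy selection sketch; the cluster map, Q-tables, and hyperparameters are illustrative assumptions, not Quan et al.'s implementation:

import random
from collections import defaultdict

# Assumed cluster map: cluster id -> physical machine ids (e.g., produced by k-NN grouping).
clusters = {0: ["pm0", "pm1"], 1: ["pm2", "pm3", "pm4"]}

Q_cluster = defaultdict(float)   # layer 1: value of offloading to a cluster
Q_machine = defaultdict(float)   # layer 2: value of a PM within the chosen cluster
epsilon = 0.1

def select_target(state, rng=random):
    """Two-layered action selection: pick a cluster, then a PM inside it."""
    # Layer 1: epsilon-greedy over clusters.
    if rng.random() < epsilon:
        cluster = rng.choice(list(clusters))
    else:
        cluster = max(clusters, key=lambda c: Q_cluster[(state, c)])
    # Layer 2: epsilon-greedy over the machines of the chosen cluster.
    machines = clusters[cluster]
    if rng.random() < epsilon:
        pm = rng.choice(machines)
    else:
        pm = max(machines, key=lambda m: Q_machine[(state, cluster, m)])
    return cluster, pm

Both layers would be trained with the weighted utilization/delay reward described in the abstract, which is what couples the two levels.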