Computation offloading extends cloud computing to the edge of the access network, close to users, bringing many benefits to terminal devices with limited battery and computational resources. Nevertheless, existing computation offloading approaches are difficult to apply in certain scenarios, such as densely distributed end-users combined with sparsely distributed network infrastructure. The technological revolution in the unmanned aerial vehicle (UAV) and chip industries has granted UAVs greater computing resources and promoted the emergence of UAV-assisted mobile edge computing (MEC), which can serve such scenarios. However, in an MEC system with multiple users and multiple servers, making sound offloading decisions and allocating system resources remain significant challenges. This paper studies the joint offloading decision and resource allocation problem in a UAV-assisted MEC environment with multiple users and servers. To ensure the quality of service for end-users, we set the weighted total cost of delay, energy consumption, and the size of discarded tasks as our optimization objective. We formulate this joint optimization problem as a Markov decision process and apply the soft actor–critic (SAC) deep reinforcement learning algorithm to optimize the offloading policy. Numerical simulation results show that the policy optimized by our proposed SAC-based dynamic computing offloading (SACDCO) algorithm effectively reduces the delay, energy consumption, and size of discarded tasks in the UAV-assisted MEC system. Compared with the fixed local-UAV scheme in the specific simulation setting, our approach reduces system delay and energy consumption by approximately 50% and 200%, respectively.
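The optimization objective described above, a weighted total cost over delay, energy consumption, and discarded task size, can be sketched as a simple scalarized cost function. The weights and the unit conventions below are illustrative assumptions; the abstract does not give the paper's exact coefficients or normalization.

```python
def weighted_total_cost(delay, energy, dropped_bits,
                        w_delay=0.4, w_energy=0.4, w_drop=0.2):
    """Toy weighted sum of delay (s), energy (J), and discarded task size.

    The weights are hypothetical placeholders; in an RL formulation this
    value would typically serve as the negative reward per decision step.
    """
    return w_delay * delay + w_energy * energy + w_drop * dropped_bits

# Example: a task finished in 0.5 s using 2.0 J with nothing discarded.
cost = weighted_total_cost(delay=0.5, energy=2.0, dropped_bits=0.0)
print(cost)
```

In an MDP formulation of the kind the abstract mentions, the agent would receive the negative of such a cost as its per-step reward, so that minimizing total cost and maximizing cumulative reward coincide.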
As a promising paradigm, computation offloading can move computing tasks to multi-access edge computing (MEC) servers, an appealing choice for resource-constrained end-devices seeking to reduce their computational burden. However, because resources are limited, a crucial research challenge for computation offloading is designing an appropriate offloading policy that determines which tasks should be offloaded under complex circumstances. In this paper, we study the offloading decision problem in a software-defined networking (SDN) driven MEC environment with multiple users and multiple servers. To ensure that end-users do not abuse the computing resources in the MEC system, we take the profit of the MEC servers as our optimization objective. We jointly optimize the selection of MEC servers, the size of the offloaded data, and the price of the MEC computing service to maximize this profit. However, given the dynamic and stochastic behavior of end-users, obtaining an optimal policy in such an MEC environment is challenging. We therefore combine deep reinforcement learning (DRL) with game theory. Specifically, we first propose a proximal policy optimization (PPO) reinforcement learning framework to handle the selection of MEC servers. Second, we formulate a two-step optimization problem to determine the size of the offloaded data and the pricing of the computing service; the optimal values of both are obtained by reaching the Nash equilibrium of the strategy game among end-users. Extensive simulation results show that our proposal outperforms existing solutions in convergence time and stability.
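The Nash-equilibrium step for the end-user strategy game can be illustrated with a toy best-response iteration. The quadratic utility below, where each user's benefit from offloading is discounted by the service price and by congestion from other users, is a hypothetical stand-in for the paper's actual game, chosen so the equilibrium has a closed form to check against.

```python
def best_response(others_total, price, benefit=10.0, congestion=1.0):
    """Best response for one user in a toy offloading game.

    Each user picks an offloading size x maximizing the illustrative utility
        u(x) = benefit*x - price*x - congestion*x*(x + others_total),
    whose first-order condition gives the closed form below (clipped at 0).
    """
    x = (benefit - price - congestion * others_total) / (2.0 * congestion)
    return max(x, 0.0)

def nash_equilibrium(n_users=3, price=2.0, iters=200):
    """Gauss-Seidel best-response dynamics until the strategies settle."""
    sizes = [1.0] * n_users
    for _ in range(iters):
        for i in range(n_users):
            others = sum(sizes) - sizes[i]
            sizes[i] = best_response(others, price)
    return sizes
```

For this symmetric game the fixed point is x* = (benefit - price) / (congestion * (n_users + 1)), so with the defaults each of the three users converges to an offloading size of 2.0. In the paper's setting, the analogous equilibrium would feed the second step of the two-step optimization, fixing the offloaded data sizes given a posted price.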