Summary: With the development of the Internet of Things (IoT), more and more computation-intensive tasks are generated by IoT devices. Due to the limited battery and computing capacity of IoT devices, these tasks can be offloaded to mobile edge computing (MEC) servers and the cloud for processing. However, because the channel states and the task generation process are dynamic, and the scale of the task offloading problem and its solution space grow rapidly, collaborative task offloading across MEC and cloud faces severe challenges. In this paper, we integrate two conflicting offloading goals: maximizing the ratio of tasks finished within a tolerable delay and minimizing the power consumption of devices. We formulate the task offloading problem to balance these two goals and then reformulate it as an MDP-based dynamic task offloading problem. We design a deep reinforcement learning (DRL)-based dynamic task offloading (DDTO) algorithm to solve this problem. The DDTO algorithm adapts to the dynamic and complex environment and adjusts the task offloading strategies accordingly. Experiments show that the DDTO algorithm converges quickly, and the results validate its effectiveness in balancing the finish ratio and the power consumption.
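To make the MDP view above concrete, the following is a minimal sketch of a dynamic offloading agent. It uses tabular Q-learning in place of the deep network of the DDTO algorithm, and the state discretization, action set, transition model, and reward weights are illustrative assumptions rather than the paper's exact formulation: the reward simply trades an on-time completion bonus against a device power cost.

```python
# Toy MDP for dynamic offloading: states discretize (channel quality, queue length),
# actions are {local, MEC, cloud}, reward = finish bonus - weighted power cost.
# Tabular Q-learning stands in for the DRL agent; all values are illustrative.
import random

N_CHANNEL, N_QUEUE = 3, 4            # discretized channel states and queue lengths
ACTIONS = ["local", "mec", "cloud"]  # offloading decisions
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1    # learning rate, discount, exploration rate

Q = {((c, q), a): 0.0 for c in range(N_CHANNEL)
     for q in range(N_QUEUE) for a in ACTIONS}

def step(state, action):
    """Toy transition: returns (next_state, reward)."""
    channel, queue = state
    finished = random.random() < {"local": 0.5, "mec": 0.7 + 0.1 * channel, "cloud": 0.8}[action]
    power = {"local": 1.0, "mec": 0.4, "cloud": 0.6}[action]
    reward = (1.0 if finished else 0.0) - 0.5 * power
    next_state = (random.randrange(N_CHANNEL),
                  max(0, min(N_QUEUE - 1, queue + random.choice([-1, 0, 1]))))
    return next_state, reward

state = (1, 1)
for _ in range(5000):
    action = (random.choice(ACTIONS) if random.random() < EPS
              else max(ACTIONS, key=lambda a: Q[(state, a)]))
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

print({a: round(Q[((1, 1), a)], 2) for a in ACTIONS})
```

In the full DDTO setting, the Q-table would be replaced by a neural network so that continuous channel states and the rapidly growing solution space can be handled.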
As the computing resources and battery capacity of mobile devices are usually limited, offloading the computation-intensive tasks generated by mobile devices to edge servers in mobile edge computing (MEC) is a feasible solution. In this paper, we study the multi-user multi-server task offloading problem in MEC systems, where all users compete for limited communication and computing resources. We formulate the offloading problem with the goal of minimizing the cost of the users and maximizing the profits of the edge servers, and we prove that the problem is NP-complete. We propose a hierarchical Economic and Efficient Task Offloading and Resource Purchasing (EETORP) framework that comprises a two-stage joint optimization process. In the first stage, we formulate the offloading problem as a multi-channel access game (MCA-Game) and theoretically prove the existence of at least one Nash equilibrium strategy in the MCA-Game. We then propose a game-based multi-channel access (GMCA) algorithm to obtain a Nash equilibrium strategy and analyze its worst-case performance guarantee. In the second stage, we model the computing resource allocation between the users and the edge servers with Stackelberg game theory and reformulate the problem as a resource pricing and purchasing game (PAP-Game). We theoretically prove incentive compatibility and the existence of a Stackelberg equilibrium, and propose a game-based pricing and purchasing (GPAP) algorithm. Finally, a series of parameter and comparison experiments validates the convergence and effectiveness of the GMCA and GPAP algorithms.
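To illustrate the first-stage game, the following is a minimal best-response sketch of a multi-channel access game. The congestion-style cost function, bandwidth values, and update order are illustrative assumptions standing in for the MCA-Game; the loop stops when no user can lower its cost by unilaterally switching channels, i.e., at a Nash equilibrium of this toy model.

```python
# Best-response dynamics for a toy multi-channel access game: each user's cost
# grows with the number of users sharing its channel, scaled by an assumed
# per-channel bandwidth. Parameter values are illustrative, not the paper's.
N_USERS, N_CHANNELS = 8, 3
bandwidth = [10.0, 6.0, 8.0]                        # assumed per-channel capacity
choice = [u % N_CHANNELS for u in range(N_USERS)]   # initial channel choices

def cost(channel, load):
    """Cost a user pays on `channel` when `load` users (including itself) share it."""
    return load / bandwidth[channel]

for it in range(100):
    changed = False
    for u in range(N_USERS):
        loads = [sum(1 for c in choice if c == ch) for ch in range(N_CHANNELS)]
        # cost of staying vs. cost of unilaterally switching to each other channel
        best_ch, best_cost = choice[u], cost(choice[u], loads[choice[u]])
        for ch in range(N_CHANNELS):
            if ch != choice[u]:
                new_cost = cost(ch, loads[ch] + 1)
                if new_cost < best_cost - 1e-9:
                    best_ch, best_cost = ch, new_cost
        if best_ch != choice[u]:
            choice[u], changed = best_ch, True
    if not changed:                                  # no user can improve: Nash equilibrium
        print(f"Converged after {it + 1} rounds: {choice}")
        break
```

The second-stage Stackelberg interaction would sit on top of such an equilibrium, with edge servers setting resource prices as leaders and users purchasing computing resources as followers.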
The space-air-ground integrated network (SAGIN) has become a crucial research direction for future wireless communications due to its ubiquitous coverage, rapid and flexible deployment, and multi-layer cooperation capabilities. However, integrating hierarchical federated learning (HFL) with edge computing and SAGINs remains a complex open issue. This paper proposes a novel framework for applying HFL in SAGINs, utilizing aerial platforms and low Earth orbit (LEO) satellites as edge servers and cloud servers, respectively, to provide multi-layer aggregation capabilities for HFL. The proposed system also considers inter-satellite links (ISLs), enabling satellites to exchange federated learning models with each other. Furthermore, we consider multiple different computational tasks that must be completed within a limited satellite service time. To maximize the convergence performance of all tasks while ensuring fairness, we use the distributional soft actor-critic (DSAC) algorithm to optimize resource allocation in the SAGIN and aggregation weights in HFL. Moreover, we address the efficiency issue of hybrid action spaces in deep reinforcement learning (DRL) through a decoupling and recoupling approach, and design a new dynamically adjusted reward function to ensure fairness among the multiple federated learning tasks. Simulation results demonstrate the superiority of the proposed algorithm, which consistently outperforms baseline approaches and offers a promising solution for highly complex optimization problems in SAGINs.
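To illustrate the decoupling step for the hybrid action space, the following sketch splits a single continuous actor output into a discrete part (a server assignment per task) and a continuous part (normalized aggregation weights). The shapes, the argmax/softmax mapping, and the variable names are illustrative assumptions, not the DSAC implementation used in the paper.

```python
# Decoupling a hybrid action: the actor emits one flat continuous vector; the
# first block is reshaped into per-task logits and discretized by argmax, the
# remainder is softmax-normalized into continuous aggregation weights.
import numpy as np

N_TASKS, N_SERVERS = 4, 3

def decouple(raw_action):
    """Split a flat continuous action into discrete assignments and continuous weights."""
    logits = raw_action[: N_TASKS * N_SERVERS].reshape(N_TASKS, N_SERVERS)
    assignment = logits.argmax(axis=1)                 # discrete: server index per task
    weights = raw_action[N_TASKS * N_SERVERS:]
    weights = np.exp(weights) / np.exp(weights).sum()  # continuous: normalized aggregation weights
    return assignment, weights

rng = np.random.default_rng(0)
raw = rng.normal(size=N_TASKS * N_SERVERS + N_TASKS)   # stand-in for the actor's output
assignment, weights = decouple(raw)
print("assignment:", assignment, "weights:", np.round(weights, 3))
```

The recoupling step would feed the realized discrete-continuous pair back through the critic so that the policy gradient reflects the jointly executed action rather than the raw continuous vector.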