Non-orthogonal multiple access (NOMA) and multiple-input multiple-output (MIMO) are considered promising technologies for meeting the massive access demands and high data-rate requirements of 5G wireless networks. This paper investigates the power allocation problem in a downlink MIMO-NOMA system, aiming to maximize energy efficiency while guaranteeing the quality of service of all users. Two deep reinforcement learning (DRL)-based frameworks, referred to as the multi-agent DDPG- and TD3-based power allocation frameworks, are proposed to solve this non-convex and dynamic optimization problem. In particular, taking the current channel conditions as input, each agent of the two multi-agent frameworks dynamically outputs the optimal power allocation policy for all users in its cluster via the DDPG or TD3 algorithm, and an additional actor network is added to the conventional multi-agent model to adjust the power budgets allocated to the clusters, improving the overall performance of the system. Finally, both frameworks refine the entire power allocation policy by updating the weights of the neural networks according to feedback from the system. Simulation results show that the proposed multi-agent DRL-based power allocation frameworks significantly improve the energy efficiency of the MIMO-NOMA system under various transmit power limits and minimum data-rate constraints compared with other approaches, including MIMO-OMA.
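As an illustrative sketch only (not the authors' implementation), a per-cluster agent of the kind described above can map observed channel gains to a per-user power split through a softmax-output actor, with DDPG-style Polyak soft updates of a target network. The network shape, function names, and parameter values here are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_actor(n_users, hidden=16):
    """Hypothetical single-hidden-layer actor for one cluster agent."""
    return {
        "W1": rng.normal(0, 0.1, (hidden, n_users)),
        "W2": rng.normal(0, 0.1, (n_users, hidden)),
    }

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def allocate_power(actor, channel_gains, p_cluster):
    """Map the current channel state to per-user powers summing to p_cluster."""
    h = np.tanh(actor["W1"] @ channel_gains)
    shares = softmax(actor["W2"] @ h)
    return p_cluster * shares

def soft_update(target, online, tau=0.005):
    """DDPG-style Polyak averaging of target-network weights."""
    for k in target:
        target[k] = (1 - tau) * target[k] + tau * online[k]
    return target

actor = init_actor(n_users=2)
target = {k: v.copy() for k, v in actor.items()}
p = allocate_power(actor, channel_gains=np.array([0.9, 0.3]), p_cluster=1.0)
target = soft_update(target, actor)
```

The softmax output guarantees the cluster's power budget is never exceeded, which is one simple way to enforce the per-cluster constraint inside the actor itself rather than by post-hoc clipping.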
The fog radio access network (F-RAN) has been regarded as a promising wireless access network architecture for fifth-generation (5G) and beyond systems, satisfying the growing demand for low-latency, high-throughput services by providing fog computing. However, because the cloud computing centre and the fog computing-enabled access points (F-APs) in an F-RAN have different computation and communication capabilities, an efficient computation offloading and resource allocation strategy is crucial to fully exploit the potential of the F-RAN system. In this paper, the authors investigate a decentralized, low-complexity deep reinforcement learning (DRL)-based framework for joint computation task offloading and resource allocation in the F-RAN, which supports assistive computing-enabled task offloading between F-APs. Considering the constraints on task latency, wireless transmission rate, transmission power, and computational resource capacity, the authors formulate a system processing efficiency maximization problem by jointly optimizing offloading mode selection, channel allocation, power control, and computation resource allocation in the F-RAN. To solve this non-linear and non-convex problem, they propose a federated DRL-based computation offloading and resource allocation algorithm that improves task processing efficiency and preserves privacy in the system, while significantly reducing the computational complexity and signalling overhead of the training process compared with a centralized learning-based method. Specifically, each local F-AP agent combines a dueling deep Q-network (DDQN) and a deep deterministic policy gradient (DDPG) network to handle the discrete-valued and continuous-valued action spaces, respectively.
Finally, the simulation results show that the proposed federated DRL algorithm can achieve significant performance improvements in terms of system processing efficiency and task latency compared with other benchmarks.
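The federated training loop described above can be sketched as a FedAvg-style aggregation of local agent weights, so that raw task and channel data never leave the F-APs. The agent structure, update rule, and names below are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(weights, step=0.01):
    """Stand-in for one round of local DDQN/DDPG training at an F-AP."""
    return {k: v - step * rng.normal(size=v.shape) for k, v in weights.items()}

def federated_average(local_models):
    """Server averages local weights; only parameters are exchanged,
    which is what keeps experience data private to each F-AP."""
    n = len(local_models)
    return {k: sum(m[k] for m in local_models) / n for k in local_models[0]}

# Hypothetical global model shared across 5 F-AP agents for 3 rounds.
global_model = {"W": np.zeros((4, 4)), "b": np.zeros(4)}
for _ in range(3):
    locals_ = [local_update({k: v.copy() for k, v in global_model.items()})
               for _ in range(5)]
    global_model = federated_average(locals_)
```

Relative to centralized training, each round exchanges only the weight dictionaries rather than every agent's replay experience, which is the source of the signalling-overhead reduction the abstract claims.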