To accommodate the explosive growth of wireless traffic, massive multiple-input multiple-output (MIMO) is regarded as one of the key enabling technologies for next-generation communication systems. In massive MIMO cellular networks, coordinated beamforming (CBF), which jointly designs the beamformers of multiple base stations (BSs), is an efficient method to enhance network performance. In this paper, we investigate the sum rate maximization problem in a massive MIMO mobile cellular network, where in each cell a multi-antenna BS serves multiple mobile users simultaneously via downlink beamforming. Although existing optimization-based CBF algorithms can provide near-optimal solutions, they require real-time and global channel state information (CSI) and incur high computational complexity, which makes them nearly impossible to apply in practical wireless networks, especially highly dynamic mobile cellular networks. Motivated by this, we propose a deep reinforcement learning (DRL)-based distributed dynamic coordinated beamforming (DDCBF) framework, which enables each BS to determine its beamformers using only local CSI and some historical information from other BSs. Besides, the beamformers can be computed with considerably lower computational complexity by exploiting neural networks and expert knowledge, i.e., a solution structure observed from the iterative procedure of the weighted minimum mean square error (WMMSE) algorithm. Finally, extensive numerical simulations validate the effectiveness and the lower computational complexity of the proposed DRL-based approach.
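For context on the expert knowledge this abstract refers to, below is a minimal single-cell MU-MISO sketch of the WMMSE iteration whose solution structure the DDCBF framework exploits. The function name, the matched-filter initialization, and the bisection-based power control are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def wmmse_beamformers(H, P, sigma2=1.0, n_iter=50):
    """Illustrative single-cell MU-MISO WMMSE sketch (not the paper's code).

    H : (K, M) array whose k-th row is the channel h_k^H
    P : total transmit power budget
    Returns the beamformer matrix V (M, K) and per-user rates.
    """
    K, M = H.shape
    # matched-filter initialization with an equal power split
    V = (H.conj().T / np.linalg.norm(H, axis=1)) * np.sqrt(P / K)
    for _ in range(n_iter):
        G = H @ V                                    # G[k, j] = h_k^H v_j
        pwr = np.sum(np.abs(G) ** 2, axis=1) + sigma2
        u = np.diag(G) / pwr                         # MMSE receive scalars
        w = 1.0 / (1.0 - np.abs(np.diag(G)) ** 2 / pwr)   # MSE weights
        A = (H.conj().T * (w * np.abs(u) ** 2)) @ H  # sum_j w_j |u_j|^2 h_j h_j^H
        # bisect the Lagrange multiplier mu to meet the sum-power constraint
        lo, hi = 0.0, 1e6
        for _ in range(60):
            mu = 0.5 * (lo + hi)
            V = np.linalg.solve(A + mu * np.eye(M), H.conj().T * (w * u))
            lo, hi = (mu, hi) if np.sum(np.abs(V) ** 2) > P else (lo, mu)
    G = H @ V
    sig = np.abs(np.diag(G)) ** 2
    rates = np.log2(1 + sig / (np.sum(np.abs(G) ** 2, axis=1) - sig + sigma2))
    return V, rates
```

The fixed structure of the beamformer update, a weighted regularized channel inversion parameterized by a few scalars per user, is the kind of low-dimensional solution structure a neural network can learn to predict instead of running the full iteration.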
Owing to its global coverage, on-demand access, and large capacity, low earth orbit (LEO) satellite communication (SatCom) has become a promising technology to support the Internet of Things (IoT). However, due to the scarcity of satellite spectrum and the high cost of designing satellites, it is difficult to launch a dedicated satellite for IoT communications. To facilitate IoT communications over LEO SatCom, in this paper we propose a cognitive LEO satellite system, where IoT users act as secondary users that access the legacy LEO satellites and cognitively use the spectrum of the legacy LEO users. Given the flexibility of code division multiple access (CDMA) and its wide use in LEO SatCom, we apply CDMA to support cognitive satellite IoT communications. For the cognitive LEO satellite system, we are interested in achievable rate analysis and resource allocation. Specifically, considering the randomness of the spreading codes, we use random matrix theory to analyze the asymptotic signal-to-interference-plus-noise ratios (SINRs) and accordingly obtain the achievable rates of both the legacy and IoT systems. The receive powers of the legacy and IoT transmissions are jointly allocated to maximize the sum rate of the IoT transmissions subject to the legacy satellite system's performance requirement and the maximum received power constraints. We prove that the sum rate of the IoT users is quasi-concave in the satellite terminal receive power, based on which the optimal receive powers of the two systems are derived. Finally, the proposed resource allocation scheme is verified by extensive simulations.
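The paper's exact asymptotic SINR expressions are not reproduced here; as an illustration of the kind of large-system analysis involved, the sketch below iterates the classical Tse-Hanly fixed point for MMSE-detected random-spreading CDMA with two user classes (legacy and IoT). The function name, power levels, and loads are all assumptions for illustration.

```python
import numpy as np

def mmse_sinr_large_system(powers, loads, sigma2=1.0, n_iter=500):
    """Tse-Hanly style fixed point for the asymptotic MMSE SINR in
    random-spreading CDMA (illustrative, not the paper's exact result).

    powers[c] : received power of user class c (e.g. legacy, IoT)
    loads[c]  : K_c / N, users of class c per spreading chip
    """
    beta = np.ones(len(powers))
    for _ in range(n_iter):
        for i, P in enumerate(powers):
            # effective interference of a power-Q class on a power-P user
            eff = sum(a * Q * P / (P + Q * beta[i])
                      for Q, a in zip(powers, loads))
            beta[i] = P / (sigma2 + eff)
    return beta

# hypothetical operating point: legacy users at power 2.0 with load 0.5,
# IoT users at power 1.0 with load 0.3 (assumed values)
beta = mmse_sinr_large_system([2.0, 1.0], [0.5, 0.3])
rates = np.log2(1 + beta)   # per-user achievable rates by class
```

Since the paper proves the IoT sum rate is quasi-concave in the satellite terminal receive power, the optimal operating point over such a fixed-point map could be located with a simple one-dimensional search such as bisection or golden-section.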
Large-dimensional (LD) random matrix theory (RMT), which originates from quantum physics, has shown tremendous capability in providing deep insights into large-dimensional systems. As we have entered an unprecedented era of massive data and large complex systems, RMT is expected to play an increasingly important role in the analysis and design of modern systems. In this paper, we review the key results of RMT and its applications in two emerging fields: wireless communications and deep learning. In wireless communications, we show that RMT can be exploited to design spectrum sensing algorithms for cognitive radio systems and to perform the design and asymptotic analysis of large communication systems. In deep learning, RMT can be utilized to analyze the Hessian, the input-output Jacobian, and the data covariance matrix of deep neural networks, thereby helping to understand and improve their convergence and learning speed. Finally, we highlight some challenges and opportunities in applying RMT to practical large-dimensional systems.
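As a concrete example of RMT-based spectrum sensing of the kind this survey covers, the sketch below implements a maximum-minimum eigenvalue detector: under noise only, the sample-covariance eigenvalues stay inside the Marchenko-Pastur bulk, while a primary signal pushes the largest eigenvalue out. The margin factor gamma is an assumed knob standing in for a properly calibrated (e.g. Tracy-Widom based) threshold.

```python
import numpy as np

def mme_detect(Y, gamma=1.1):
    """Max-min eigenvalue spectrum sensing sketch (illustrative).

    Y : (M, N) array of M receive streams over N samples.
    Noise only: lambda_max / lambda_min stays near the Marchenko-Pastur
    edge ratio ((1 + sqrt(M/N)) / (1 - sqrt(M/N)))**2; a signal component
    inflates lambda_max and pushes the ratio above it.
    """
    M, N = Y.shape
    eig = np.linalg.eigvalsh(Y @ Y.conj().T / N)   # ascending eigenvalues
    c = np.sqrt(M / N)
    return eig[-1] / eig[0] > gamma * ((1 + c) / (1 - c)) ** 2

rng = np.random.default_rng(0)
noise = rng.standard_normal((4, 1000))
print(mme_detect(noise))                 # noise only: expected False
s = 0.5 * np.ones((4, 1)) @ rng.standard_normal((1, 1000))
print(mme_detect(noise + s))             # rank-1 signal present: expected True
```

The appeal of such detectors is that the threshold follows from the matrix dimensions alone, with no knowledge of the noise power or the primary signal.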