In frequency division duplex (FDD) mode, the downlink channel state information (CSI) should be sent to the base station through feedback links so that the potential gains of massive multiple-input multiple-output (MIMO) can be realized. However, such a transmission is hindered by excessive feedback overhead. In this letter, we use deep learning technology to develop CsiNet, a novel CSI sensing and recovery mechanism that learns to effectively use channel structure from training samples. CsiNet learns a transformation from CSI to a near-optimal number of representations (or codewords) and an inverse transformation from codewords back to CSI. We perform experiments to demonstrate that CsiNet can recover CSI with significantly improved reconstruction quality compared with existing compressive sensing (CS)-based methods. Even at excessively low compression ratios, where CS-based methods cannot work, CsiNet retains effective beamforming gain.
Index Terms: massive MIMO, FDD, compressed sensing, deep learning, convolutional neural network.
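As a rough illustration of the encoder/decoder structure described above, the following PyTorch sketch compresses a truncated angular-delay CSI matrix into a short codeword and reconstructs it. The 32x32 input size, layer widths, and the simplified non-residual refinement stage are assumptions for illustration, not the exact CsiNet architecture of the letter.

```python
# Minimal sketch of a CsiNet-style encoder/decoder (sizes and layers are assumptions).
import torch
import torch.nn as nn

class CsiNetSketch(nn.Module):
    def __init__(self, h=32, w=32, codeword_dim=64):
        super().__init__()
        in_feats = 2 * h * w  # real and imaginary parts of the truncated CSI matrix
        # Encoder: one conv layer to extract features, then a dense layer to the codeword
        self.enc_conv = nn.Sequential(
            nn.Conv2d(2, 2, kernel_size=3, padding=1), nn.BatchNorm2d(2), nn.LeakyReLU(0.3))
        self.enc_fc = nn.Linear(in_feats, codeword_dim)
        # Decoder: dense layer back to the CSI shape, then a refinement conv stage
        self.dec_fc = nn.Linear(codeword_dim, in_feats)
        self.refine = nn.Sequential(
            nn.Conv2d(2, 8, 3, padding=1), nn.BatchNorm2d(8), nn.LeakyReLU(0.3),
            nn.Conv2d(8, 16, 3, padding=1), nn.BatchNorm2d(16), nn.LeakyReLU(0.3),
            nn.Conv2d(16, 2, 3, padding=1), nn.Sigmoid())
        self.h, self.w = h, w

    def forward(self, x):               # x: (batch, 2, h, w), values scaled to [0, 1]
        z = self.enc_conv(x).flatten(1)
        codeword = self.enc_fc(z)       # compressed representation fed back to the BS
        y = self.dec_fc(codeword).view(-1, 2, self.h, self.w)
        return self.refine(y)           # reconstructed CSI

model = CsiNetSketch(codeword_dim=64)   # compression ratio 1/32 for a 2x32x32 input
loss = nn.MSELoss()(model(torch.rand(4, 2, 32, 32)), torch.rand(4, 2, 32, 32))
```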
Large intelligent surface (LIS)-assisted wireless communications have drawn attention worldwide. With low-cost LISs deployed on building walls, signals can be reflected and steered along desired directions by controlling the LIS phase shifts, thereby providing supplementary links for wireless communication systems. In this study, we evaluate the performance of an LIS-assisted large-scale antenna system by formulating a tight approximation of the ergodic capacity, and we investigate the effect of the phase shifts on the ergodic capacity in different propagation scenarios. In particular, we propose an optimal phase shift design based on the ergodic capacity approximation and statistical channel state information. Furthermore, we derive the number of quantization bits the LIS requires to guarantee an acceptable capacity degradation. Numerical results show that the proposed phase shift design achieves the maximum ergodic capacity and that a 2-bit quantizer is sufficient to ensure a capacity degradation of no more than 1 bit/s/Hz.
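To make the quantization conclusion concrete, the toy NumPy experiment below co-phases the reflected paths of a single-antenna link through an N-element LIS and then quantizes each phase shift to b bits. The single-antenna setup, N = 64, and the SNR value are assumptions and far simpler than the large-scale antenna system and statistical-CSI design analyzed in the paper; the numbers only illustrate the trend, not the paper's results.

```python
# Toy illustration: capacity of a single LIS-reflected link with b-bit phase shifts.
import numpy as np

rng = np.random.default_rng(0)
N, snr = 64, 10.0                                  # reflecting elements, transmit SNR (linear)
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # Tx -> LIS
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # LIS -> Rx

theta_opt = -np.angle(h * g)                       # co-phase every reflected path

def capacity(theta):
    gain = np.abs(np.sum(h * g * np.exp(1j * theta))) ** 2
    return np.log2(1 + snr * gain)

print(f"continuous phases : {capacity(theta_opt):.2f} bit/s/Hz")
for b in (1, 2, 3):
    step = 2 * np.pi / 2 ** b
    theta_q = np.round(theta_opt / step) * step    # uniform b-bit phase quantizer
    print(f"{b}-bit quantizer   : {capacity(theta_q):.2f} bit/s/Hz")
```

In this toy setting, co-phasing is the capacity-maximizing choice for a single reflected link, and the 2-bit quantizer already keeps the loss well under 1 bit/s/Hz, consistent with the conclusion stated above.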
Channel estimation is very challenging when the receiver is equipped with a limited number of radio-frequency (RF) chains in beamspace millimeter-wave massive multiple-input multiple-output (MIMO) systems. To solve this problem, we exploit a learned denoising-based approximate message passing (LDAMP) network. This neural network can learn the channel structure and estimate the channel from a large amount of training data. Furthermore, we provide an analytical framework for the asymptotic performance of the channel estimator. Based on our analysis and simulation results, the LDAMP network significantly outperforms state-of-the-art compressed sensing-based algorithms even when the receiver is equipped with a small number of RF chains.
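The sketch below shows the denoising-based AMP iteration that LDAMP unfolds into network layers, with a simple soft-threshold denoiser standing in for the learned denoiser of the paper. The real-valued toy channel, dimensions, and noise level are assumptions; the sketch only illustrates the iteration structure (denoising step plus Onsager correction), not the learned estimator or its performance.

```python
# Sketch of a denoising-based AMP iteration with a stand-in denoiser.
import numpy as np

rng = np.random.default_rng(1)
n, m, k, iters = 256, 64, 8, 10                     # beamspace dim, measurements, sparsity, iterations
A = rng.standard_normal((m, n)) / np.sqrt(m)        # measurement (combining) matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def denoise(r, sigma):                              # stand-in for the learned denoiser
    return np.sign(r) * np.maximum(np.abs(r) - sigma, 0.0)

x, z = np.zeros(n), y.copy()
for _ in range(iters):
    sigma = np.linalg.norm(z) / np.sqrt(m)          # per-iteration noise-level estimate
    r = x + A.T @ z
    x_new = denoise(r, sigma)
    # Monte-Carlo estimate of the denoiser divergence for the Onsager correction
    eps = 1e-3 * (np.linalg.norm(r) / np.sqrt(n) + 1e-12)
    b = rng.standard_normal(n)
    div = b @ (denoise(r + eps * b, sigma) - denoise(r, sigma)) / eps
    z = y - A @ x_new + (z * div) / m               # residual with Onsager term
    x = x_new

print("NMSE (dB):", 10 * np.log10(np.sum((x - x_true) ** 2) / np.sum(x_true ** 2)))
```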
In this letter, we derive an approximate analytical expression for the uplink achievable rate of a massive multiple-input multiple-output (MIMO) system in which finite-precision analog-to-digital converters (ADCs) and the common maximal-ratio combining (MRC) technique are used at the receiver. To obtain this expression, we model the quantizer with the additive quantization noise model (AQNM). Using the obtained expression, we show that low-resolution ADCs reduce the achievable rate, but the performance loss can be compensated by increasing the number of receive antennas. In addition, we investigate the relation between the number of antennas and the ADC resolution, as well as the power-scaling law. These results support the feasibility of equipping practical massive MIMO systems with highly economical, low-resolution ADCs.
Index Terms: massive MIMO, quantization, AQNM, uplink rate, MRC.
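For reference, the AQNM mentioned above linearizes the quantizer as shown below. The notation is an assumption on our part rather than the letter's: y = sqrt(p_u) G x + n is the unquantized uplink receive vector, and rho is the ADC distortion factor (rho = 1 - 2/pi, approximately 0.3634, for 1-bit quantization; values for higher resolutions are tabulated in the AQNM literature).

```latex
\begin{align}
  \mathbf{y}_q &= \alpha \mathbf{y} + \mathbf{n}_q
              = \alpha \sqrt{p_u}\,\mathbf{G}\mathbf{x} + \alpha\mathbf{n} + \mathbf{n}_q,
              \qquad \alpha = 1 - \rho, \\
  \mathbf{R}_{\mathbf{n}_q} &= \alpha(1-\alpha)\,
      \operatorname{diag}\!\big(\mathbb{E}\big[\mathbf{y}\mathbf{y}^{H}\big]\big).
\end{align}
```

Under this model the desired signal after MRC benefits from the full array gain, which is why adding receive antennas can offset the rate loss caused by a small alpha, as noted in the abstract.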
Machine learning (ML) has been widely applied to the upper layers of wireless communication systems for various purposes, such as the deployment of cognitive radio and communication networks. However, its application to the physical layer is hampered by sophisticated channel environments and the limited learning ability of conventional ML algorithms. Deep learning (DL) has recently been applied in many fields, such as computer vision and natural language processing, given its expressive capacity and convenient optimization capability. The potential application of DL to the physical layer has also been increasingly recognized because of the new features of future communications, such as complex scenarios with unknown channel models and requirements for high-speed, accurate processing; these features challenge conventional communication theories. This paper presents a comprehensive overview of emerging studies on DL-based physical layer processing, including leveraging DL to redesign individual modules of the conventional communication system (for modulation recognition, channel decoding, and detection) and to replace the communication system with a radically new architecture based on an autoencoder. These DL-based methods show promising performance improvements but have certain limitations, such as a lack of solid analytical tools and the use of architectures specifically designed for communication and implementation research, thereby motivating future research in this field.
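As a concrete example of the autoencoder-based architecture mentioned above, the PyTorch sketch below trains a transmitter/receiver pair end to end over an AWGN channel. The (n, k) = (7, 4) block size, layer widths, and training SNR are assumptions commonly used in such demonstrations, not values taken from the paper.

```python
# Minimal sketch of an end-to-end autoencoder communication system over AWGN.
import torch
import torch.nn as nn

k, n, snr_db = 4, 7, 7.0                  # 2^k messages, n channel uses, training SNR (assumed)
M = 2 ** k

class AEComm(nn.Module):
    def __init__(self):
        super().__init__()
        self.tx = nn.Sequential(nn.Linear(M, M), nn.ReLU(), nn.Linear(M, n))   # transmitter
        self.rx = nn.Sequential(nn.Linear(n, M), nn.ReLU(), nn.Linear(M, M))   # receiver

    def forward(self, one_hot):
        x = self.tx(one_hot)
        x = (n ** 0.5) * x / x.norm(dim=1, keepdim=True)    # average power constraint
        noise_std = 10 ** (-snr_db / 20)
        y = x + noise_std * torch.randn_like(x)             # AWGN channel layer
        return self.rx(y)                                   # logits over the M messages

model = AEComm()
msgs = torch.randint(0, M, (32,))
logits = model(nn.functional.one_hot(msgs, M).float())
loss = nn.CrossEntropyLoss()(logits, msgs)                  # end-to-end training loss
```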