<p>Reliable and fast channel estimation is crucial for next-generation wireless networks supporting a wide range of vehicular and low-latency services. Recently, deep learning (DL) based channel estimation has been explored as an efficient alternative to the conventional least-squares (LS) and linear minimum mean square error (LMMSE) approaches. Unlike LMMSE, DL methods do not need prior knowledge of channel statistics. However, most of these DL approaches have not been realized on a system-on-chip (SoC), and a preliminary study shows that their complexity exceeds that of the entire physical layer (PHY). The high latency of DL is another concern. This paper considers the design and implementation of deep neural network (DNN) augmented LS-based channel estimation (LSDNN) for a preamble-based orthogonal frequency-division multiplexing (OFDM) PHY on a Zynq multiprocessor SoC (ZMPSoC). We demonstrate the gain in performance compared to the conventional LS and LMMSE approaches. Via software-hardware co-design, word-length optimization, and reconfigurable architectures, we demonstrate the superiority of the LSDNN over LS and LMMSE for a wide range of signal-to-noise ratios (SNRs), numbers of pilots, preamble types, and wireless channels. Further, we evaluate the performance, power, and area (PPA) of the LS and LSDNN application-specific integrated circuit (ASIC) implementations in 45 nm technology and demonstrate that word-length optimization can substantially improve PPA for the proposed architecture.</p>
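<p>For context, the two conventional estimators compared against above have standard closed forms. Given a known diagonal preamble matrix $X$, a received frequency-domain preamble $y$, channel autocorrelation matrix $R_{hh}$, and noise variance $\sigma^2$, they read (a sketch in textbook notation; the paper's exact formulation may differ):</p>
$$\hat{h}_{\mathrm{LS}} = X^{-1} y, \qquad \hat{h}_{\mathrm{LMMSE}} = R_{hh}\left(R_{hh} + \sigma^2 \left(X X^{H}\right)^{-1}\right)^{-1} \hat{h}_{\mathrm{LS}}.$$
<p>The LMMSE estimator's dependence on $R_{hh}$ and $\sigma^2$ is precisely the prior knowledge of channel statistics that the DL-based approach avoids.</p>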
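<p>To make the LSDNN idea concrete, the sketch below computes a per-subcarrier LS estimate from a known preamble and then refines it with a small fully connected network operating on stacked real/imaginary parts. The layer widths, ReLU activations, and random (untrained) weights are illustrative assumptions; the paper's actual DNN architecture and training procedure are not reproduced here.</p>
<pre><code>import numpy as np

def ls_estimate(y_pilot, x_pilot):
    """Per-subcarrier least-squares channel estimate: H_hat = Y / X."""
    return y_pilot / x_pilot

def dnn_refine(h_ls, weights, biases):
    """Refine the LS estimate with a small fully connected network.

    Operates on stacked real/imaginary parts, as is common for
    complex-valued channel data. Layer sizes and ReLU activations are
    illustrative assumptions, not the paper's exact architecture.
    """
    z = np.concatenate([h_ls.real, h_ls.imag])  # (2K,) real input vector
    for W, b in zip(weights[:-1], biases[:-1]):
        z = np.maximum(0.0, W @ z + b)          # hidden layers: ReLU
    z = weights[-1] @ z + biases[-1]            # linear output layer
    k = z.size // 2
    return z[:k] + 1j * z[k:]                   # back to a complex estimate

# --- Toy usage with random (untrained) weights on K = 64 subcarriers ---
rng = np.random.default_rng(0)
K = 64
x_pilot = np.exp(1j * np.pi * rng.integers(0, 4, K) / 2)   # QPSK preamble
h_true = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=K) + 1j * rng.normal(size=K))
y = h_true * x_pilot + noise

h_ls = ls_estimate(y, x_pilot)
sizes = [2 * K, 128, 2 * K]                     # hypothetical layer widths
weights = [rng.normal(scale=0.05, size=(m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
h_dnn = dnn_refine(h_ls, weights, biases)
print("LS MSE :", np.mean(np.abs(h_ls - h_true) ** 2))
print("DNN out:", h_dnn.shape, "(untrained, so not yet an improvement)")
</code></pre>
<p>In a trained deployment, the weights would be learned offline (e.g., by minimizing the MSE against known channels) and then quantized via the word-length optimization the paper describes before being mapped onto the SoC.</p>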