We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families at short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random, codes. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier. We introduce the metric normalized validation error (NVE) in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity.
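As a rough illustration, the NVE can be thought of as the BER of the neural network decoder (NND) normalized by the BER of the MAP decoder, averaged over a set of validation SNR points, so that a value close to 1 indicates near-MAP performance. The snippet below is a minimal NumPy sketch of this computation, not the paper's code; the names (nve, ber_nnd, ber_map) and the example numbers are our assumptions, and the paper's exact definition additionally parameterizes the NVE by the training SNR.

```python
import numpy as np

def nve(ber_nnd, ber_map):
    """Normalized validation error: mean ratio of the NN decoder's BER to the
    MAP decoder's BER over a set of validation SNR points. A value close to 1
    indicates near-MAP performance; larger values indicate degradation.
    (Illustrative sketch; argument names and shapes are our assumptions.)"""
    ber_nnd = np.asarray(ber_nnd, dtype=float)
    ber_map = np.asarray(ber_map, dtype=float)
    return np.mean(ber_nnd / ber_map)

# Example: BERs measured at S = 4 validation SNR points
ber_map = [1e-2, 3e-3, 6e-4, 1e-4]
ber_nnd = [1.1e-2, 3.4e-3, 7.5e-4, 1.4e-4]
print(nve(ber_nnd, ber_map))  # ~1.2 -> slightly worse than MAP on average
```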
We extend the idea of end-to-end learning of communications systems through deep neural network (NN)-based autoencoders to orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP). Our implementation has the same benefits as a conventional OFDM system, namely single-tap equalization and robustness against sampling synchronization errors, which turned out to be one of the major challenges in previous single-carrier implementations. This enables reliable communication over multipath channels and makes the communication scheme suitable for commodity hardware with imprecise oscillators. We show that the proposed scheme can be realized with state-of-the-art deep learning software libraries, as the transmitter and receiver consist solely of differentiable layers required for gradient-based training. We compare the performance of the autoencoder-based system against that of a state-of-the-art OFDM baseline over frequency-selective fading channels. Finally, the impact of a non-linear amplifier is investigated, and we show that the autoencoder inherently learns how to deal with such hardware impairments.
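To make the role of the fixed OFDM layers concrete, the following is a minimal NumPy sketch (not the paper's implementation, which embeds these operations as differentiable layers inside a deep learning framework) of the two blocks that wrap the learned transmitter and receiver: an IFFT with CP insertion on the transmit side, and CP removal, FFT, and single-tap per-subcarrier equalization on the receive side. The function names and the toy channel are our assumptions.

```python
import numpy as np

def ofdm_modulate(x_freq, cp_len):
    """IFFT over subcarriers followed by cyclic-prefix insertion.
    x_freq: complex symbols on N subcarriers (output of the NN transmitter)."""
    x_time = np.fft.ifft(x_freq)
    return np.concatenate([x_time[-cp_len:], x_time])  # prepend CP

def ofdm_demodulate(y_time, cp_len, h_freq):
    """CP removal, FFT, and single-tap (per-subcarrier) equalization."""
    y = y_time[cp_len:]                 # drop cyclic prefix
    y_freq = np.fft.fft(y)
    return y_freq / h_freq              # one complex tap per subcarrier

# Toy example: N = 64 subcarriers, 3-tap multipath channel, CP of length 16
N, cp_len = 64, 16
x_freq = np.exp(1j * 2 * np.pi * np.random.rand(N))  # unit-modulus symbols
h = np.array([1.0, 0.5, 0.25])                       # channel impulse response
tx = ofdm_modulate(x_freq, cp_len)
rx = np.convolve(tx, h)[:len(tx)]                    # linear conv ~ circular thanks to CP
x_hat = ofdm_demodulate(rx, cp_len, np.fft.fft(h, N))
print(np.max(np.abs(x_hat - x_freq)))                # ~0: channel reduced to one tap per subcarrier
```

Because the CP is at least as long as the channel impulse response, the linear convolution acts as a circular one, which is why a single complex tap per subcarrier suffices at the receiver.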
The training complexity of deep learning-based channel decoders scales exponentially with the codebook size and therefore with the number of information bits. Thus, neural network decoding (NND) is currently only feasible for very short block lengths. In this work, we show that the conventional iterative decoding algorithm for polar codes can be enhanced when sub-blocks of the decoder are replaced by neural network (NN)-based components. To this end, we partition the encoding graph into smaller sub-blocks and train them individually, closely approaching maximum a posteriori (MAP) performance per sub-block. These blocks are then connected via the remaining conventional belief propagation decoding stage(s). The resulting decoding algorithm is non-iterative and inherently enables a high level of parallelization, while showing a competitive bit error rate (BER) performance. We examine the degradation caused by partitioning and compare the resulting decoder to state-of-the-art polar decoders such as successive cancellation list and belief propagation decoding.
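For intuition about the structure being partitioned, the snippet below is a small NumPy sketch (our own, not the paper's code) of the polar butterfly encoding graph together with a helper that returns the boundaries of P sub-blocks of length N/P. In the partitioned scheme, each such sub-block would be handled by an individually trained NN decoder while the remaining log2(P) graph stages stay conventional; the NN decoders themselves are omitted here, and the names (polar_transform, partition) are hypothetical.

```python
import numpy as np

def polar_transform(u):
    """Polar encoding x = u * F^{(log2 N)} over GF(2) via the butterfly graph."""
    x = np.array(u, dtype=int) % 2
    n = int(np.log2(len(x)))
    for stage in range(n):                      # log2(N) XOR stages
        step = 1 << stage
        for i in range(0, len(x), 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]
    return x

def partition(N, P):
    """Index ranges of the P sub-blocks of length N/P. In the partitioned
    scheme, each sub-block would be decoded by a small NN trained on its
    constituent code, while the remaining log2(P) graph stages stay
    conventional (here we only return the partition boundaries)."""
    size = N // P
    return [(i * size, (i + 1) * size) for i in range(P)]

# Toy example: length-16 codeword split into P = 4 sub-blocks of length 4
u = np.random.randint(0, 2, 16)                 # frozen-bit handling omitted
x = polar_transform(u)
print(partition(16, 4))                         # [(0, 4), (4, 8), (8, 12), (12, 16)]
```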
We show that the performance of iterative belief propagation (BP) decoding of polar codes can be enhanced by decoding over different, carefully chosen factor graph realizations. With a genie-aided stopping condition, it can achieve the performance of successive cancellation list (SCL) decoding, which has been shown to reach the maximum likelihood (ML) bound provided that the list size is sufficiently large. The proposed decoder is based on different realizations of the polar code factor graph with randomly permuted stages during decoding. Additionally, a different way of visualizing the polar code factor graph is presented, facilitating the analysis of the underlying factor graph and the comparison of different graph permutations. In our proposed decoder, a high-rate cyclic redundancy check (CRC) code is concatenated with the polar code and used as an iteration stopping criterion (i.e., the genie), which even outperforms the SCL decoder of the plain polar code (without the CRC aid). Although our permuted factor graph-based decoder does not outperform the CRC-aided SCL decoder, it achieves, to the best of our knowledge, the best performance of all iterative polar decoders presented thus far.
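As a structural illustration of the decoding loop (not the paper's implementation), the following Python skeleton draws random permutations of the n = log2(N) factor-graph stages and stops as soon as the CRC of the decoded word checks. The BP run itself is stubbed out as a simple hard decision, and the toy CRC-3 polynomial as well as the names (bp_decode, permuted_bp_decode) are our assumptions.

```python
import numpy as np

def crc_remainder(bits, poly=(1, 0, 1, 1)):
    """GF(2) polynomial division remainder; an all-zero remainder means the
    CRC check passes. `poly` is a toy CRC-3 generator, not the paper's CRC."""
    bits = list(bits)
    for i in range(len(bits) - len(poly) + 1):
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= p
    return bits[-(len(poly) - 1):]

def bp_decode(llr, stage_order):
    """Placeholder for one BP run on the factor graph with the given stage
    permutation; a real implementation would iterate message passing over the
    permuted stages. Here we simply hard-decide the channel LLRs."""
    return (np.asarray(llr) < 0).astype(int)

def permuted_bp_decode(llr, n_stages, max_trials=10, rng=np.random.default_rng(0)):
    """Try BP on randomly permuted factor-graph realizations until the CRC of
    the decoded word checks (the stopping criterion), or give up."""
    for _ in range(max_trials):
        perm = list(rng.permutation(n_stages))  # one factor-graph realization
        u_hat = bp_decode(llr, perm)
        if not any(crc_remainder(u_hat)):       # CRC satisfied -> stop
            return u_hat, perm
    return u_hat, perm                          # last attempt if none passed

# Toy call: N = 8 -> n = 3 stages; random channel LLRs
llr = np.random.default_rng(1).normal(size=8)
u_hat, perm = permuted_bp_decode(llr, n_stages=3)
```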