GLOBECOM 2017 - 2017 IEEE Global Communications Conference
DOI: 10.1109/glocom.2017.8254811

Scaling Deep Learning-Based Decoding of Polar Codes via Partitioning

Abstract: The training complexity of deep learning-based channel decoders scales exponentially with the codebook size and therefore with the number of information bits. Thus, neural network decoding (NND) is currently only feasible for very short block lengths. In this work, we show that the conventional iterative decoding algorithm for polar codes can be enhanced when sub-blocks of the decoder are replaced by neural network (NN) based components. Thus, we partition the encoding graph into smaller sub-blocks and train t…
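The core idea in the abstract is to keep each NN's learning problem small by decoding only short sub-blocks, whose full codebooks (2^k_i codewords per sub-block) can still be enumerated during training. Below is a minimal sketch of that ingredient, assuming PyTorch: it enumerates the codebook of a hypothetical (8, 4) polar sub-code and trains a small fully connected decoder on noisy BPSK/AWGN observations. The information-bit positions, layer sizes, SNR, and training loop are illustrative assumptions, not the paper's actual partitioning or hyperparameters.

```python
# Minimal sketch: NN decoder for one short polar sub-block (assumed setup,
# not the authors' implementation).
import itertools
import numpy as np
import torch
import torch.nn as nn

N, K = 8, 4                        # sub-block length and number of info bits
n = int(np.log2(N))
F = np.array([[1, 0], [1, 1]])     # polar kernel
G = F
for _ in range(n - 1):
    G = np.kron(G, F)              # G_N = F^{kron n} (bit-reversal omitted)

info_pos = [3, 5, 6, 7]            # illustrative information positions for N = 8

# Enumerate the full codebook of the sub-block: 2^K codewords of length N.
msgs = np.array(list(itertools.product([0, 1], repeat=K)), dtype=np.float32)
u = np.zeros((2 ** K, N), dtype=np.float32)
u[:, info_pos] = msgs
codewords = (u @ G) % 2

model = nn.Sequential(             # h = [32, 16]: small fully connected decoder
    nn.Linear(N, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, K), nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
target = torch.tensor(msgs)

for step in range(2000):
    # BPSK over AWGN at a fixed illustrative noise level; new noise every step.
    x = 1.0 - 2.0 * codewords
    y = x + 0.5 * np.random.randn(*x.shape)
    llr = torch.tensor(2.0 * y / 0.25, dtype=torch.float32)   # channel LLRs
    opt.zero_grad()
    loss = loss_fn(model(llr), target)
    loss.backward()
    opt.step()
```

Because each sub-block sees only 2^K = 16 distinct messages, the whole codebook fits in every training batch, which is exactly what becomes impossible when the full code is decoded end to end.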

Cited by 185 publications (98 citation statements)
References 14 publications
“…We use the notation h = [h_1, h_2, ..., h_L] to represent a network with L hidden layers, where h_l denotes the number of neurons in the fully connected layer l, or the number of kernels in the convolutional layer l. In recent works that apply DNNs to decode ECCs, the training set explodes rapidly as the source word length grows. For example, with a rate 0.5 (n = 1024, k = 512) ECC, one epoch consists of 2^512 possible codewords of length 1024, which results in very large complexity and makes it difficult to train and implement DNN-based decoding in practical systems [28], [29], [31], [32]. However, we note that in FL CS decoding, this problem does not exist since CS source words are typically considerably shorter, possibly only up to a few dozen symbols [1], [6]-[17].…”
Section: Results and Outlook (mentioning)
confidence: 99%
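As a back-of-the-envelope illustration of the growth this citing work describes (my own arithmetic, not taken from the cited paper), the number of distinct codewords an end-to-end NN decoder would have to cover scales as 2^k:

```python
# Codebook size 2^k for a few information-word lengths k; for the quoted
# (n = 1024, k = 512) example this is roughly 1.34e154 codewords.
for k in (4, 16, 64, 512):
    print(f"k = {k:3d}: |codebook| = 2^{k} ~ {float(2 ** k):.3e}")
```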
“…Recently, several works have reported the application of DNNs to the decoding of error control codes (ECCs) [28]-[33]. A DNN enables low-latency decoding since it performs one-shot decoding, where the DNN finds its estimate by passing through each layer only once [28], [31], [32]. In addition, DNNs can efficiently execute in parallel and be implemented with low-precision data types on a graphics processing unit (GPU), field programmable gate array (FPGA), or application specific integrated circuit (ASIC) [28], [31]-[33], [35].…”
mentioning
confidence: 99%
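To make the "one-shot" point concrete, here is a tiny usage sketch that reuses the hypothetical model and llr names from the sub-block example above (not code from any of the cited works): a single forward pass maps channel LLRs directly to hard bit decisions, with no decoding iterations.

```python
import torch

# One forward pass per received block; thresholding the sigmoid outputs
# yields hard information-bit estimates.
with torch.no_grad():
    u_hat = (model(llr) > 0.5).to(torch.int64)
print(u_hat[:4])
```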
“…Among future directions, it is worth considering combining other neural decoders with MIND, such as neural LDPC [29], [30] and Polar [32] decoders. Beyond neural decoder design, MAML can also be applied to channel autoencoder [27] design, which deals with designing adaptive encoders and decoders.…”
Section: Discussion (mentioning)
confidence: 99%
“…Recently, as deep learning (DL) has achieved many revolutionary breakthroughs in the fields of computer vision and natural language processing, many researchers have also been dedicated to applying this powerful technique to enhance decoding algorithms [3]-[7]. However, their decoding performance is still worse than that of the state-of-the-art CRC-assisted successive cancellation list (CA-SCL) decoder [8]-[10].…”
Section: Introduction (mentioning)
confidence: 99%