2017 51st Annual Conference on Information Sciences and Systems (CISS)
DOI: 10.1109/ciss.2017.7926071

On deep learning-based channel decoding

Abstract: We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families and for short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random codes. These results provide some evi…
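As a concrete illustration of the setup the abstract describes, the following is a minimal sketch of a one-shot neural network decoder for a short polar code, assuming PyTorch and NumPy; the code parameters, frozen-bit positions, layer sizes, and training schedule are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch: one-shot NN decoding of a short polar code (illustrative only).
import numpy as np
import torch
import torch.nn as nn

N, K = 16, 8                      # short polar code: blocklength N, K information bits
rng = np.random.default_rng(0)

def polar_transform(u):
    """Apply the n-fold Kronecker power of F = [[1,0],[1,1]] over GF(2) to each row of u."""
    x = u.copy()
    step = 1
    while step < N:
        for i in range(0, N, 2 * step):
            x[:, i:i + step] = (x[:, i:i + step] + x[:, i + step:i + 2 * step]) % 2
        step *= 2
    return x

info_pos = np.arange(N - K, N)    # illustrative information-bit positions, not an optimized frozen set

def make_batch(batch_size, snr_db=1.0):
    b = rng.integers(0, 2, size=(batch_size, K))
    u = np.zeros((batch_size, N), dtype=np.int64)
    u[:, info_pos] = b
    x = 1.0 - 2.0 * polar_transform(u)                      # BPSK mapping: 0 -> +1, 1 -> -1
    sigma = np.sqrt(1.0 / (2 * (K / N) * 10 ** (snr_db / 10)))
    y = x + sigma * rng.standard_normal(x.shape)            # AWGN channel
    return (torch.tensor(y, dtype=torch.float32),
            torch.tensor(b, dtype=torch.float32))

# One-shot decoder: a fully connected network mapping channel values directly
# to the K information bits in a single forward pass.
decoder = nn.Sequential(
    nn.Linear(N, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, K), nn.Sigmoid(),
)
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for _ in range(2000):                                       # train on noisy codewords
    y, b = make_batch(256)
    opt.zero_grad()
    loss_fn(decoder(y), b).backward()
    opt.step()

with torch.no_grad():                                       # decoding = one forward pass
    y, b = make_batch(10000)
    ber = ((decoder(y) > 0.5).float() != b).float().mean()
print(f"BER at 1 dB: {ber.item():.4f}")
```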

Cited by 492 publications (375 citation statements) | References 21 publications
“…Besides, it is important to note that in the offline training phase, the number of stages is fixed to 50 and 70 for the C_1 and C_2 codes, respectively. According to (9c), (11) and (12), the total computational complexity of the ADMM L2 decoder in each iteration is roughly O(N + Γ_a) real multiplications + O(10(N + Γ_a) − 1) real additions + 2 real divisions. Since LADN (LADN-I) ultimately performs as the ADMM L2 decoder loaded with the learned parameters {α, µ} ({α, µ}), its computational complexity is the same as that of the ADMM L2 decoder, which is lower than that of the ML decoder, i.e., O(2^N).…”
Section: Simulation Results (mentioning, confidence: 99%)
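To make the quoted operation counts concrete, here is a small sketch that plugs example values into them; the function name and the values of N and Γ_a are hypothetical and not taken from the cited paper.

```python
# Rough per-iteration operation counts of the ADMM L2 decoder, as quoted above,
# compared with the exponential ML decoding complexity. Values are illustrative.
def admm_l2_iteration_ops(N, Gamma_a):
    return {
        "multiplications": N + Gamma_a,        # O(N + Gamma_a)
        "additions": 10 * (N + Gamma_a) - 1,   # O(10(N + Gamma_a) - 1)
        "divisions": 2,
    }

N, Gamma_a = 96, 48                            # hypothetical example values
print(admm_l2_iteration_ops(N, Gamma_a))
print("ML decoder complexity ~ 2**N =", 2 ** N)
```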
“…Recent advances in deep learning (DL) provide a new direction for tackling tough signal processing tasks in communication systems, such as channel estimation [9], MIMO detection [10] and channel coding [11]–[13]. For channel coding, the work in [11] proposed to use a fully connected neural network and showed that the performance of the network approaches that of the ML decoder for very small block codes. Then, in [12], the authors proposed to employ a recurrent neural network…”
Section: Introduction (mentioning, confidence: 99%)
“…Recently, deep learning (DL) techniques have developed rapidly and have shown superior performance in many aspects of communication systems [17, 18]. In this paper, we propose a novel DL-aided approach to dynamically design the read thresholds for MLC flash memories.…”
Section: Introduction (mentioning, confidence: 99%)
“…Recently, several works have reported the application of DNNs to the decoding of error control codes (ECCs) [28]–[33]. A DNN enables low-latency decoding because it performs one-shot decoding, where the DNN finds its estimate by passing through each layer only once [28], [31], [32]. In addition, DNNs can execute efficiently in parallel and can be implemented with low-precision data types on a graphics processing unit (GPU), field-programmable gate array (FPGA), or application-specific integrated circuit (ASIC) [28], [31]–[33], [35].…”
(mentioning, confidence: 99%)
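As an illustration of the one-shot, parallel, low-precision deployment described above, the following sketch decodes a whole batch of received words with a single forward pass of a fully connected decoder (the same illustrative architecture as in the earlier sketch); the use of half precision on a GPU is an assumption for demonstration, not a detail from the cited works.

```python
# One-shot, batched decoding: every received word is decoded by a single forward
# pass, which maps naturally onto parallel hardware. Half precision is used here
# purely to illustrate low-precision deployment (illustrative, untrained weights).
import torch
import torch.nn as nn

N, K = 16, 8
decoder = nn.Sequential(
    nn.Linear(N, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, K), nn.Sigmoid(),
)

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32
decoder = decoder.to(device=device, dtype=dtype).eval()

received = torch.randn(4096, N, device=device, dtype=dtype)   # 4096 noisy received words
with torch.no_grad():
    bits = (decoder(received) > 0.5).to(torch.uint8)          # one forward pass for the whole batch
print(bits.shape)                                             # torch.Size([4096, 8])
```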