2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
DOI: 10.1109/allerton.2016.7852251

Learning to decode linear codes using deep learning

Abstract: A novel deep learning method for improving the belief propagation algorithm is proposed. The method generalizes the standard belief propagation algorithm by assigning weights to the edges of the Tanner graph; these weights are then trained using deep learning techniques. A well-known property of the belief propagation algorithm is that its performance is independent of the transmitted codeword. A crucial property of our new method is that our decoder preserves this property. Furthermore, this property a…
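The mechanism the abstract describes lends itself to a compact illustration. Below is a minimal NumPy sketch of sum-product belief propagation in the LLR domain with one multiplicative weight per Tanner-graph edge on the check-to-variable messages — one plausible reading of the weighted edges; the (7,4) Hamming parity-check matrix, the weight placement, and the flooding schedule are illustrative assumptions, and all-ones weights recover plain BP (the paper trains such weights with gradient descent).

    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code -- an illustrative choice,
    # not one of the codes evaluated in the paper.
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])

    def weighted_bp_decode(llr, H, weights, n_iters=5):
        """Sum-product BP in the LLR domain with a multiplicative weight per
        Tanner-graph edge on the check-to-variable messages (an assumed
        weight placement; all-ones weights reduce to standard BP)."""
        m, n = H.shape
        v2c = np.where(H, llr[None, :], 0.0)  # initial variable-to-check msgs
        c2v = np.zeros((m, n))
        for _ in range(n_iters):
            # Check-to-variable: 2*atanh(prod of tanh(msg/2) over other edges).
            t = np.where(H, np.tanh(np.clip(v2c, -20, 20) / 2.0), 1.0)
            for c in range(m):
                edges = np.nonzero(H[c])[0]
                for v in edges:
                    prod = np.prod(t[c, edges[edges != v]])
                    c2v[c, v] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
            # Variable-to-check: channel LLR plus weighted extrinsic messages.
            for v in range(n):
                checks = np.nonzero(H[:, v])[0]
                for c in checks:
                    others = checks[checks != c]
                    v2c[c, v] = llr[v] + np.sum(weights[others, v] * c2v[others, v])
        # Marginalization with weighted messages, then hard decision.
        out = llr + (weights * c2v * H).sum(axis=0)
        return (out < 0).astype(int)

    # All-zeros codeword, BPSK (bit 0 -> +1), AWGN with sigma = 0.8.
    rng = np.random.default_rng(0)
    received = 1.0 + 0.8 * rng.standard_normal(7)
    llr = 2.0 * received / 0.8**2            # channel LLRs
    print(weighted_bp_decode(llr, H, weights=np.ones_like(H, dtype=float)))

Because the weights only scale messages, a codeword-independent decoder (a key property the abstract highlights) is retained, and the same forward pass can be unrolled into a feed-forward network for training.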


Citations: cited by 450 publications (484 citation statements)
References: 19 publications
“…Among future directions, it is worth considering combining other neural decoders with MIND, such as neural LDPC [29], [30] and Polar [32] decoders. Beyond neural decoder design, MAML can also be applied to Channel Autoencoder [27] design, which deals with designing adaptive encoders and decoders.…”
Section: Discussion
confidence: 99%
“…Designing neural decoders for several classes of codes, such as LDPC codes, Polar codes, and Turbo codes, with versatile deep neural networks has seen growing interest within the channel coding community. Imitating the Belief Propagation (BP) algorithm via learnable neural networks shows promising performance for High-Density Parity-Check (HDPC) codes and LDPC codes [29], [30] and Polar codes [31], [32]. Near-optimal performance for convolutional codes and Turbo codes under the AWGN channel is achieved via Recurrent Neural Networks (RNNs) for arbitrary block lengths [33], which also show robust and adaptive performance under non-AWGN setups.…”
Section: B. Prior Art: Neural Decoding
confidence: 99%
“…The image or attribute intermediate hash codes generated by ADCMH can be considered corrupted codewords within a certain distance d of a correct codeword of an ECC. Recently, some excellent neural-network-based ECC decoders have been proposed [26], [27], [28], which achieve close to maximum-likelihood (ML) performance for an ECC. These methods can be leveraged to generate high-quality and efficient hash codes.…”
Section: Stage 1(b): Training Neural Error Correcting Decoder
confidence: 99%
“…In this process, the attribute hash code and image hash code of the same subject are forced to map to the same codeword, thereby reducing the distance between the corresponding hash codes. This brings more relevant facial images from the gallery closer to the attribute query, which leads to improved retrieval performance. Recent work has shown that the same kinds of neural network architectures used for classification can also be used to decode ECC codes [26], [27], [28]. Motivated by this, we have used a neural error-correction decoder (NECD) [26] as an ECC decoder to improve cross-modal retrieval efficiency.…”
confidence: 99%
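To make the retrieval mechanism in this excerpt concrete, here is a minimal sketch under stated assumptions: a brute-force nearest-codeword search over a (7,4) Hamming codebook stands in for the neural error-correcting decoder, and the two hash codes are hypothetical values differing in one bit. Both decode to the same codeword, which is the matching criterion the passage describes.

    import numpy as np
    from itertools import product

    # Generator matrix of the (7,4) Hamming code -- a stand-in ECC; the cited
    # work trains a neural decoder (NECD) rather than searching exhaustively.
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    CODEBOOK = np.array([(np.array(msg) @ G) % 2
                         for msg in product([0, 1], repeat=4)])

    def nearest_codeword(bits):
        """Map a (possibly corrupted) hash code to its closest codeword."""
        return CODEBOOK[(CODEBOOK != bits).sum(axis=1).argmin()]

    # Hypothetical image/attribute hash codes of the same subject: both lie
    # within the code's correction radius of a single codeword.
    image_hash     = np.array([1, 0, 1, 1, 0, 1, 0])  # a valid codeword
    attribute_hash = np.array([1, 0, 1, 1, 0, 1, 1])  # one bit corrupted
    print(np.array_equal(nearest_codeword(image_hash),
                         nearest_codeword(attribute_hash)))  # True: same subject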