2020 IEEE International Symposium on Information Theory (ISIT)
DOI: 10.1109/isit44484.2020.9174097

Pruning Neural Belief Propagation Decoders

Abstract: We consider near maximum-likelihood (ML) decoding of short linear block codes. In particular, we propose a novel decoding approach based on neural belief propagation (NBP) decoding recently introduced by Nachmani et al. in which we allow a different parity-check matrix in each iteration of the algorithm. The key idea is to consider NBP decoding over an overcomplete parity-check matrix and use the weights of NBP as a measure of the importance of the check nodes (CNs) to decoding. The unimportant CNs are then pruned…
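To make the pruning idea concrete, here is a minimal sketch, not the paper's implementation: the matrix `H`, the stand-in `edge_weights`, and the mean-absolute-weight score are illustrative assumptions; in the paper the weights come from training the NBP decoder, and a different pruned matrix can be used in each iteration.

```python
# Illustrative sketch: rank check nodes (CNs) of an overcomplete
# parity-check matrix by their learned NBP edge weights, then keep
# only the most important CNs for the next decoding iteration.
import numpy as np

rng = np.random.default_rng(0)

# Overcomplete parity-check matrix: more CNs (rows) than a minimal H needs.
H = (rng.random((12, 16)) < 0.3).astype(int)

# Stand-in for learned per-edge NBP weights; in practice these are
# obtained by training the unrolled BP decoder.
edge_weights = rng.random(H.shape) * H

# Importance score per CN: mean absolute weight over its incident edges.
deg = np.maximum(H.sum(axis=1), 1)
cn_scores = np.abs(edge_weights).sum(axis=1) / deg

# Prune the least important CNs; the surviving rows define the
# parity-check matrix used in the next iteration.
n_keep = 8
keep = np.sort(np.argsort(cn_scores)[-n_keep:])
H_pruned = H[keep]
print(H_pruned.shape)  # (8, 16)
```

Since a different set of CNs can survive pruning at each iteration, the decoder effectively runs BP over a sequence of parity-check matrices for the same code.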

Cited by 24 publications (33 citation statements)
References 17 publications
“…The error probability of the decoders proposed in [29] is within 0.27 dB and 1.5 dB of the ML performance, while significantly reducing the decoding complexity. In [30] the authors extend the work in [29] to further improve the error probability of neural BP decoders.…”
Section: Decoding Linear Block Codes With Machine Learning
mentioning
confidence: 89%
“…It was shown in [28] that the trainable parameters can be optimized using supervised learning techniques to greatly improve the error-correction performance of the code. In [29], the authors proposed a technique to efficiently prune the unimportant edges of the unrolled factor graph of a linear block code at each decoding iteration. The error probability of the decoders proposed in [29] is within 0.27 dB and 1.5 dB of the ML performance, while significantly reducing the decoding complexity.…”
Section: Decoding Linear Block Codes With Machine Learning
mentioning
confidence: 99%
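To show where such trainable parameters sit in the decoder, the following is a hedged sketch of one weighted min-sum BP iteration with one weight per Tanner-graph edge. The function name, the weight array `w`, and the min-sum variant are illustrative assumptions; the decoders in [28] and [29] define their own parameterizations and update rules.

```python
# Sketch of one weighted ("neural") min-sum BP iteration; `w` plays the
# role of the trainable per-edge weights that supervised learning tunes.
import numpy as np

def weighted_minsum_iteration(H, llr_ch, msg_v2c, w):
    """H: (m, n) binary parity-check matrix; llr_ch: (n,) channel LLRs;
    msg_v2c: (m, n) variable-to-check messages (zero off the edges of H);
    w: (m, n) per-edge weights. Assumes every CN has degree >= 2.
    Returns updated variable-to-check messages and output LLRs."""
    m, n = H.shape
    msg_c2v = np.zeros_like(msg_v2c, dtype=float)
    for c in range(m):
        idx = np.flatnonzero(H[c])
        for v in idx:
            others = idx[idx != v]
            sign = np.prod(np.sign(msg_v2c[c, others]))
            mag = np.min(np.abs(msg_v2c[c, others]))
            # The learned weight scales the check-to-variable message.
            msg_c2v[c, v] = w[c, v] * sign * mag
    # Output LLR: channel LLR plus all incoming CN messages.
    llr_out = llr_ch + msg_c2v.sum(axis=0)
    # Variable-to-check: exclude the message from the target CN itself.
    new_v2c = (llr_out[None, :] - msg_c2v) * H
    return new_v2c, llr_out
```

Training would unroll several such iterations into a feed-forward network and backpropagate a loss (e.g., binary cross-entropy against the transmitted codeword) through the per-edge weights.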
“…of pruning-based NBP D3 in [10]. Adding a single learned decimation (NBP-D(10, 4, 1)) improves the performance by 0.3 dB.…”
mentioning
confidence: 99%
“…Adding a single learned decimation (NBP-D(10, 4, 1)) improves the performance by 0.3 dB. Allowing four learned decimation steps (NBP-D(10, 4, 4)…”
mentioning
confidence: 99%
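The NBP-D decoders quoted above combine NBP with learned decimation. As a rough illustration only, the sketch below implements a classical (non-learned) guided-decimation step that fixes the currently most reliable bit by saturating its channel LLR; in NBP-D the choice is instead made by a trained component, so every name and threshold here is hypothetical.

```python
# Hedged sketch of a guided-decimation step (illustrative; NBP-D uses a
# learned rule to pick which variable node to fix).
import numpy as np

def decimate(llr_ch, llr_out, fixed, big=1e6):
    """Fix the most reliable not-yet-fixed bit to its hard decision by
    saturating its channel LLR, so later BP iterations treat it as known.
    llr_ch, llr_out: (n,) channel and current output LLRs;
    fixed: (n,) boolean mask of already-decimated bits."""
    reliability = np.where(fixed, -np.inf, np.abs(llr_out))
    v = int(np.argmax(reliability))
    llr_ch = llr_ch.copy()
    # Sign of the current LLR gives the hard decision for bit v.
    llr_ch[v] = big * np.sign(llr_out[v])
    fixed = fixed.copy()
    fixed[v] = True
    return llr_ch, fixed
```

In the NBP-D(I, p, d) notation quoted above, the third argument is the number of such decimation steps interleaved with the BP iterations.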