It is well known that belief-propagation variants for linear codes can be readily unrolled into neural networks by assigning learnable weights to the message-passing edges. In contrast to the conventional top-down training process, in which distillation takes the form of pruning or weight sharing when a smaller model is required, a bottom-up design methodology is proposed that augments the performance of the raw min-sum decoder for LDPC codes by incrementally introducing a few parameters at specific positions of the corresponding neural network. A novel postprocessing method, devised to further improve performance, copes effectively with decoding failures. For training, a simplified scheme for generating training data is presented that exploits an approximation to the targeted mixture density, and the trained parameters are observed to converge after sufficiently many iterations, indicating that the design generalizes to an arbitrary designated number of iterations. Finally, extensive simulations of three codes over AWGN and Rayleigh fading channels demonstrate that the design achieves a good tradeoff between low complexity and competitive decoding performance.
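To make the unrolling concrete, the following is a minimal sketch of a min-sum decoder unrolled as a neural network with a small number of learnable parameters, in the spirit of the bottom-up approach described above. It assumes PyTorch; the class name `NeuralOffsetMinSum`, the toy parity-check matrix `H`, the per-iteration learnable offsets `beta`, and the 5-iteration depth are all illustrative assumptions, not the paper's exact parameterization or placement of parameters.

```python
import torch

# Toy parity-check matrix (an assumption for illustration; any LDPC H works).
H = torch.tensor([[1., 1., 0., 1., 1., 0., 0.],
                  [1., 0., 1., 1., 0., 1., 0.],
                  [0., 1., 1., 1., 0., 0., 1.]])

class NeuralOffsetMinSum(torch.nn.Module):
    """Min-sum decoding unrolled over n_iters iterations, with one learnable
    offset per iteration -- a sketch of introducing only 'a few parameters'
    rather than one weight per message-passing edge."""
    def __init__(self, H, n_iters=5):
        super().__init__()
        self.register_buffer("H", H)
        self.n_iters = n_iters
        self.beta = torch.nn.Parameter(torch.zeros(n_iters))  # per-iteration offsets

    def forward(self, llr):                       # llr: (batch, n) channel LLRs
        v2c = llr.unsqueeze(1) * self.H           # (batch, m, n) variable->check msgs
        c2v = torch.zeros_like(v2c)
        for t in range(self.n_iters):
            # Check-node update: extrinsic sign product and offset-corrected
            # minimum magnitude, both excluding the target edge.
            sgn = torch.where(v2c < 0, -1.0, 1.0) * self.H + (1 - self.H)
            ext_sgn = sgn.prod(dim=2, keepdim=True) * sgn   # product over v' != v
            mag = v2c.abs() + (1 - self.H) * 1e9            # mask off-support entries
            min1, idx = mag.min(dim=2, keepdim=True)
            min2 = mag.scatter(2, idx, 1e9).min(dim=2, keepdim=True).values
            ext_min = torch.where(mag == min1, min2, min1)  # min over v' != v
            c2v = ext_sgn * torch.clamp(ext_min - self.beta[t], min=0) * self.H
            # Variable-node update: channel LLR plus extrinsic check messages.
            total = llr + c2v.sum(dim=1)
            v2c = (total.unsqueeze(1) - c2v) * self.H
        return llr + c2v.sum(dim=1)               # a-posteriori LLRs; bit = (out < 0)
```

Training such an unrolled decoder typically minimizes a bitwise cross-entropy between the sigmoid of the output LLRs and the transmitted codeword. Because only a few shared offsets are learned rather than per-edge weights, the trained parameters can plausibly be reused at a different unrolling depth, which is the kind of generality across iteration counts the abstract alludes to.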