2020 IEEE International Symposium on Information Theory (ISIT)
DOI: 10.1109/isit44484.2020.9173971

Data-Driven Ensembles for Deep and Hard-Decision Hybrid Decoding

Abstract: Ensemble models are widely used to solve complex tasks by their decomposition into multiple simpler tasks, each one solved locally by a single member of the ensemble. Decoding of error-correction codes is a hard problem due to the curse of dimensionality, leading one to consider ensembles-of-decoders as a possible solution. Nonetheless, one must take complexity into account, especially in decoding. We suggest a low-complexity scheme where a single member participates in the decoding of each word. First, the di…

Cited by 17 publications (9 citation statements)
References 18 publications
“…Nonetheless, SICNet can also be combined with alternative techniques to allow a DNN-aided receiver to track time-varying channel conditions at a modest overhead. These include the application of meta-learning for optimizing the hyperparameters of the training algorithm [29]; the pre-training of multiple receivers as a deep ensemble [30]; and the usage of soft symbol-level outputs, rather than FEC decoding, as a measure of confidence for producing labels from data, as proposed in [31], [32]. We leave the study of the combination of SICNet with these methods to facilitate online training for future investigations.…”
Section: B. FEC-Aided Online Training
confidence: 99%
“…Nonetheless, ensembles also encompass computational complexity which is linear in the number of base learners, being unrealistic for practical considerations. To reduce complexity, our previous work [20] suggests to employ a low complexity gating decoder. This decoder allows one to uniquely map each input word to a single most fitting decoder, keeping the overall computation complexity low.…”
Section: A. Data-Driven Approach to TBCC Decoding
confidence: 99%
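The gating idea quoted above — route each received word to a single expert so that per-word complexity does not grow with the ensemble size — can be sketched as follows. The expert decoders and the gating rule here are toy placeholders (the paper's experts are learned decoders and its gate is a low-complexity decoder, not a Hamming-weight rule); only the routing structure is illustrated.

```python
import numpy as np

def make_expert(pattern):
    """Hypothetical hard-decision 'expert': corrects a fixed error
    pattern by XOR, standing in for a trained decoder."""
    def decode(word):
        return np.bitwise_xor(word, pattern)
    return decode

def gate(word, experts):
    """Map an input word to the index of exactly one expert.
    Toy rule (Hamming weight modulo ensemble size); the paper's
    gating decoder is a learned low-complexity classifier."""
    return int(word.sum()) % len(experts)

def ensemble_decode(word, experts):
    # Only the selected expert runs for this word, so decoding cost
    # is independent of the number of ensemble members.
    idx = gate(word, experts)
    return experts[idx](word)

experts = [make_expert(np.array(p, dtype=np.uint8))
           for p in ([0, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0])]
word = np.array([1, 0, 1, 1], dtype=np.uint8)
# Weight 3 routes to expert 0 (all-zeros pattern), so the word is unchanged.
print(ensemble_decode(word, experts))  # → [1 0 1 1]
```

The key design point mirrored here is that the gate replaces the usual "run all members and combine" ensemble rule with a hard assignment, trading some diversity for a constant-factor decoding cost.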
“…One recent innovation, referred to as the ensemble of decoders [20], combined the benefits of model-based approach with the list decoding scheme. This ensemble is composed of learnable decoders, each one called an expert.…”
Section: Introduction
confidence: 99%
“…In (An et al., 2020), the Reed-Solomon neural decoder is introduced, which estimates the error of the received codewords and adjusts itself to do more accurate decoding. Neural Bose-Chaudhuri-Hocquenghem (BCH) code decoding is introduced in (Kamassury & Silva, 2020; Nachmani & Wolf, 2019; Raviv et al., 2020).…”
Section: Error-Correcting Codes with Deep Learning
confidence: 99%