[Proceedings] GLOBECOM '90: IEEE Global Telecommunications Conference and Exhibition
DOI: 10.1109/glocom.1990.116658
Neural network error correcting decoders for block and convolutional codes

Cited by 21 publications (12 citation statements)
References 3 publications
“…First, channel decoding algorithms focus on bit-level processing; therefore, bits, or the log-likelihood ratios (LLRs) of codewords, are conveniently treated as the inputs and expected outputs of NNs. In most previous studies, the input and output nodes directly represent the bits of codewords [38], or use a one-hot representation (i.e., each node represents one of all possible codewords) [39] such that the corresponding vector has exactly one element equal to 1 and all other elements equal to 0. Second, unlike the difficulty of obtaining datasets in other scenarios, man-made codewords can generate sufficient training samples and provide labeled outputs simultaneously.…”
Section: B. Channel Decoding (mentioning)
confidence: 99%
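The two target representations this excerpt describes can be sketched concretely. The following is an illustrative example, not taken from the cited paper: it assumes a (7,4) Hamming code, and all variable names are ours.

```python
import numpy as np

# Generator matrix of a (7,4) Hamming code (one common systematic form).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])

# Man-made codewords: enumerating all 2^4 messages yields a complete,
# labeled training set "for free".
msgs = np.array([[(m >> i) & 1 for i in range(4)] for m in range(16)])
codewords = msgs @ G % 2                 # shape (16, 7)

# Representation 1: each output node directly carries one codeword bit.
bit_targets = codewords                  # shape (16, 7)

# Representation 2: one-hot over all possible codewords; each target
# vector has exactly one element equal to 1 and the rest equal to 0.
onehot_targets = np.eye(16)              # shape (16, 16)

# NN inputs: noisy channel observations (here BPSK plus Gaussian noise);
# LLRs could be fed instead of raw samples.
rng = np.random.default_rng(0)
noisy_inputs = (1 - 2 * codewords) + 0.5 * rng.standard_normal((16, 7))
```

The bit-level representation keeps the network small (7 outputs here), while the one-hot representation grows exponentially with the message length, which is why it is practical only for short block codes.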
“…With the advent of training techniques such as layer-by-layer unsupervised pre-training followed by gradient-descent fine-tuning and back-propagation, interest in using ANNs for channel coding was renewed. Different ideas on the use of ANNs for decoding emerged in the 1990s, with works such as [16]-[18] decoding block and Hamming codes. Subsequently, ANNs were used for decoding convolutional codes in [19], [20].…”
Section: Introduction (mentioning)
confidence: 99%
“…The Viterbi algorithm (VA) is an efficient method for decoding convolutional codes using the maximum-likelihood sequence estimation technique. However, the complexity, delay, and memory requirements associated with the VA are relatively high, and hence sub-optimal decoding of convolutional codes using artificial neural networks (ANNs) has been studied intensively. The ANN has been suggested as an alternative to the VA because of its low complexity, parallel processing, adaptability, and fault-tolerance capabilities.…”
Section: Introduction (mentioning)
confidence: 99%
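For contrast with the ANN approach, the Viterbi algorithm this excerpt refers to can be sketched in a few lines. This is a generic hard-decision textbook version for the standard rate-1/2, constraint-length-3 code (generators 7 and 5 octal); the specific code and all names are illustrative assumptions, not taken from the cited works.

```python
G1, G2 = 0b111, 0b101  # generator polynomials (7, 5 in octal)

def conv_encode(bits):
    """Rate-1/2 convolutional encoder; state = two most recent inputs."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & G1).count("1") % 2, bin(reg & G2).count("1") % 2]
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi: Hamming branch metrics, survivor paths."""
    INF = float("inf")
    metrics = [0, INF, INF, INF]          # path metric per 2-bit state
    paths = [[], [], [], []]              # survivor input sequences
    for t in range(n_bits):
        r = received[2 * t:2 * t + 2]
        new_metrics = [INF] * 4
        new_paths = [[]] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):              # try both input hypotheses
                reg = (b << 2) | s
                o0 = bin(reg & G1).count("1") % 2
                o1 = bin(reg & G2).count("1") % 2
                ns = reg >> 1             # next state
                m = metrics[s] + (o0 != r[0]) + (o1 != r[1])
                if m < new_metrics[ns]:   # keep the best path into ns
                    new_metrics[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[metrics.index(min(metrics))]
```

With a noiseless channel the decoder recovers the input exactly, and since this code has free distance 5 it also corrects an isolated channel bit error. The per-step loop over all states and input hypotheses illustrates the complexity the excerpt mentions: cost grows exponentially with the constraint length.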
“…The ANN has been suggested as an alternative to the VA because of its low complexity, parallel processing, adaptability, and fault-tolerance capabilities. Although the performance of the ANN as an error-control decoder of block codes is comparable to that of the optimal decoder, its performance as a convolutional decoder is far from satisfactory. A number of structures and approaches have been proposed to improve the performance of the ANN as a convolutional decoder.…”
Section: Introduction (mentioning)
confidence: 99%