2006
DOI: 10.1109/glocom.2006.558
SPC05-3: On the Parallelism of Convolutional Turbo Decoding and Interleaving Interference

Abstract: In forward error correction, convolutional turbo codes were introduced to bring error-correction capability close to the Shannon bound. Decoding these codes, however, is an iterative process with a high computation rate and latency. Thus, to achieve the high throughput and low latency crucial in emerging digital communication applications, parallel implementations become mandatory. This paper explores and analyses existing parallelism techniques in convolutional turbo decoding with the …


Cited by 5 publications (3 citation statements); references 15 publications.
“…As presented in [15], implementing an efficient turbo decoder requires a good exploitation of the parallelism. In turbo decoding with the BCJR algorithm, parallelism techniques can be classified at three levels: (1) BCJR metric level parallelism, (2) BCJR SISO decoder level parallelism, and (3) turbo-decoder level parallelism.…”
Section: Convolutional Turbo Decoding
confidence: 99%
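The second of these levels, BCJR SISO decoder level parallelism, is commonly realized by splitting a frame into subblocks decoded concurrently. The toy Python sketch below illustrates only that subblock-splitting structure; `decode_subblock` is a hypothetical placeholder, not a real BCJR forward-backward pass, and none of the names come from the paper.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_subblock(llrs):
    # Placeholder for a forward-backward (BCJR) pass over one subblock;
    # here the soft values are simply passed through unchanged.
    return list(llrs)

def parallel_siso_decode(frame_llrs, n_subblocks):
    # Split the frame into contiguous subblocks and decode them concurrently,
    # mimicking subblock-level (SISO decoder level) parallelism.
    size = (len(frame_llrs) + n_subblocks - 1) // n_subblocks
    subblocks = [frame_llrs[i:i + size] for i in range(0, len(frame_llrs), size)]
    with ThreadPoolExecutor(max_workers=n_subblocks) as pool:
        results = pool.map(decode_subblock, subblocks)
    out = []
    for r in results:
        out.extend(r)  # reassemble the frame in subblock order
    return out
```

In a real decoder the subblock boundaries raise the state-metric initialization issues the citing papers discuss; this sketch sidesteps them entirely.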
“…State-of-the-art research is mainly focused on parallel processing of frame subblocks and on the associated parallel processing issues, such as computation complexity [3,5], memory saving [3,5,6], initializations [5,7], or on-chip communication requirements [8,9]. Recently, a new parallelism technique named shuffled decoding was introduced to process the component decoders in parallel [10,11]. However, interactions between these diverse parallelism techniques and different granularity levels are rarely discussed.…”
Section: Introduction
confidence: 99%
“…This is due to the decoding process, since no extrinsic information is available at the first iteration. It is possible to implement shuffled iterative decoding [21,22] in order to achieve the same performance as for the classical turbo encoder and avoid adding iterations in practice. In shuffled decoding, the decoder updates the extrinsic information as soon as possible, without waiting for all the copies of a given data item to be processed.…”
Section: Another Representation Of Turbo Codes
confidence: 99%
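The scheduling difference the statement describes can be made concrete with a minimal Python sketch: a serial schedule runs component decoder 1 over the whole frame before decoder 2 starts, while a shuffled schedule interleaves them symbol by symbol so each extrinsic update is visible to the other decoder immediately. This is purely an illustrative event log under assumed names, not a decoding implementation.

```python
def serial_schedule(n_symbols):
    # Classical schedule: decoder 1 processes the entire frame,
    # then decoder 2 processes it using the completed extrinsic values.
    events = [("dec1", k) for k in range(n_symbols)]
    events += [("dec2", k) for k in range(n_symbols)]
    return events

def shuffled_schedule(n_symbols):
    # Shuffled schedule: both component decoders advance in lockstep;
    # each extrinsic update is exchanged as soon as it is produced.
    events = []
    for k in range(n_symbols):
        events.append(("dec1", k))
        events.append(("dec2", k))
    return events
```

Both schedules perform the same number of symbol updates per iteration; the shuffled one halves the per-iteration latency at the cost of decoder 2 sometimes reading extrinsic values that decoder 1 has not yet refreshed for later symbols.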