Abstract-Hybrid Automatic Repeat reQuest (HARQ) has become an essential error control technique in communication networks, which relies on a combination of arbitrary error correction codes and retransmissions. When combining turbo codes with HARQ, the associated complexity becomes a critical issue, since iterative decoding is conventionally activated immediately after each transmission, even though the iterative decoder might fail to deliver an error-free codeword even after a high number of iterations. In this scenario, precious battery power would be wasted. In order to reduce the associated complexity, we present design examples based on Multiple-Component Turbo Codes (MCTCs) and demonstrate that they are capable of achieving an excellent performance based on the lowest-possible-memory, octally represented generator polynomial (2,3)_o. In addition to using low-complexity generator polynomials, we detail two further techniques conceived for reducing the complexity. Firstly, an Early Stopping (ES) strategy is invoked for curtailing iterative decoding when its Mutual Information (MI) improvements become smaller than a given threshold. Secondly, a novel Deferred Iteration (DI) strategy is advocated for delaying iterative decoding until the receiver confidently estimates that it has received sufficient information for successful decoding. Our simulation results demonstrate that the MCTC-aided HARQ schemes are capable of significantly reducing the complexity of the appropriately selected benchmarkers, which is achieved without degrading the Packet Loss Ratio (PLR) or throughput.

Forward Error Correction (FEC) codes protect the information bits by transmitting additional parity bits; Figure 1.1 of [1] outlines the brief history of FEC codes. The ratio of the number of information bits to the total number of information and parity bits defines the normalized throughput, or coding rate. Shannon's channel capacity determines the upper bound of the coding rate that any FEC code may achieve at a given Signal-to-Noise Ratio (SNR). Since the transmission of these parity bits requires an increased bandwidth, the maximum coding rate at which an FEC code can still recover the information bits becomes a useful criterion for quantifying the capability of FEC codes. Their decoding complexity is also a critical factor in the evaluation of FEC codes. Researchers have studied the tradeoff between these two aspects when choosing a specific FEC code for a communication system.
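As a simple illustration of this rate bound, the sketch below (not part of the original paper; the packet sizes and SNR are arbitrary illustrative values) computes the coding rate of a codeword and compares it against the Shannon capacity of a complex-valued AWGN channel, C = log2(1 + SNR) bits per symbol:

```python
import math

def coding_rate(info_bits: int, parity_bits: int) -> float:
    """Normalized throughput: information bits over total transmitted bits."""
    return info_bits / (info_bits + parity_bits)

def awgn_capacity(snr_db: float) -> float:
    """Shannon capacity of a complex AWGN channel in bits per symbol."""
    snr = 10.0 ** (snr_db / 10.0)
    return math.log2(1.0 + snr)

# Illustrative numbers: a rate-1/3 codeword (1000 info bits, 2000 parity bits).
k, p = 1000, 2000
rate = coding_rate(k, p)              # 0.333...
capacity = awgn_capacity(snr_db=0.0)  # 1 bit/symbol at 0 dB

# With BPSK (one coded bit per symbol) the coding rate equals the spectral
# efficiency, so reliable decoding requires rate < capacity.
print(f"rate = {rate:.3f}, capacity = {capacity:.3f} bit/symbol:",
      "feasible" if rate < capacity else "infeasible")
```

At 0 dB the capacity is 1 bit/symbol, so a rate-1/3 code transmitted with BPSK operates well below the bound, but only at the cost of the tripled bandwidth noted above.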
I. TURBO CODING COMPLEXITY
A. Turbo Code Complexity

As one of the most powerful codes in the FEC family, turbo codes [4] have shown a capacity-approaching capability by combining two parallel concatenated Recursive Systematic Convolutional (RSC) codes at the transmitter. At the receiver, soft information is iteratively exchanged between the two so-called Bahl, Cocke, Jelinek and Raviv (BCJR) decoders [5]. The BCJR decoder is also often referred to as the Maximum A posteriori (MAP) algorithm, which estimates a decoded bit by selecting the specific transition path having the maximum a posteriori probability.
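To make the MAP bit decision concrete, the sketch below performs one probability-domain BCJR pass over a two-state trellis. It is a minimal illustration under stated assumptions: the trellis is that of the common memory-1 RSC G(D) = [1, 1/(1+D)], whose parity is a running XOR of the inputs (the exact trellis implied by the paper's (2,3)_o polynomials may use different taps), LLRs follow the convention L = ln(P(bit=0)/P(bit=1)), and all function names are hypothetical:

```python
import numpy as np

def branch_weight(bit, llr):
    # Unnormalized probability of `bit` under the convention L = ln(P(0)/P(1)).
    return np.exp(0.5 * llr * (1 - 2 * bit))

def bcjr_app_llrs(llr_sys, llr_par, llr_apr):
    """One BCJR pass: a posteriori LLRs of the information bits."""
    n = len(llr_sys)
    # Branch metrics gamma[k, state, input]; for this trellis the parity bit
    # and the next state both equal state XOR input.
    gamma = np.zeros((n, 2, 2))
    for k in range(n):
        for s in (0, 1):
            for u in (0, 1):
                p = s ^ u
                gamma[k, s, u] = (branch_weight(u, llr_apr[k])
                                  * branch_weight(u, llr_sys[k])
                                  * branch_weight(p, llr_par[k]))
    # Forward recursion, normalized at each step to avoid numerical underflow.
    alpha = np.zeros((n + 1, 2))
    alpha[0, 0] = 1.0                      # encoder starts in the all-zero state
    for k in range(n):
        for s in (0, 1):
            for u in (0, 1):
                alpha[k + 1, s ^ u] += alpha[k, s] * gamma[k, s, u]
        alpha[k + 1] /= alpha[k + 1].sum()
    # Backward recursion (trellis left unterminated, hence the uniform start).
    beta = np.full((n + 1, 2), 0.5)
    for k in range(n - 1, -1, -1):
        for s in (0, 1):
            beta[k, s] = sum(gamma[k, s, u] * beta[k + 1, s ^ u] for u in (0, 1))
        beta[k] /= beta[k].sum()
    # A posteriori LLR of each bit; the MAP decision is bit = 0 iff app[k] > 0.
    app = np.empty(n)
    for k in range(n):
        num = sum(alpha[k, s] * gamma[k, s, 0] * beta[k + 1, s] for s in (0, 1))
        den = sum(alpha[k, s] * gamma[k, s, 1] * beta[k + 1, s ^ 1] for s in (0, 1))
        app[k] = np.log(num / den)
    return app
```

In an iterative turbo decoder, the extrinsic component of these a posteriori LLRs (app minus the a priori and systematic inputs) would be interleaved and passed to the other BCJR decoder as its a priori input, which is the soft-information exchange described above.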