This paper calculates new bounds on the size of the performance gap between random codes and the best possible codes. The first result shows that, for large block sizes, the ratio of the error probability of a random code to the sphere-packing lower bound on the error probability of every code on the binary symmetric channel (BSC) is small for a wide range of useful crossover probabilities. Thus, even far from capacity, random codes have nearly the same error performance as the best possible long codes. The paper also demonstrates that a small reduction k − k′ in the number of information bits conveyed by a codeword will make the error performance of an (n, k′) random code better than the sphere-packing lower bound for an (n, k) code, as long as the channel crossover probability is somewhat greater than a critical probability. For example, the sphere-packing lower bound for a long (n, k), rate 1/2, code will exceed the error probability of an (n, k′) random code if k − k′ > 10 and the crossover probability is between 0.035 and 0.11 = H^{-1}(1/2). Analogous results are presented for the binary erasure channel (BEC) and the additive white Gaussian noise (AWGN) channel. The paper also presents substantial numerical evaluation of the performance of random codes and existing standard lower bounds for the BEC, BSC, and the AWGN channel. These last results provide a useful standard against which to measure many popular codes, including turbo codes; e.g., there exist turbo codes that perform within 0.6 dB of the bounds over a wide range of block lengths.

Index Terms: Error-probability bounds, random codes, Singleton bound, sphere-packing.
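The critical probability 0.11 = H^{-1}(1/2) quoted above is the inverse of the binary entropy function evaluated at the code rate. As a quick numerical check (a minimal sketch, not taken from the paper), it can be computed by bisection:

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def inverse_binary_entropy(h: float) -> float:
    """Find p in [0, 1/2] with H(p) = h by bisection (H is increasing there)."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if binary_entropy(mid) < h:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# The critical crossover probability for a rate-1/2 code:
print(round(inverse_binary_entropy(0.5), 3))  # 0.11
```

This confirms the upper end of the crossover-probability interval cited for the rate-1/2 example.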
This paper analyzes the performance of concatenated coding systems operating over the binary symmetric channel (BSC) by examining the loss of capacity resulting from each of the processing steps. The techniques described in this paper allow the separate evaluation of codes and decoders and thus the identification of where loss of capacity occurs. They are, moreover, very useful for the overall design of a communications system, e.g., for evaluating the benefits of inner decoders that produce side information. The first two sections of this paper provide a general technique (based on the coset weight distribution of a binary linear code) for calculating the composite capacity of the code and a BSC in isolation. The later sections examine the composite capacities of binary linear codes, the BSC, and various decoders. The composite capacities of the (8, 4) extended Hamming, (24, 12) extended Golay, and (48, 24) quadratic residue codes appear as examples throughout the paper. The calculations in these examples show that, in a concatenated coding system, having an inner decoder provide more information than the maximum-likelihood (ML) estimate to an outer decoder is not a computationally efficient technique, unless generalized minimum-distance decoding of an outer code is extremely easy. Specifically, for the (8, 4) extended Hamming and (24, 12) extended Golay inner codes, the gains from using any inner decoder providing side information, instead of a strictly ML inner decoder, are shown to be no greater than 0.77 and 0.34 dB, respectively, for a BSC crossover probability of 0.1 or less. However, if computationally efficient generalized minimum-distance decoders for powerful outer codes, e.g., Reed-Solomon codes, become available, they will allow the use of simple inner codes, since both simple and complex inner codes have very similar capacity losses.
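The baseline against which these capacity losses are measured is the BSC capacity itself, C = 1 − H(p) bits per channel use (a standard formula, included here only as background; the paper's composite-capacity calculations additionally require the code's coset weight distribution and are not reproduced). A minimal sketch:

```python
import math

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel with crossover probability p:
    C = 1 - H(p) bits per channel use."""
    if p in (0.0, 1.0):
        return 1.0
    return 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

# At the crossover probability p = 0.1 cited in the abstract:
print(round(bsc_capacity(0.1), 3))  # 0.531
```

Any composite capacity of a code, the BSC, and a decoder is bounded above by this raw channel capacity; the gap between the two is the capacity loss the paper quantifies.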
This correspondence analyzes the performance of concatenated coding systems and modulation schemes operating over the additive white Gaussian noise (AWGN) channel by examining the loss of capacity resulting from each of the processing steps. The techniques described in this correspondence allow the separate evaluation of codes and decoders and thus the identification of where loss of capacity occurs. Knowledge of this capacity loss is very useful for the overall design of a communications system, e.g., for evaluating the benefits of inner decoders that produce information beyond the maximum-likelihood (ML) estimate. The first two sections of this correspondence provide a general technique for calculating the composite capacity of an orthogonal or a bi-orthogonal code and the AWGN channel in isolation. The later sections examine the composite capacities of an orthogonal or a bi-orthogonal code, the AWGN channel, and various inner decoders, including the decoder estimating the bit-by-bit probability of a one, as is used in turbo codes. The calculations in these examples show that the ML decoder introduces a large loss in capacity. Much of this capacity loss can be regained by using only slightly more complex inner decoders: e.g., a detector for M-ary frequency-shift keying (MFSK) that puts out the two most likely frequencies, together with the probability that the ML estimate is correct, produces significantly less degradation than one that puts out only the most likely frequency.
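For reference, the unconstrained capacity of the underlying channel, against which all of these composite capacities are compared, is the standard Shannon formula C = (1/2) log2(1 + SNR) bits per real channel use (general background, not specific to this correspondence):

```python
import math

def awgn_capacity(snr: float) -> float:
    """Shannon capacity of a real AWGN channel:
    C = 0.5 * log2(1 + SNR) bits per (real) channel use."""
    return 0.5 * math.log2(1 + snr)

print(awgn_capacity(1.0))  # 0.5 bit per use at SNR = 1 (0 dB)
print(awgn_capacity(3.0))  # 1.0 bit per use at SNR = 3
```

Restricting the input to a fixed signal set (e.g., orthogonal or bi-orthogonal codewords) and then quantizing the output with a particular inner decoder can only reduce this figure; the size of that reduction is the capacity loss studied in the correspondence.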