2016
DOI: 10.1109/tit.2015.2496308

The Random Coding Bound Is Tight for the Average Linear Code or Lattice

Abstract: In 1973, Gallager proved that the random-coding bound is exponentially tight for the random code ensemble at all rates, even below expurgation. This result showed that the random-coding exponent fails to achieve the expurgation exponent because of the properties of the random ensemble itself, irrespective of the bounding technique used. It has been conjectured that the same behavior holds for a random ensemble of linear codes. This conjecture is proved in this paper. Additionally, it is shown that this prope…

Cited by 14 publications (10 citation statements)
References 14 publications
“…where p_q = Unif(F_q). The general joint distribution of the codewords resulting from this construction can be found in [48, Theorem 1].…”
Section: Proofs of Theorems 1 and …
confidence: 99%
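The excerpt above describes a construction in which codewords are obtained from a uniformly random generator matrix. A minimal Python sketch of such a random linear code ensemble over F_2 (the function name and parameters are our own illustration, not from [48]):

```python
import itertools
import random

def random_linear_code(k, n, seed=None):
    """Sample a uniformly random k x n generator matrix G over F_2 and
    return the code C = {vG : v in {0,1}^k} as a set of codewords."""
    rng = random.Random(seed)
    # Each entry of G is an independent fair bit, so every generator
    # matrix is equally likely -- the "random linear code" ensemble.
    G = [[rng.randint(0, 1) for _ in range(n)] for _ in range(k)]
    code = set()
    for v in itertools.product([0, 1], repeat=k):
        # Codeword vG, computed coordinate-wise with mod-2 arithmetic.
        word = tuple(sum(v[i] * G[i][j] for i in range(k)) % 2
                     for j in range(n))
        code.add(word)
    return G, code

G, C = random_linear_code(k=3, n=7, seed=0)
# |C| is at most 2**k; it equals 2**k exactly when G has full row rank.
print(len(C))
```

Note that the code is always linear (closed under mod-2 addition and containing the all-zero word), but distinct messages v may collide when G is rank-deficient, which is why the ensemble average, rather than any single draw, appears in the bounds quoted here.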
“…The described communication is just a transmission of a random linear code C = {vG, v ∈ {0,1}^k} through W^ℓ, where the rate of the code is R = k/ℓ ≤ I(W) − ℓ^{−1/2} log³ℓ, so it is separated from the capacity of the channel. It is a well-studied fact that random (linear) codes achieve capacity for BMS, and moreover a tight error exponent was described by Gallager in [Gal65] and analyzed further in [BF02], [For05], [DZF16]. Specifically, one can show P_e ≤ exp(−ℓ E_r(R, W)), where P_e is the probability of decoding error, averaged over the ensemble of all linear codes of rate R, and…”
Section: Part (A): Channel Capacity Theorem
confidence: 99%
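For reference, the random coding exponent E_r(R, W) invoked in this excerpt has the standard Gallager form (this is the textbook expression, not reproduced from the cited papers):

```latex
E_r(R, W) = \max_{0 \le \rho \le 1} \bigl[ E_0(\rho, W) - \rho R \bigr],
\qquad
E_0(\rho, W) = -\log \sum_{y} \Bigl( \sum_{x} Q(x)\, W(y \mid x)^{1/(1+\rho)} \Bigr)^{1+\rho},
```

where Q is the input distribution (uniform for the linear-code ensemble on a binary memoryless symmetric channel). Since E_r(R, W) > 0 for all R below capacity, the quoted bound P_e ≤ exp(−ℓ E_r(R, W)) decays exponentially in the block length ℓ.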
“…The main theorem (Theorem 1.1) follows directly from combining the results described in the previous sections. Namely, by using (6) in (4) with ≤ ℓ^{−1/2+5} from (7), we derive…”
Section: Proving the Main Theorem
confidence: 99%
“…where the rate of the code is R = k/ℓ ≤ I(W) − ℓ^{−1/2} log³ℓ, so it is separated from the capacity of the channel. It is a well-studied fact that random (linear) codes achieve capacity for BMS, and moreover a tight error exponent was described by Gallager in [12] and analyzed further in [3], [10], [6]. Specifically, one can show P_e ≤ exp(−ℓ E_r(R, W)), where P_e is the probability of decoding error, averaged over the ensemble of all linear codes of rate R, and E_r(R, W) is the so-called random coding exponent.…”
Section: Proof: The Described Communication Is Just a Transmission of …
confidence: 99%