2022
DOI: 10.1109/tit.2021.3137057
Universal Decoding for the Typical Random Code and for the Expurgated Code

Cited by 7 publications (10 citation statements)
References 27 publications
“…which is the stochastic version of the well-known universal maximum mutual information (MMI) decoder [24], which has been recently proven to be universal in a typical error exponent sense [25]. The MMI decoder is approached by letting β → ∞ in (10) .…”
Section: Preliminaries
confidence: 99%
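The decoder quoted above can be sketched as follows. This is a hedged illustration, not code from the cited papers: the deterministic MMI rule picks the codeword whose empirical joint type with the channel output maximizes the empirical mutual information, and the stochastic version samples a message with probability proportional to exp(n·β·Î), recovering the deterministic rule as β → ∞. Function names and the toy codebook are illustrative assumptions.

```python
import math
import random
from collections import Counter

def empirical_mutual_information(x, y):
    """Empirical mutual information (in nats) of the joint type of (x, y)."""
    n = len(x)
    joint = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    # I_hat = sum over joint type of p(a,b) * log( p(a,b) / (p(a) p(b)) )
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in joint.items())

def mmi_decode(codebook, y):
    """Deterministic MMI rule: return the index m maximizing I_hat(x_m; y)."""
    return max(range(len(codebook)),
               key=lambda m: empirical_mutual_information(codebook[m], y))

def stochastic_mmi_decode(codebook, y, beta, rng=random):
    """Stochastic version: sample m with probability proportional to
    exp(n * beta * I_hat(x_m; y)); beta -> infinity recovers mmi_decode."""
    n = len(y)
    scores = [n * beta * empirical_mutual_information(x, y) for x in codebook]
    mx = max(scores)  # subtract the max score for numerical stability
    weights = [math.exp(s - mx) for s in scores]
    return rng.choices(range(len(codebook)), weights=weights, k=1)[0]
```

For a toy codebook, the codeword perfectly correlated with the output wins under both rules once β is large.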
“…By studying the behavior of both tails, work in [15] proves concentration in probability. The TRC was shown to be universally achievable with a likelihood mutual-information decoder in [17]. For pairwise-independent ensembles and arbitrary channels, Cocco et al showed in [18] that the probability that a code in the ensemble has an exponent smaller than a lower bound on the TRC exponent is vanishingly small.…”
Section: Introduction
confidence: 99%
“…Later, Merhav has studied error exponents of TRCs for the colored Gaussian channel [22], typical random trellis codes [23], and has derived a Lagrange-dual lower bound to the TRC exponent [24]. Recently, Tamir et al have studied the large deviations behavior around the TRC exponent [35], error exponents of typical random Slepian-Wolf codes [33], and universal decoding for the TRC exponent [34]. More interesting concentration results of the logarithmic error probability of random codes around the error exponent of the TRC have been reported in [36].…”
Section: Introduction
confidence: 99%
“…For the above described code construction of variable-rate binning (that depends on the empirical distribution of the SI), we prove that at least for the random coding error exponent, the penalized MMI decoder is actually optimal among all metrics that depend both on the joint empirical distribution of the codeword and the channel output sequence as well as on the SI possible type 4 . Due to recent findings in [34],…”
Section: Introduction
confidence: 99%
“…The concentration properties of the error exponent of randomly generated codes are studied in [14]. Tamir and Merhav [15] have shown that the typical error exponent can be achieved by a universal decoder ignorant of the channel law. Specifically, they have shown that a stochastic decoder based on the empirical mutual information between the received sequence and each codeword attains the typical error exponent.…”
confidence: 99%