2004
DOI: 10.1109/tit.2004.825040
Csiszár's Cutoff Rates for the General Hypothesis Testing Problem

Abstract: In [6], Csiszár established the concept of forward β-cutoff rate for the error exponent hypothesis testing problem based on independent and identically distributed (i.i.d.) observations. Given β < 0, he defined the forward β-cutoff rate as the number R₀ ≥ 0 that provides the best possible lower bound in the form β(E − R₀) to the type 1 error exponent function for hypothesis testing, where E ≥ 0 is the rate of exponential convergence to 0 of the type 2 error probability. He then demonstrated that the forward β-cutoff rate is given by the Rényi divergence of order 1/(1 − β), …
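Spelled out, with R₀, E, e(E), and D_α used here as generic placeholder symbols rather than the paper's exact notation, the quantities the abstract refers to are the Rényi divergence of order α,
\[
D_\alpha(P\|Q) \;=\; \frac{1}{\alpha-1}\,\log\sum_{x} P(x)^{\alpha}\,Q(x)^{1-\alpha},
\qquad \alpha>0,\ \alpha\neq 1,
\]
and, for β < 0, the forward β-cutoff rate, namely the largest R₀ ≥ 0 satisfying
\[
e(E)\;\ge\;\beta\,(E-R_0)\qquad\text{for all }E\ge 0,
\]
where e(E) denotes the best type 1 error exponent achievable when the type 2 error probability is required to decay at least as fast as e^{-nE} in the number of observations n.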

Cited by 8 publications (7 citation statements)
References 16 publications
“…In fact, as shown in [44, (22)], a binary alphabet suffices if there is a single constraint (i.e., L = 1), which is on the total variation distance. In view of (1), the same conclusion also holds when minimizing the Rényi divergence subject to a constraint on the total variation distance. To set notation, the divergences D(P‖Q), |P − Q|, H_α(P‖Q), D_α(P‖Q) are defined at the end of this section, being consistent with the notation in [35] and [45].…”
Section: Introduction
confidence: 62%
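As a purely numerical companion to this statement, here is a minimal Python sketch that minimizes the Rényi divergence D_α(P‖Q) over pairs of binary distributions subject to a lower bound on the total variation distance. The order α = 0.5, the threshold ε = 0.1, and the brute-force grid search are illustrative assumptions of this sketch and are not taken from [44] or [45].

import math

def renyi_divergence(p, q, alpha):
    # Renyi divergence of order alpha (alpha > 0, alpha != 1) between two
    # finite distributions given as probability lists of equal length.
    s = sum(pi ** alpha * qi ** (1.0 - alpha) for pi, qi in zip(p, q) if pi > 0)
    return math.log(s) / (alpha - 1.0)

def total_variation(p, q):
    # Total variation distance |P - Q| = (1/2) * sum_x |P(x) - Q(x)|.
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Illustrative (assumed) parameters: order alpha = 0.5, constraint |P - Q| >= 0.1.
alpha, eps = 0.5, 0.1
grid = [i / 200 for i in range(1, 200)]  # binary distributions (t, 1 - t)
best = min(
    (renyi_divergence([p, 1 - p], [q, 1 - q], alpha), p, q)
    for p in grid for q in grid
    if total_variation([p, 1 - p], [q, 1 - q]) >= eps
)
print("approx. min D_alpha(P||Q) s.t. |P - Q| >= eps, and the (p, q) attaining it:", best)

The search over binary pairs is only meant to make the quantities in the quoted statement concrete; it does not by itself demonstrate the sufficiency of a binary alphabet claimed there.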
“…The Rényi divergence, introduced in [30], has been studied so far in various information-theoretic contexts (and it was in fact used before it had a name [37]). These include generalized cutoff rates and error exponents for hypothesis testing ([1], [6], [38]), guessing moments ([2], [9]), source and channel coding error exponents ([2], [12], [22], [27], [37]), strong converse theorems for classes of networks [11], strong data processing theorems for discrete memoryless channels [28], bounds for joint source-channel coding [41], and one-shot bounds for information-theoretic problems [46].…”
Section: Introduction
confidence: 99%
“…Moreover, the original Jensen-Rényi divergence (He, Hamza, & Krim, 2003) as well as the identically named divergence (Kluza, 2019) used in this letter are non-f-divergence generalizations of the Jensen-Shannon divergence. Traditionally, Rényi's entropy and divergence have had applications in a wide range of problems, including lossless data compression (Campbell, 1965; Courtade & Verdú, 2014; Rached, Alajaji, & Campbell, 1999), hypothesis testing (Csiszár, 1995; Alajaji, Chen, & Rached, 2004), error probability (Ben-Bassat & Raviv, 2006), and guessing (Arikan, 1996; Verdú, 2015). Recently, the Rényi divergence and its variants (including Sibson's mutual information) were used to bound the generalization error in learning algorithms (Esposito, Gastpar, & Issa, 2020) and to analyze deep neural networks (DNNs) (Wickstrom et al., 2019), variational inference (Li & Turner, 2016), Bayesian neural networks (Li & Gal, 2017), and generalized learning vector quantization (Mwebaze et al., 2010).…”
Section: Prior Work
confidence: 99%
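For concreteness, the Python sketch below computes the ordinary Jensen-Shannon divergence alongside one Rényi-entropy-based analogue built in the same mixture-versus-average pattern. The function jensen_renyi here is a generic illustration of the idea and is not claimed to match the exact definitions of He, Hamza, and Krim (2003) or Kluza (2019).

import math

def shannon_entropy(p):
    # Shannon entropy in nats.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi_entropy(p, alpha):
    # Renyi entropy of order alpha (alpha > 0, alpha != 1), in nats.
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

def jensen_shannon(p, q):
    # JS(P, Q) = H((P + Q)/2) - (H(P) + H(Q)) / 2 with Shannon entropy H.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

def jensen_renyi(p, q, alpha):
    # Hypothetical Renyi-flavored analogue: Renyi entropy of the mixture minus
    # the average Renyi entropy of the components (illustration only).
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return renyi_entropy(m, alpha) - 0.5 * (renyi_entropy(p, alpha) + renyi_entropy(q, alpha))

p, q = [0.2, 0.8], [0.6, 0.4]
print(jensen_shannon(p, q), jensen_renyi(p, q, alpha=2.0))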
“…Moreover, the original Jensen-Rényi divergence [18] as well as the identically named divergence [24] used in this paper are non-f-divergence generalizations of the Jensen-Shannon divergence. Traditionally, Rényi's entropy and divergence have had applications in a wide range of problems, including lossless data compression [7], [9], hypothesis testing [12], [2], error probability [6], and guessing [4], [42]. Recently, the Rényi divergence and its variants (including Sibson's mutual information) were used to bound the generalization error in learning algorithms [13], and to analyze deep neural networks (DNNs) [45], variational inference [27], Bayesian neural networks [26], and generalized learning vector quantization [31].…”
Section: Introduction
confidence: 99%