Asymptotically Optimal Tests for Finite Markov Chains (1971)
DOI: 10.1214/aoms/1177693067

Cited by 35 publications (32 citation statements). References: 0 publications.
“…To any ergodic FSMC we associate the mutual information cost function (5) and define its capacity as (6) In the definitions (5) and (6), the terms and , respectively, denote the mutual information between and the pair when , and the conditional mutual information (see [8]) between and the pair given , where is an -valued random variable (r.v.) whose marginal distribution is given by the stationary measure , is an -valued r.v.…”
Section: B. Capacity of Ergodic FSMCs (confidence: 99%)
“…Using known results on binary hypothesis tests for irreducible Markov chains (see [5], [24] and [10, pp. 72-82]) it is possible to show that a decoder can be chosen in such a way that, asymptotically in , its type-error probability achieves the exponent (recall (15)) while its type-error probability is vanishing.…”
Section: Binary Hypothesis Test for the Confirmation Phase (confidence: 99%)
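The exponent referenced in this citation statement is, for testing between two irreducible Markov chains, the Kullback-Leibler divergence rate between them (the Markov analogue of Stein's lemma): D(P‖Q) = Σ_i π_i Σ_j P_ij log(P_ij / Q_ij), where π is the stationary distribution of P. A minimal sketch of that computation follows; the function names and the NumPy-based eigenvector approach are illustrative choices, not taken from the cited works, and natural logarithms are used here.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an irreducible chain P:
    the left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = np.abs(pi)
    return pi / pi.sum()

def kl_rate(P, Q):
    """KL divergence rate D(P||Q) = sum_i pi_i sum_j P_ij log(P_ij / Q_ij).
    Assumes Q_ij > 0 wherever P_ij > 0 (otherwise the rate is infinite)."""
    pi = stationary_distribution(P)
    mask = P > 0
    ratio = np.where(mask, P, 1.0) / np.where(Q > 0, Q, 1.0)
    terms = np.where(mask, P * np.log(ratio), 0.0)
    return float(np.sum(pi[:, None] * terms))
```

For identical chains the rate is zero, and it is strictly positive for distinct irreducible chains, which is what makes an exponentially decaying error probability possible.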
“…Boza [14], Davisson, Longo, Sgarro [15], Natarajan [16], Csiszár, Cover, Choi [17], and Csiszár [18]. (Throughout this paper, the base of the logarithm is |Σ|.)…”
Section: B. Markov Type (confidence: 99%)
“…In [30] error exponent is computed for testing between two different Markov sources (without additive noise in the observations); for applications and extensions of this result in Markov source-coding see [31], [32], [33], [34]. Error exponents for HMMs are considered in [23] and [24], as detailed above.…”
Section: Introduction (confidence: 99%)