2015
DOI: 10.1109/tit.2015.2432094

A Randomized Algorithm for the Capacity of Finite-State Channels

Abstract: Inspired by ideas from the field of stochastic approximation, we propose a randomized algorithm to compute the capacity of a finite-state channel with a Markovian input. When the mutual information rate of the channel is concave with respect to the chosen parameterization, the proposed algorithm is shown to converge to the channel capacity almost surely, with a derived convergence rate. We also discuss the convergence behavior of the algorithm without the concavity assumption. Index Terms—Finite-sta…

Cited by 22 publications (27 citation statements). References 57 publications.
“…On the other hand, as elaborated in [29], such a desired property, albeit established for a few special cases [21, 29], is not true in general. The concavity established in the previous section allows us to numerically compute C^{(1)}(S_0, ε) using the algorithm in [16]. The randomized algorithm proposed in [16] iteratively computes {θ_n} in the following way:
$$\theta_{n+1} = \begin{cases} \theta_n + a_n g_n^b(\theta_n), & \text{if } \theta_n + a_n g_n^b(\theta_n) \in [0,1],\\ \theta_n, & \text{otherwise,} \end{cases}$$
where g_n^b(θ_n) is a simulator for the derivative of I(X; Y) with respect to θ (for details, see [16]).…”
Section: Numerical Evaluation of C^{(1)}(S_0, ε)
confidence: 99%
“…The concavity established in the previous section allows us to numerically compute C^{(1)}(S_0, ε) using the algorithm in [16]. The randomized algorithm proposed in [16] iteratively computes {θ_n} in the following way:
$$\theta_{n+1} = \begin{cases} \theta_n + a_n g_n^b(\theta_n), & \text{if } \theta_n + a_n g_n^b(\theta_n) \in [0,1],\\ \theta_n, & \text{otherwise,} \end{cases}$$
where g_n^b(θ_n) is a simulator for the derivative of I(X; Y) with respect to θ (for details, see [16]). The author shows that {θ_n} converges to the first-order capacity-achieving distribution if I(X; Y) is concave with respect to θ, which has been proven in Theorem 4.1.…”
Section: Numerical Evaluation of C^{(1)}(S_0, ε)
confidence: 99%
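For concreteness, the recursion quoted above is ordinary truncated stochastic approximation on [0, 1]. Below is a minimal Python sketch under that reading; `grad_sim` is a hypothetical stand-in for the simulator g_n^b(θ_n) of [16] (which in the actual algorithm is produced by simulating the channel), and the step sizes a_n = 1/n are one standard choice, not necessarily the one used in [16].

```python
import random

def truncated_sa(grad_sim, theta0=0.5, n_iters=10_000):
    """Truncated stochastic approximation on [0, 1].

    grad_sim(theta) plays the role of the simulator g_n^b(theta_n):
    a noisy estimate of dI(X;Y)/dtheta. The step is taken only when
    the update stays inside [0, 1]; otherwise theta_{n+1} = theta_n.
    """
    theta = theta0
    for n in range(1, n_iters + 1):
        a_n = 1.0 / n  # standard SA steps: sum a_n = inf, sum a_n^2 < inf
        candidate = theta + a_n * grad_sim(theta)
        if 0.0 <= candidate <= 1.0:  # accept only feasible updates
            theta = candidate
    return theta

# Toy usage: a hypothetical noisy gradient whose root is theta* = 0.3.
theta_star = truncated_sa(lambda t: (0.3 - t) + random.gauss(0.0, 0.1))
print(theta_star)  # close to 0.3
```

When I(X; Y) is concave in θ, iterates of this form converge almost surely to the maximizer, consistent with the convergence claim quoted above.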
“…Taking advantage of the pathwise continuity of a Brownian motion, our sampling theorems, Theorems 2.1 and 2.3, naturally connect continuous-time Gaussian memory/feedback channels with their discrete-time counterparts, whose outputs are precisely sampled outputs of the original continuous-time Gaussian channel. In discrete time, the Shannon-McMillan-Breiman theorem provides an effective way to approximate the entropy rate of a stationary ergodic process, and the numerical computation and optimization of the mutual information of discrete-time channels using this theorem and its extensions have been extensively studied (see, e.g., [32, 33] and references therein). This suggests that our sampling theorems may serve as a bridge for capitalizing on relevant discrete-time results to numerically compute and optimize the mutual information of continuous-time Gaussian channels. In short, despite the numerous technical barriers that one needs to overcome, we believe that in the long run the sampling theorems can help us numerically compute the mutual information and capacity of continuous-time Gaussian channels.…”
Section: Approximation Theorems
confidence: 99%
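The Shannon-McMillan-Breiman theorem states that for a stationary ergodic process, −(1/n) log p(X_1, …, X_n) converges almost surely to the entropy rate, which is what makes the approximation mentioned above effective. Here is a minimal Python sketch for a toy two-state Markov chain (a hypothetical example, not taken from the cited works), comparing the SMB estimate against the closed-form Markov entropy rate:

```python
import math
import random

# Two-state stationary Markov chain: P[i][j] = Pr(X_{t+1}=j | X_t=i).
P = [[0.9, 0.1],
     [0.4, 0.6]]
PI = [0.8, 0.2]  # stationary distribution (solves pi P = pi)

def smb_entropy_rate_estimate(n=200_000, seed=0):
    """Estimate the entropy rate via -(1/n) log p(X_1^n)."""
    rng = random.Random(seed)
    x = 0 if rng.random() < PI[0] else 1  # start from stationarity
    log_p = math.log(PI[x])
    for _ in range(n - 1):
        nxt = 0 if rng.random() < P[x][0] else 1
        log_p += math.log(P[x][nxt])
        x = nxt
    return -log_p / n  # -> entropy rate almost surely (SMB)

def exact_entropy_rate():
    """Closed form for a Markov chain: H = sum_i pi_i * H(P[i])."""
    h_row = lambda row: -sum(p * math.log(p) for p in row if p > 0)
    return sum(pi * h_row(row) for pi, row in zip(PI, P))

print(smb_entropy_rate_estimate(), exact_entropy_rate())  # nats; should agree
```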
“…if such a pair (i, j) exists and is unique; otherwise, an error is declared. Moreover, an error will be declared if the chosen codeword does not satisfy the power constraint in (33). Analysis of the probability of error: Now, for fixed T, ε > 0, define…”
Section: Now Define a Truncated Version of Y as Follows
confidence: 99%
“…The presence of input and output memory in the channel, however, makes the problem extremely difficult: computing the capacity of channels with memory is a long-standing open problem in information theory. One of the most effective strategies for attacking such a difficult problem is the so-called Markov approximation scheme, which has been extensively exploited over the past decades for computing the capacity of families of finite-state channels (see [1, 14, 28] and references therein). Roughly speaking, the Markov approximation scheme says that, instead of maximizing the mutual information over general input processes, one can do so over Markovian input processes of order m to obtain the so-called m-th order Markov capacity.…”
Section: Introduction
confidence: 99%
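To illustrate the objects the scheme optimizes over, here is a short Python sketch of a binary Markovian input process of order m; the parameter θ (one conditional probability per length-m context) is what the m-th order Markov capacity maximizes the mutual information over. The names and parameterization here are illustrative assumptions, not taken from [1, 14, 28].

```python
import itertools
import random

def sample_markov_input(theta, m, n, seed=0):
    """Sample a binary input process X_1..X_n that is Markov of order m.

    theta maps each length-m context (a tuple of past bits) to
    Pr(X_t = 1 | X_{t-m}^{t-1} = context). Maximizing I(X;Y) over all
    such theta yields the m-th order Markov capacity.
    """
    rng = random.Random(seed)
    ctx = tuple(rng.randint(0, 1) for _ in range(m))  # arbitrary initial context
    xs = list(ctx)
    for _ in range(n - m):
        bit = 1 if rng.random() < theta[ctx] else 0
        xs.append(bit)
        ctx = ctx[1:] + (bit,)  # slide the length-m context window
    return xs

# Example: order m = 2, so theta has 2^m = 4 free parameters.
m = 2
theta = {c: 0.5 for c in itertools.product((0, 1), repeat=m)}  # i.i.d. uniform start
print(sample_markov_input(theta, m, 20))
```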