Proceedings. 1998 IEEE International Symposium on Information Theory (Cat. No.98CH36252)
DOI: 10.1109/isit.1998.708627

Low complexity sequential lossless coding for piecewise stationary memoryless sources

Abstract: Three strongly sequential, lossless compression schemes, one with linearly growing per-letter computational complexity, and two with fixed per-letter complexity, are presented and analyzed for memoryless sources with abruptly changing statistics. The first method, which improves on Willems' weighting approach, asymptotically achieves a lower bound on the redundancy, and hence is optimal. The second scheme achieves redundancy of O(log N/N) when the transitions in the statistics are large, and O(log …
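The first scheme builds on Willems' idea of weighting over possible transition times. As a rough illustration only, and not the paper's exact algorithm, the sketch below mixes Krichevsky-Trofimov (KT) estimators over hypotheses about the most recent transition in a binary piecewise i.i.d. sequence; the per-letter work grows linearly with time, matching the "linearly growing per-letter complexity" mentioned in the abstract. The switching prior gamma and the binary alphabet are assumptions made for brevity.

```python
# Minimal sketch (illustrative, not the paper's exact scheme) of a Willems-style
# sequential probability assignment for a piecewise i.i.d. binary source:
# mix KT estimators over all hypotheses "the last transition happened at time t".

def kt_update(counts, symbol):
    """Sequential KT probability of `symbol` given past (zeros, ones) counts."""
    zeros, ones = counts
    p_one = (ones + 0.5) / (zeros + ones + 1.0)
    return p_one if symbol == 1 else 1.0 - p_one

def piecewise_prob(sequence, gamma=0.05):
    """Return the coding probability assigned to each symbol of `sequence`."""
    # Each hypothesis: [weight, [zeros, ones]] for the segment started at some time.
    hypotheses = [[1.0, [0, 0]]]
    per_symbol_probs = []
    for x in sequence:
        # Mixture probability of the next symbol under all segment hypotheses.
        total_w = sum(w for w, _ in hypotheses)
        p = sum(w * kt_update(c, x) for w, c in hypotheses) / total_w
        per_symbol_probs.append(p)
        # Update every hypothesis with the observed symbol.
        for h in hypotheses:
            w, c = h
            h[0] = w * kt_update(c, x)
            c[x] += 1
        # Move a gamma fraction of the mass to a fresh segment starting now (a transition).
        new_mass = gamma * sum(w for w, _ in hypotheses)
        for h in hypotheses:
            h[0] *= (1.0 - gamma)
        hypotheses.append([new_mass, [0, 0]])
    return per_symbol_probs

# Usage: the ideal code length in bits is -sum(log2(p)) over the returned probabilities.
```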

Cited by 15 publications (32 citation statements). References 15 publications.
“…Even though the results in Merhav [45], Willems [46], and Shamir and Merhav [47] are for p.i.i.d. sources, it is easy to check that if (1) holds, then all of their results go through.…”
Section: E. Coding for Piecewise-Constant Parameters (mentioning)
confidence: 87%
“…The space complexities of the two algorithms grow more slowly than the time complexities, which are and , respectively. In [47], Shamir and Merhav describe an algorithm giving…”
Section: E. Coding for Piecewise-Constant Parameters (mentioning)
confidence: 99%
“…The idea of slowly decreasing the switching rate was considered in [12] in the context of source coding, and later analysed for expert switching in [10]; we saw in Section 3.2 that it also underlies Follow the Leading History of [7]. It results in tracking regret bounds that are almost as good as the bounds for constant α with optimally tuned α.…”
Section: Theorem 4, The Worst-Case Adaptive Regret of Fixed Share With … (mentioning)
confidence: 99%
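The "slowly decreasing switching rate" idea referred to above can be illustrated with a Fixed Share-style weight update whose switching rate decays over time. The sketch below is a minimal illustration and not code from [7], [10], or [12]; the decay schedule alpha_t = 1/(t+2), the learning rate eta, and the loss-matrix format are assumptions made for the example.

```python
# Minimal sketch (illustrative assumption, not from the cited papers) of Fixed Share
# expert tracking with a switching rate that decays over rounds.
import math

def fixed_share_decaying(loss_matrix, eta=1.0):
    """loss_matrix[t][i] is the loss of expert i at round t; returns per-round mixture losses."""
    n = len(loss_matrix[0])
    weights = [1.0 / n] * n
    mixture_losses = []
    for t, losses in enumerate(loss_matrix):
        # Loss suffered by the weighted mixture this round.
        mixture_losses.append(sum(w * l for w, l in zip(weights, losses)))
        # Exponential weight update on the observed losses, then renormalize.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Fixed Share step with a switching rate that decreases over time.
        alpha = 1.0 / (t + 2)
        weights = [(1.0 - alpha) * w + alpha / n for w in weights]
    return mixture_losses
```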
“…In this paper, we show that the performance weighting approach used in [1], [5] can be generalized to yield algorithms that can efficiently compete with the best partition over all partitions for more general loss functions. We demonstrate that the universal probability assignment introduced in [1], [5] can be effectively merged into the prediction framework by using the methodology introduced in [2]. Although we investigate the continuous class of linear regressors or that of a finite number of adaptive filters as our competition class and use the square error loss, the methodology introduced in this paper can be extended to arbitrary competition classes, such as that of certain nonlinear regressors considered in [6], [7] or to more general loss functions as in [8].…”
Section: Introduction (mentioning)
confidence: 99%