1999
DOI: 10.1109/18.746775
Multiterminal source coding with high resolution

Abstract: We consider separate encoding and joint decoding of correlated continuous information sources, subject to a difference distortion measure. We first derive a multiterminal extension of the Shannon lower bound for the rate region. Then we show that this Shannon outer bound is asymptotically tight for small distortions. These results imply that the loss in the sum of the coding rates due to the separation of the encoders vanishes in the limit of high resolution. Furthermore, lattice quantizers followed by Slepian…
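The abstract's scheme (separate lattice quantizers followed by Slepian-Wolf coding) can be illustrated numerically. The following is a minimal sketch, not taken from the paper: it assumes two correlated Gaussian sources Y = X + N, applies the same scalar uniform quantizer at each terminal, and compares the sum of the marginal entropy-coded rates against the joint entropy, which is the sum rate an ideal Slepian-Wolf stage would achieve. The savings approach I(X;Y) at high resolution.

```python
import math
import random
from collections import Counter

def entropy_bits(counts, n):
    """Empirical entropy in bits from a Counter of symbol counts."""
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
n = 200_000
step = 0.25  # quantizer step, small relative to the signal and noise spread

# Correlated Gaussian sources: Y is a noisy observation of X.
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [x + random.gauss(0.0, 0.5) for x in xs]

# Separate uniform (scalar lattice) quantization at each terminal.
qx = [round(x / step) for x in xs]
qy = [round(y / step) for y in ys]

h_x = entropy_bits(Counter(qx), n)            # rate of terminal 1 alone
h_y = entropy_bits(Counter(qy), n)            # rate of terminal 2 alone
h_xy = entropy_bits(Counter(zip(qx, qy)), n)  # Slepian-Wolf sum rate

# Savings from Slepian-Wolf coding ≈ I(X;Y) = 0.5*log2(1 + 1/0.25) ≈ 1.16 bits
savings = h_x + h_y - h_xy
print(f"separate: {h_x + h_y:.2f}  SW sum rate: {h_xy:.2f}  savings: {savings:.2f}")
```

The quantizers never communicate; only the (idealized) entropy-coding stage exploits the correlation, which is exactly the structure the abstract describes.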

Cited by 93 publications (87 citation statements). References 27 publications.
“…This approach, with vector quantization performed on blocks of each of the sources, is optimal at all rates for jointly Gaussian sources and MSE distortion [4]. This approach is also optimal in the asymptotic regime of both large block length and high resolution [5]. The general lossy multiterminal source coding problem for large block length but finite rates, whether for discrete or continuous alphabet sources, is open.…”
Section: B. Related Work
confidence: 99%
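The quoted passage notes that this approach is optimal at all rates for jointly Gaussian sources under MSE. A short reminder of the single-source analogue (a hedged sketch, not from the cited works): for a Gaussian source the Shannon lower bound R(D) ≥ h(X) − ½·log2(2πeD) holds with equality, coinciding with the Gaussian rate-distortion function ½·log2(σ²/D).

```python
import math

def shannon_lower_bound(h_x, d):
    """Shannon lower bound for MSE distortion: R(D) >= h(X) - 0.5*log2(2*pi*e*D)."""
    return h_x - 0.5 * math.log2(2 * math.pi * math.e * d)

var = 1.0
d = 0.01
h_gauss = 0.5 * math.log2(2 * math.pi * math.e * var)  # differential entropy of N(0, var)
slb = shannon_lower_bound(h_gauss, d)
rd = 0.5 * math.log2(var / d)  # Gaussian rate-distortion function
print(f"SLB: {slb:.4f} bits, Gaussian R(D): {rd:.4f} bits")  # identical: the bound is tight
```

The multiterminal extension of this bound is precisely what the surveyed paper derives and proves asymptotically tight at small distortion.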
“…Later, an upper bound on the rate loss due to the unavailability of the side information at the encoder was found in [5], which also proved that for power-difference distortion measures and smooth source probability distributions, this rate loss vanishes in the limit of small distortion. A similar high-resolution result was obtained in [13] for distributed coding of several sources without side information, also from an information-theoretic perspective, that is, for arbitrarily large dimension. In [14] (unpublished), it was shown that tessellating quantizers followed by Slepian-Wolf coders are asymptotically optimal in the limit of small distortion and large dimension.…”
Section: Introduction
confidence: 51%
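The rate loss discussed above is not merely small in one well-known special case: for jointly Gaussian sources and MSE distortion, the Wyner-Ziv rate with side information only at the decoder equals the conditional rate-distortion function with side information at both ends, so the loss is exactly zero. The sketch below (an illustration under that classical Gaussian assumption, not code from the cited works) evaluates that common rate curve.

```python
import math

# Jointly Gaussian source X and side information Y with correlation rho.
rho = 0.8
var_x = 1.0
cond_var = var_x * (1 - rho**2)  # Var(X|Y) for jointly Gaussian (X, Y)

def rate(d):
    """Gaussian/MSE rate: equals both the conditional rate-distortion function
    R_{X|Y}(D) and the Wyner-Ziv rate, so the encoder's lack of side
    information costs nothing in this case."""
    return max(0.0, 0.5 * math.log2(cond_var / d))

for d in (0.2, 0.05, 0.01):
    print(f"D={d}: {rate(d):.3f} bits (zero rate loss in the Gaussian case)")
```

For general sources the loss is positive but, as the quoted passage states, vanishes in the small-distortion limit for power-difference distortion measures.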
“…The nondistributed case was studied in [40][41][42], and [7,[43][44][45]8] analyzed the distributed case from an information-theoretic point of view. Using Gaussian statistics and Mean-Squared Error (MSE) as a distortion measure, [13] proved that distributed coding of two noisy observations without side information can be carried out with a performance close to that of joint coding and denoising, in the limit of small distortion and large dimension. Most of the operational work on distributed coding of noisy sources, that is, for a fixed dimension, deals with quantization design for a variety of settings [46][47][48][49], but does not consider the characterization of such quantizers at high rates or transforms.…”
Section: Introduction
confidence: 99%
“…Gish and Pierce [10] tell us that uniform quantizers followed by entropy encoders are nearly optimal in a single-terminal reproduction-oriented scenario, a (high-rate) result corroborated in [14] even in the vector quantization (VQ) case: quantization points form a lattice, and, even were collaborative quantization among the sources possible, the optimal scheme would not use it. Placing this insight in a multiterminal setting, Zamir and Berger [32] show that uniform (actually lattice) quantizers followed by SW encoders are nearly optimal for reproduction purposes, in the high-resolution regime. The main question we address in this paper immediately arises:…”
Section: Introduction
confidence: 99%
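The Gish-Pierce near-optimality invoked above is easy to reproduce empirically. The sketch below (a standard high-rate demonstration, assuming a unit-variance Gaussian source and an ideal entropy coder) quantizes uniformly, measures the empirical rate and MSE, and compares against the rate-distortion function; the gap approaches the classic ½·log2(πe/6) ≈ 0.255 bits.

```python
import math
import random
from collections import Counter

random.seed(1)
n = 500_000
step = 0.1  # small step: high-resolution regime
xs = [random.gauss(0.0, 1.0) for _ in range(n)]

# Uniform scalar quantization followed by (ideal) entropy coding.
q = [round(x / step) for x in xs]
counts = Counter(q)
rate = -sum(c / n * math.log2(c / n) for c in counts.values())

# Empirical MSE and the Gaussian rate-distortion bound at that distortion.
mse = sum((qi * step - x) ** 2 for qi, x in zip(q, xs)) / n
rd = 0.5 * math.log2(1.0 / mse)

gap = rate - rd
print(f"rate: {rate:.3f}, R(D): {rd:.3f}, gap: {gap:.3f} bits")  # gap ≈ 0.255
```

Zamir and Berger's contribution, as the quote says, is that this picture survives the move to the multiterminal setting once the entropy coders are replaced by Slepian-Wolf coders.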