This paper studies the zero-delay source-channel coding problem, and specifically the problem of obtaining the vector transformations that optimally map between the m-dimensional source space and the k-dimensional channel space, under a given transmission power constraint and for the mean square error distortion. The functional properties of the cost are studied, and the necessary conditions for optimality of the encoder and decoder mappings are derived. An optimization algorithm that imposes these conditions iteratively, in conjunction with the noisy channel relaxation method to mitigate poor local minima, is proposed. The numerical results show strict improvement over prior methods. The numerical approach is extended to the scenario of source-channel coding with decoder side information. The resulting encoding mappings are shown to be continuous relatives of, and in fact to subsume as a special case, the Wyner-Ziv mappings encountered in digital distributed source coding systems.

A well-known result in information theory pertains to the linearity of the optimal encoding and decoding mappings in the scalar Gaussian source and channel setting, at all channel signal-to-noise ratios (CSNRs). In this paper, the linearity of optimal coding beyond the Gaussian source and channel is considered, and the necessary and sufficient condition for linearity of the optimal mappings, given a noise (or source) distribution and a specified total power constraint, is derived. It is shown that the Gaussian source-channel pair is unique in the sense that it is the only source-channel pair for which the optimal mappings are linear at more than one CSNR value. Moreover, the asymptotic linearity of the optimal mappings is shown for low CSNR if the channel is Gaussian, regardless of the source, and, at the other extreme, for high CSNR if the source is Gaussian, regardless of the channel. The extension to the vector setting is also considered, where, besides the conditions inherited from the scalar case, additional constraints must be satisfied to ensure linearity of the optimal mappings.
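The iterative imposition of these necessary conditions lends itself to a simple numerical illustration. The following is a minimal sketch under illustrative assumptions stated in the comments (scalar unit-variance Gaussian source, AWGN channel, finite grids, and a Lagrange multiplier in place of an explicit power constraint); it is not the paper's implementation, and the noisy channel relaxation schedule is omitted.

```python
import numpy as np

# Minimal sketch of the alternating optimization that imposes the necessary
# optimality conditions for zero-delay source-channel coding under MSE.
# Illustrative assumptions: a scalar unit-variance Gaussian source, an AWGN
# channel, finite grids for source/channel-input/channel-output values, and a
# power constraint handled through a fixed Lagrange multiplier lam. The noisy
# channel relaxation schedule (gradually shrinking an inflated noise variance
# to escape poor local minima) is omitted for brevity.

xs = np.linspace(-4, 4, 101)                    # source grid
px = np.exp(-xs**2 / 2); px /= px.sum()         # source pmf (discretized Gaussian)
ts = np.linspace(-4, 4, 101)                    # candidate channel inputs
ys = np.linspace(-8, 8, 201)                    # channel output grid
sigma_n, lam = 0.5, 0.1                         # noise std, power multiplier

# Channel transition probabilities p(y | t) for every candidate input t.
pyt = np.exp(-(ys[None, :] - ts[:, None])**2 / (2 * sigma_n**2))
pyt /= pyt.sum(axis=1, keepdims=True)

enc = xs.copy()                                 # initial encoder: identity map

for _ in range(50):
    # Decoder update: MMSE estimator g(y) = E[X | y] for the current encoder.
    idx = np.abs(enc[:, None] - ts[None, :]).argmin(axis=1)
    pxy = px[:, None] * pyt[idx, :]             # joint p(x, y)
    dec = (xs[:, None] * pxy).sum(axis=0) / (pxy.sum(axis=0) + 1e-15)

    # Encoder update: for each x, pick the channel input minimizing the
    # expected distortion plus the Lagrangian power penalty lam * t^2.
    exp_dist = ((xs[:, None, None] - dec[None, None, :])**2 * pyt[None, :, :]).sum(axis=2)
    enc = ts[(exp_dist + lam * ts[None, :]**2).argmin(axis=1)]

idx = np.abs(enc[:, None] - ts[None, :]).argmin(axis=1)
mse = (px[:, None] * pyt[idx, :] * (xs[:, None] - dec[None, :])**2).sum()
power = (px * enc**2).sum()
print(f"approx. MSE = {mse:.4f}, average power = {power:.4f}")
```

For this Gaussian source-channel pair, the converged encoder is expected to remain close to a linear (scaling) map, consistent with the linearity discussion above.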
This paper analyzes information disclosure problems that originated in economics through the lens of information theory. Such problems are radically different from conventional communication paradigms in information theory, since they involve different objectives for the encoder and the decoder, which are aware of this mismatch and act accordingly. This leads, in our setting, to a hierarchical communication game in which the transmitter announces an encoding strategy with full commitment, and its distortion measure depends on a private information sequence whose realization is available at the transmitter. The receiver decides on the decoding strategy that minimizes its own distortion, based on the announced encoding map and the statistics. Three problem settings are considered, focusing on quadratic distortion measures and a jointly Gaussian source and private information: compression, communication, and the simple equilibrium conditions without any compression or communication. The equilibrium strategies and the associated costs are characterized. The analysis is then extended to the receiver side information setting, and the major changes in the structure of the optimal strategies are identified. Finally, an extension of the results to the broader context of decentralized stochastic control is presented.
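The hierarchical (commit-then-respond) structure can be sketched numerically. In the toy Monte Carlo sketch below, the transmitter commits to an affine message, the receiver best-responds with its own MMSE estimate, and the transmitter searches over its commitments; the bias-type transmitter cost and the noiseless affine message are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy Monte Carlo sketch of the hierarchical (Stackelberg) disclosure game:
# the transmitter commits to an encoding map, the receiver best-responds with
# its own MMSE estimate, and the transmitter optimizes over its commitment.
# Illustrative assumptions (not taken from the paper): scalar jointly Gaussian
# source x and private information theta, a noiseless affine message
# u = a*x + b*theta, and a bias-type transmitter cost E[(xhat - x - theta)^2].

rng = np.random.default_rng(0)
n, rho = 50_000, 0.3                            # sample size and corr(x, theta)
cov = np.array([[1.0, rho], [rho, 1.0]])
x, theta = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

def costs(a, b):
    """Receiver and transmitter costs when the committed message is u = a*x + b*theta."""
    u = a * x + b * theta
    var_u = np.var(u)
    # Receiver best response: MMSE estimate of x from u (linear is exact for Gaussians).
    xhat = (np.cov(x, u)[0, 1] / var_u) * u if var_u > 1e-12 else np.zeros_like(x)
    return np.mean((x - xhat) ** 2), np.mean((xhat - x - theta) ** 2)

# The committing transmitter searches over its announced (affine) strategies.
grid = np.linspace(-2.0, 2.0, 21)
cost, a_opt, b_opt = min((costs(a, b)[1], a, b) for a in grid for b in grid)
print(f"transmitter-optimal commitment: a={a_opt:.2f}, b={b_opt:.2f}, cost={cost:.4f}")
```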
The two most prevalent notions of common information (CI) are due to Wyner and Gács-Körner, and both notions can be stated as two distinct characteristic points in the lossless Gray-Wyner region. Although the information theoretic characterizations of these two CI quantities can be easily evaluated for random variables with infinite entropy (e.g., continuous random variables), their operational significance applies only to the lossless framework. The primary objective of this paper is to generalize these two CI notions to the lossy Gray-Wyner network, thereby extending the theoretical foundation to general sources and distortion measures. We begin by deriving a single-letter characterization for the lossy generalization of Wyner's CI, defined as the minimum rate on the shared branch of the Gray-Wyner network while maintaining the minimum sum transmit rate when the two decoders reconstruct the sources subject to individual distortion constraints. To demonstrate its use, we compute the CI of bivariate Gaussian random variables for the entire regime of distortions. We then similarly generalize Gács and Körner's definition to the lossy framework. The latter half of the paper focuses on the tradeoff between the total transmit rate and the receive rate in the Gray-Wyner network. We show that this tradeoff yields a contour of points on the surface of the Gray-Wyner region which passes through both the Wyner and Gács-Körner operating points, thereby providing a unified framework for understanding the different notions of CI. We further show that this tradeoff generalizes the two notions of CI to the excess sum transmit rate and receive rate regimes, respectively.
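As a point of reference for the lossy generalizations above, the lossless limits admit well-known closed forms for a bivariate Gaussian pair with correlation coefficient ρ (these values are standard and are not restated in the abstract):

\[
C_{\text{Wyner}}(X;Y) = \tfrac{1}{2}\log\frac{1+|\rho|}{1-|\rho|},
\qquad
C_{\text{GK}}(X;Y) = 0 \quad \text{for } |\rho| < 1,
\]

so the first expression offers a natural consistency check on the lossy Wyner CI in the low-distortion regime, while the Gács-Körner CI is degenerate for jointly Gaussian sources unless they are perfectly correlated.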