We consider the problem of communication over a network containing a hidden and malicious adversary that can control a subset of network resources and aims to disrupt communications. We focus on omniscient node-based adversaries, i.e., adversaries that can control a subset of nodes and know the message, the network code, and the packets on all links. Characterizing information-theoretically optimal communication rates as a function of network parameters and bounds on the adversarially controlled resources is in general open, even for unicast (single source, single destination) problems. In this work we characterize the information-theoretically optimal randomized capacity of such problems, i.e., under the assumption that the source node shares (an asymptotically negligible amount of) independent common randomness with each network node a priori (for instance, as part of network design). We propose a novel computationally efficient communication scheme whose rate matches a natural information-theoretic "erasure outer bound" on the optimal rate. Our schemes require no prior knowledge of network topology and can be implemented in a distributed manner as an overlay on top of classical distributed linear network coding.

I. INTRODUCTION

Network coding allows routers in networks to mix packets. This helps attain information-theoretically optimal throughput for a variety of network communication problems, in particular for network multicast [1], [2], often via linear coding operations [3], [4]. Throughput-optimal network codes can be efficiently designed [5], and may even be implemented distributedly [6]. Also, network-coded communication is more robust to packet losses/link failures [2], [4], [7]. However, when the network contains malicious nodes/links, due to the mixing nature of network coding, even a single erroneous packet can cause all packets at the receivers to be corrupted.
This motivates the problem of network error correction, which was first studied by Cai and Yeung in [8], [9]. They considered an omniscient adversary capable of injecting errors on any z links, and showed that C − 2z was both an inner and an outer bound on the optimal throughput, where C is the network-multicast min-cut. Jaggi et al. [10] proposed efficient network codes to achieve this rate. In parallel, Kötter and Kschischang [11] developed a different and elegant approach based on subspace/rank-metric codes to achieve the same rate. Furthermore, when the adversary is "limited-view" in some manner (for instance, the adversary can observe only a sufficiently small subset of transmissions, or is computationally bounded, or is "causal"/cannot predict future transmissions), a higher rate is possible, and in fact [10], [12], [13] proposed a suite of network codes that achieve C − z, all of which meet the network Hamming bound in [8]. A more refined adversary model is considered in [14]. Although communication in the presence of link-based adversaries is now relatively well-understood, problems where the adversaries are "node-based" seem to be much more challenging. ...
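The packet-mixing behavior described above, and the reason a single corrupted packet can pollute every received combination, can be illustrated with a toy random linear network code. The sketch below works over GF(2) purely for readability (practical codes typically use larger fields such as GF(2^8)); all function names and parameters here are illustrative, not from the papers cited above.

```python
import random

def encode(packets, rng):
    # A coded packet is a random GF(2) linear combination (XOR) of the
    # source packets, shipped together with its coefficient vector.
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[rng.randrange(len(packets))] = 1  # avoid the useless all-zero packet
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def decode(coded, k):
    # Gaussian elimination over GF(2): the k source packets are recovered
    # once k linearly independent coded packets have arrived.
    rows = [(list(c), p) for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None  # not enough independent combinations yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([x ^ y for x, y in zip(rows[r][0], rows[col][0])],
                           rows[r][1] ^ rows[col][1])
    return [rows[i][1] for i in range(k)]

rng = random.Random(7)
source = [0b1010, 0b0111, 0b1100]                # three source packets
coded = [encode(source, rng) for _ in range(6)]  # intermediate nodes re-mix
recovered = decode(coded, 3)                     # equals source once 3 independent mixes arrive
```

Because every received payload is a combination of many source packets, XOR-ing a single adversarial error into any one coded packet corrupts every source packet that elimination touches, which is exactly the fragility the error-correction schemes above are designed to remove.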
This paper provides a precise error analysis for the maximum likelihood estimate â(u) of the parameter a given samples u = (u1, . . . , un) drawn from a nonstationary Gauss-Markov process Ui = aUi−1 + Zi, i ≥ 1, where a > 1, U0 = 0, and the Zi's are independent Gaussian random variables with zero mean and variance σ². We show a tight nonasymptotic exponentially decaying bound on the tail probability of the estimation error. Unlike previous works, our bound is tight already for sample sizes on the order of hundreds. We apply the new estimation bound to find the dispersion for lossy compression of nonstationary Gauss-Markov sources. We show that the dispersion is given by the same integral formula derived in our previous work [1] for the (asymptotically) stationary Gauss-Markov sources, i.e., |a| < 1. New ideas in the nonstationary case include a deeper understanding of the scaling of the maximum eigenvalue of the covariance matrix of the source sequence, and new techniques in the derivation of our estimation error bound.
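For this Gaussian AR(1) model, the maximum likelihood estimate of a reduces to a least-squares ratio, â = Σ ui ui−1 / Σ ui−1². The following sketch (with simulation parameters of our choosing, not from the paper) simulates a nonstationary realization with a > 1 and computes â:

```python
import random

def simulate_gauss_markov(n, a, sigma, rng):
    # U_0 = 0, U_i = a U_{i-1} + Z_i with Z_i ~ N(0, sigma^2);
    # for a > 1 the trajectory grows roughly like a^n.
    u, prev = [], 0.0
    for _ in range(n):
        prev = a * prev + rng.gauss(0.0, sigma)
        u.append(prev)
    return u

def ml_estimate(u):
    # ML estimate of a for the Gaussian AR(1) model started at U_0 = 0:
    # the least-squares ratio sum(u_i u_{i-1}) / sum(u_{i-1}^2).
    num = sum(u[i] * u[i - 1] for i in range(1, len(u)))
    den = sum(u[i - 1] ** 2 for i in range(1, len(u)))
    return num / den

rng = random.Random(1)
samples = simulate_gauss_markov(500, a=1.2, sigma=1.0, rng=rng)
a_hat = ml_estimate(samples)  # very close to 1.2: the error decays rapidly in n when a > 1
```

The rapid concentration visible here is consistent with the abstract's claim that the tail bound on the estimation error decays exponentially and is already tight at sample sizes in the hundreds.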
The Gauss-Markov source produces Ui = aUi−1 + Zi for i ≥ 1, where U0 = 0, |a| < 1, and Zi ∼ N(0, σ²) are i.i.d. Gaussian random variables. We consider lossy compression of a block of n samples of the Gauss-Markov source under squared error distortion. We obtain the Gaussian approximation for the Gauss-Markov source with the excess-distortion criterion for any distortion d > 0, and we show that the dispersion has a reverse waterfilling representation. This is the first finite blocklength result for lossy compression of sources with memory. We prove that the finite blocklength rate-distortion function R(n, d, ε) approaches the rate-distortion function R(d) as R(n, d, ε) = R(d) + √(V(d)/n) Q⁻¹(ε) + o(1/√n), where V(d) is the dispersion, ε ∈ (0, 1) is the excess-distortion probability, and Q⁻¹ is the inverse of the Q-function. We give a reverse waterfilling integral representation for the dispersion V(d), which parallels that of the rate-distortion functions for Gaussian processes. Remarkably, for all 0 < d ≤ σ²/(1 + |a|)², R(n, d, ε) of the Gauss-Markov source coincides with that of Zi, the i.i.d. Gaussian noise driving the process (whose dispersion is 1/2), up to the second-order term. Among the novel technical tools developed in this paper are a sharp approximation of the eigenvalues of the covariance matrix of n samples of the Gauss-Markov source, and a construction of a typical set using the maximum likelihood estimate of the parameter a based on n observations.
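The reverse waterfilling representation and the Gaussian approximation above can be sketched numerically. The sketch below is illustrative rather than taken from the paper: it assumes the standard AR(1) spectral density S(w) = σ²/(1 + a² − 2a cos w), finds the water level by bisection, and uses dispersion V(d) = 1/2 (nats²), which matches the regime 0 < d ≤ σ²/(1 + |a|)² described above; all function names and parameter choices are ours.

```python
import math
from statistics import NormalDist

def spectral_density(w, a, sigma2):
    # Power spectral density of the AR(1) process U_i = a U_{i-1} + Z_i.
    return sigma2 / (1.0 + a * a - 2.0 * a * math.cos(w))

def reverse_waterfilling(d, a, sigma2, grid=10000):
    # Find the water level theta with d = (1/2pi) * integral of min(theta, S(w)),
    # then R(d) = (1/4pi) * integral of max(0, log(S(w)/theta)), in nats.
    # S is even, so averaging over [0, pi] suffices.
    ws = [math.pi * (k + 0.5) / grid for k in range(grid)]
    S = [spectral_density(w, a, sigma2) for w in ws]
    lo, hi = 0.0, max(S)
    for _ in range(200):  # bisection on the water level
        theta = 0.5 * (lo + hi)
        dist = sum(min(theta, s) for s in S) / grid
        if dist < d:
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    return sum(max(0.0, 0.5 * math.log(s / theta)) for s in S) / grid

def finite_blocklength_rate(n, d, eps, a=0.5, sigma2=1.0):
    # Gaussian approximation R(n, d, eps) ~ R(d) + sqrt(V(d)/n) * Qinv(eps),
    # with V(d) = 1/2 in the low-distortion regime d <= sigma2/(1+|a|)^2.
    Rd = reverse_waterfilling(d, a, sigma2)
    q_inv = NormalDist().inv_cdf(1.0 - eps)  # inverse Q-function
    return Rd + math.sqrt(0.5 / n) * q_inv
```

As a sanity check, for a = 0 the source is memoryless Gaussian and the routine recovers the classical R(d) = (1/2) log(σ²/d); for ε < 1/2 the second-order term is positive, so the finite blocklength rate sits strictly above R(d) and approaches it at speed 1/√n.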