Abstract—We consider the problem of distributed lossy linear function computation in a tree network. We examine two cases: (i) data aggregation, in which only one sink node computes the function, and (ii) consensus, in which all nodes compute the same function. By quantifying the accumulation of information loss in distributed computing, we obtain fundamental limits on the network computation rate as a function of the incremental distortions (and hence the incremental loss of information) along the edges of the network. This characterization, based on quantifying distortion accumulation, improves over classical cut-set-type techniques, which are based on overall distortions instead of incremental distortions. The quantification of information loss qualitatively resembles information dissipation in cascaded channels [1]. Surprisingly, this distortion accumulation effect persists even at infinite blocklength. Combining this observation with an inequality on the dominance of mean-square quantities over relative-entropy quantities, we obtain outer bounds on the rate-distortion function that are tighter than classical cut-set bounds by a difference that can be arbitrarily large in both data aggregation and consensus. We also obtain inner bounds on the optimal rate using random Gaussian coding, which differ from the outer bounds by a gap characterized in terms of the overall distortion $D$. The obtained inner and outer bounds provide insights on rate (bit) allocations for both the data aggregation problem and the consensus problem. We show that for tree networks, the rate allocation results have a mathematical structure similar to classical reverse water-filling for parallel Gaussian sources.

A preliminary version of this work was presented in part at the 53rd Annual Allerton Conference on Communication, Control, and Computing, 2015.
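As a point of reference for the reverse water-filling structure mentioned above, recall the classical result for $N$ independent Gaussian sources with variances $\sigma_1^2, \ldots, \sigma_N^2$ and total distortion budget $D$ (the notation here follows the standard textbook statement, not the body of this paper):
\[
R(D) \;=\; \sum_{i=1}^{N} \frac{1}{2}\log\frac{\sigma_i^2}{D_i},
\qquad
D_i \;=\;
\begin{cases}
\lambda, & \lambda < \sigma_i^2,\\[2pt]
\sigma_i^2, & \lambda \ge \sigma_i^2,
\end{cases}
\]
where the water level $\lambda$ is chosen so that $\sum_{i=1}^{N} D_i = D$: sources whose variance falls below the water level are not coded at all, and every coded source incurs the same distortion $\lambda$.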
I. INTRODUCTION

The phenomenon of information dissipation [1]–[6] has recently attracted increasing interest from an information-theoretic viewpoint. These results characterize and quantify the gradual loss of information as it is transmitted through cascaded noisy channels. This line of work has also yielded data processing inequalities that are stronger than the classical ones [2], [5].

The dissipation of information cannot be quantified easily using classical information-theoretic tools that rely on the law of large numbers, because the dissipation is often caused by the finite length of codewords and power constraints on the channel inputs [1]. In many classical problems of network information theory, such as relay networks, information dissipation is not observed because it can be suppressed by the use of asymptotically infinite blocklengths [1], [7]. However, information dissipation does occur in many problems of communication and computation. For example, in [4], Evans and Schulman obtain bounds on the information dissipation in noisy circuits, and in [1], Polyanskiy and Wu examine a similar problem in cascaded AWGN channels with power-constrained inputs. Our earlier works [8], [9] show that under some conditions, error-corre...
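As a reference point for the strengthened data processing inequalities mentioned above, the classical inequality states that for any Markov chain $X \to Y \to Z$,
\[
I(X;Z) \;\le\; I(X;Y),
\]
and strong versions in the spirit of [2], [5] sharpen this to $I(X;Z) \le \eta\, I(X;Y)$ for a channel-dependent contraction coefficient $\eta < 1$ (the symbol $\eta$ is our shorthand here, not notation taken from those papers).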