2016 Annual Conference on Information Science and Systems (CISS)
DOI: 10.1109/ciss.2016.7460549

Caching Gaussians: Minimizing total correlation on the Gray-Wyner network

Abstract: We study a caching problem that resembles a lossy Gray-Wyner network: A source produces vector samples from a Gaussian distribution, but the user is interested in the samples of only one component. The encoder first sends a cache message without any knowledge of the user's preference. Upon learning her request, a second message is provided in the update phase so as to attain the desired fidelity on that component. The cache is efficient if it exploits as much of the correlation in the source as possible, which …
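For orientation, the objective in the title has a compact closed form. A minimal sketch of the standard definitions (notation ours, not quoted from the paper): the total correlation of a source X = (X_1, …, X_n) and its Gaussian specialization are

    TC(X_1, \ldots, X_n) = \sum_{i=1}^{n} h(X_i) - h(X_1, \ldots, X_n),
    \qquad
    TC(X) = \frac{1}{2} \log \frac{\prod_{i=1}^{n} \Sigma_{ii}}{\det \Sigma}
    \quad \text{for } X \sim \mathcal{N}(0, \Sigma).

Under these definitions, one natural reading of a cache that "exploits as much of the correlation in the source as possible" is one after which the remaining components are as close to conditionally independent as possible.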

Cited by 13 publications (18 citation statements). References 9 publications.
“…, k. This generalizes (9) for providing the informative k-dimensional representations about the common structure shared by X_1, …”
Section: B. The Informative k-Dimensional Attributes
confidence: 95%
“…This combines the knowledge from different domains, and offers a unified understanding for disciplines in information theory, statistics, and machine learning. We would also like to mention that the idea of studying the tradeoff between the total correlation and the common information rate was also employed in [9], [10] for Gaussian vectors in caching problems, while our work investigates this tradeoff for general discrete random variables. Moreover, the correlation explanation (CorEx) introduced by [11] also applied the total correlation as the information criterion for unsupervised learning.…”
confidence: 99%
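To make the total-correlation side of that tradeoff concrete, here is a small numerical sketch (ours, not taken from any of the cited papers; the function name is hypothetical) that evaluates TC for a Gaussian vector directly from its covariance matrix:

    import numpy as np

    def total_correlation_gaussian(cov):
        """Total correlation (in nats) of X ~ N(0, cov):
        TC(X) = sum_i h(X_i) - h(X) = 0.5 * (sum_i log cov_ii - log det cov).
        """
        cov = np.asarray(cov, dtype=float)
        sign, logdet = np.linalg.slogdet(cov)
        if sign <= 0:
            raise ValueError("covariance must be positive definite")
        return 0.5 * (np.sum(np.log(np.diag(cov))) - logdet)

    # Sanity check: for two unit-variance components with correlation rho,
    # TC reduces to the mutual information -0.5 * log(1 - rho**2).
    rho = 0.8
    cov = np.array([[1.0, rho], [rho, 1.0]])
    assert np.isclose(total_correlation_gaussian(cov), -0.5 * np.log(1 - rho**2))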
“…(8) Proof: It follows by time sharing between W = W* and W = (X_1, X_2), and by setting V as the achiever of the conditional Wyner's common information C(X'_1; X'_2 | W). By Proposition 2 and Lemma 2, the gap Δ between the lower bound in (7) and the upper bound in (8) satisfies Δ ≤ min{C, I(X, Y; X', Y')} / 4 if C(X; Y) + C(X'; Y' | W*) ≤ C ≤ C*.…”
Section: Dynamic Caching Problem
confidence: 97%
“…, Y_r) subject to constraints on r, the size of the state space [Op't Veld and Gastpar, 2016a]. This optimization can be written equivalently as follows.…”
Section: Extracting Common Information
confidence: 99%
“…correlations lead to stronger weights), but this objective strongly prefers correlations that are nearly maximal, in which case the denominator becomes small and the weight becomes large. This optimization of TC(X|Y) for continuous random variables X and Y is, to the best of our knowledge, the first tractable approach except for a special case discussed by [Op't Veld and Gastpar, 2016a]. Also note that although we used Σ, Λ in the derivation, the solution does not require us to calculate these computationally intensive quantities.…”
Section: TC(X|Y)
confidence: 99%
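The conditional quantity TC(X|Y) in this last excerpt is equally easy to evaluate in the Gaussian case, because the conditional covariance Σ_{X|Y} = Σ_XX − Σ_XY Σ_YY^{-1} Σ_YX does not depend on the realization of Y. A hedged sketch building on that standard identity (function names and index convention are ours, not drawn from the cited work):

    import numpy as np

    def gaussian_tc(cov):
        """Total correlation (nats) of a Gaussian with covariance cov."""
        cov = np.asarray(cov, dtype=float)
        sign, logdet = np.linalg.slogdet(cov)
        if sign <= 0:
            raise ValueError("covariance must be positive definite")
        return 0.5 * (np.sum(np.log(np.diag(cov))) - logdet)

    def gaussian_conditional_tc(cov, idx_x, idx_y):
        """TC(X|Y) for jointly Gaussian (X, Y) with joint covariance cov.
        The conditional covariance Sxx - Sxy @ inv(Syy) @ Syx is the same
        for every realization y, so TC(X|Y) is the TC of that one matrix.
        """
        cov = np.asarray(cov, dtype=float)
        Sxx = cov[np.ix_(idx_x, idx_x)]
        Sxy = cov[np.ix_(idx_x, idx_y)]
        Syy = cov[np.ix_(idx_y, idx_y)]
        cond = Sxx - Sxy @ np.linalg.solve(Syy, Sxy.T)
        return gaussian_tc(cond)

    # Example: X = (X1, X2), Y = X3; conditioning on X3 removes part of
    # the dependence between X1 and X2.
    cov = np.array([[1.0, 0.5, 0.3],
                    [0.5, 1.0, 0.3],
                    [0.3, 0.3, 1.0]])
    print(gaussian_conditional_tc(cov, [0, 1], [2]))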