A sender holds a word x consisting of n blocks x_i, each of t bits, and wishes to broadcast a codeword to m receivers, R_1, ..., R_m. Each receiver R_i is interested in one block and has prior side information consisting of some subset of the other blocks. Let β_t be the minimum number of bits that has to be transmitted when each block is of length t, and let β be the limit β = lim_{t→∞} β_t / t. In words, β is the average communication cost per bit in each block (for long blocks). Finding the coding rate β for such an informed broadcast setting generalizes several coding-theoretic parameters related to Informed Source Coding on Demand, Index Coding and Network Coding. In this work we show that the use of large data blocks may strictly improve upon the trivial encoding which treats each bit in the block independently. To this end, we provide general bounds on β_t, and prove that for any constant C there is an explicit broadcast setting in which β = 2 but β_1 > C. One of these examples answers a question of [15]. In addition, we provide examples with the following counterintuitive direct-sum phenomena. Consider a union of several mutually independent broadcast settings. The optimal code for the combined setting may yield a significant saving in communication over concatenating optimal encodings for the individual settings. This result also provides new non-linear coding schemes which improve upon the largest known gap between linear and non-linear Network Coding, thus improving the results of [8]. The proofs are based on a relation between this problem and results in the study of Witsenhausen's rate, OR graph products, colorings of Cayley graphs, and the chromatic numbers of Kneser graphs.
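The saving available from coding with side information is easiest to see in the smallest possible setting. The sketch below is a generic illustration of the framework, not a construction from the paper: two receivers, each wanting one block and holding the other block as side information, can both be served by a single XOR'd block, so one t-bit transmission suffices where uncoded transmission needs two.

```python
# Minimal illustration (not a construction from the paper): receiver R1 wants
# block x1 and knows x2; receiver R2 wants x2 and knows x1. Broadcasting the
# single block x1 XOR x2 serves both receivers at once.

def broadcast(x1: int, x2: int) -> int:
    """Encode two t-bit blocks into one t-bit codeword."""
    return x1 ^ x2

def decode(codeword: int, side_info: int) -> int:
    """A receiver XORs the codeword with the block it already holds."""
    return codeword ^ side_info

x1, x2 = 0b10110001, 0b01101110   # two blocks of t = 8 bits
c = broadcast(x1, x2)
assert decode(c, x2) == x1        # R1 recovers x1 using its side information x2
assert decode(c, x1) == x2        # R2 recovers x2 using its side information x1
```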
The following source coding problem was introduced by Birk and Kol: a sender holds a word x ∈ {0,1}^n, and wishes to broadcast a codeword to n receivers, R_1, ..., R_n. The receiver R_i is interested in x_i, and has prior side information comprising some subset of the n bits. This corresponds to a directed graph G on n vertices, where ij is an edge iff R_i knows the bit x_j. An index code for G is an encoding scheme which enables each R_i to always reconstruct x_i, given his side information. The minimal word length of an index code was studied by Bar-Yossef, Birk, Jayram and Kol [4]. They introduced a graph parameter, minrk_2(G), which completely characterizes the length of an optimal linear index code for G. The authors of [4] showed that in various cases linear codes attain the optimal word length, and conjectured that linear index coding is in fact always optimal. In this work, we disprove the main conjecture of [4] in the following strong sense: for any ε > 0 and sufficiently large n, there is an n-vertex graph G so that every linear index code for G requires codewords of length at least n^{1−ε}, and yet a non-linear index code for G has a word length of n^ε. This is achieved by an explicit construction, which extends Alon's variant of the celebrated Ramsey construction of Frankl and Wilson. In addition, we study optimal index codes in various, less restricted, natural models, and prove several related properties of the graph parameter minrk(G).
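A standard example of a linear index code, included here only for intuition and not taken from [4], is the complete graph K_n, where every receiver knows all bits except its own: broadcasting the single parity bit of x is an index code of length 1, compared with the n bits of the uncoded scheme.

```python
# Standard example (not from [4]): a length-1 linear index code for the
# complete graph K_n, where receiver R_i knows every bit except x_i.
# The sender broadcasts the parity of x; R_i cancels the bits it knows.

from functools import reduce
from operator import xor

def encode(x: list[int]) -> int:
    """Broadcast a single bit: the parity of the word x in {0,1}^n."""
    return reduce(xor, x, 0)

def decode(parity: int, known_bits: list[int]) -> int:
    """R_i recovers x_i by XOR-ing out the n-1 bits it already knows."""
    return reduce(xor, known_bits, parity)

x = [1, 0, 1, 1, 0]
p = encode(x)
for i in range(len(x)):
    side_info = x[:i] + x[i + 1:]      # R_i's side information in K_n
    assert decode(p, side_info) == x[i]
```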
For a graph property P, the edit distance of a graph G from P, denoted E_P(G), is the minimum number of edge modifications (additions or deletions) one needs to apply to G in order to turn it into a graph satisfying P. What is the furthest graph on n vertices from P, and what is the largest possible edit distance from P? Denote this maximal distance by ed(n, P). This question is motivated by algorithmic edge-modification problems, in which one wishes to find or approximate the value of E_P(G) given an input graph G. A monotone graph property is closed under removal of edges and vertices. Trivially, for any monotone property, the largest edit distance is attained by a complete graph. We show that this is a simple instance of a much broader phenomenon. A hereditary graph property is closed under removal of vertices. We prove that for any hereditary graph property P, a random graph with an edge density that depends on P essentially achieves the maximal distance from P, that is: ed(n, P) = E_P(G(n, p(P))) + o(n^2) with high probability. The proofs combine several tools, including strengthened versions of the Szemerédi Regularity Lemma, properties of random graphs and probabilistic arguments.
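For very small graphs, E_P(G) can be computed by exhaustive search, which makes the definition concrete. The sketch below is my own illustration (not a method from the paper) for the monotone property P = "triangle-free", where only deletions can help and the complete graph is furthest from P.

```python
# Brute-force illustration (my example, not the paper's method): compute the
# edit distance E_P(G) for the monotone property P = "triangle-free" by trying
# edge-deletion sets of increasing size. Feasible only for tiny graphs.

from itertools import combinations

def is_triangle_free(n: int, edges: set[tuple[int, int]]) -> bool:
    """Check that no triple of vertices spans all three of its edges."""
    return not any((a, b) in edges and (a, c) in edges and (b, c) in edges
                   for a, b, c in combinations(range(n), 3))

def edit_distance_triangle_free(n: int, edges: set[tuple[int, int]]) -> int:
    """Minimum number of edge deletions turning G into a triangle-free graph."""
    edge_list = sorted(edges)
    for k in range(len(edge_list) + 1):
        for removed in combinations(edge_list, k):
            if is_triangle_free(n, edges - set(removed)):
                return k
    return len(edge_list)

# The complete graph K_5: C(5,2) - floor(5^2/4) = 10 - 6 = 4 deletions,
# matching the Turan bound for the graph furthest from triangle-freeness.
K5 = {(i, j) for i, j in combinations(range(5), 2)}
assert edit_distance_triangle_free(5, K5) == 4
```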
We study the reconstruction of a stratified space from a possibly noisy point sample. Specifically, we use the vineyard of the distance function restricted to a 1-parameter family of neighborhoods of a point to assess the local homology of the stratified space at that point. We prove the correctness of this assessment under the assumption of a sufficiently dense sample. We also give an algorithm that constructs the vineyard and makes the local assessment in time at most cubic in the size of the Delaunay triangulation of the point sample.
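The following sketch covers only the geometric preprocessing and is a deliberate simplification: it builds the Delaunay triangulation of a point sample with SciPy and collects the simplices meeting a fixed neighborhood of a query point. The paper's actual method analyzes a 1-parameter family of such neighborhoods through the vineyard of the restricted distance function; that persistence computation is not reproduced here, and the radius r and query point z are arbitrary choices for illustration.

```python
# Geometric preprocessing only (a simplification, not the paper's algorithm):
# triangulate the sample and gather the Delaunay simplices that have a vertex
# within distance r of a query point z.

import numpy as np
from scipy.spatial import Delaunay

def local_star(points: np.ndarray, z: np.ndarray, r: float) -> list[np.ndarray]:
    """Delaunay simplices with at least one vertex within distance r of z."""
    tri = Delaunay(points)
    near = np.linalg.norm(points - z, axis=1) <= r
    return [s for s in tri.simplices if near[s].any()]

rng = np.random.default_rng(0)
sample = rng.random((200, 2))                      # noisy planar point sample
simplices = local_star(sample, np.array([0.5, 0.5]), 0.1)
print(len(simplices), "simplices meet the neighborhood")
```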