In most notions of locality in error-correcting codes, notably locally recoverable codes (LRCs) and locally decodable codes (LDCs), a decoder seeks to learn a single symbol of a message while looking at only a few symbols of the corresponding codeword. However, suppose that one wants to recover r > 1 symbols of the message. The two extremes are repeating the single-query algorithm r times (this is the intuition behind LRCs with availability, primitive multiset batch codes, and PIR codes) and simply running a global decoding algorithm to recover the entire message. In this paper, we investigate what can happen in between these two extremes: at what value of r does repetition stop being a good idea? To begin to study this question, we introduce robust batch codes, which seek to recover r symbols of the message using m queries to the codeword, in the presence of erasures. We focus on the case where r = m, which can be seen as a generalization of the MDS property. Surprisingly, we show that for this notion of locality, repetition is optimal even up to very large values of r = Ω(k).
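To make the r = m setting concrete, here is a minimal Python sketch of the MDS property that it generalizes (an illustration under assumed parameters, not code from the paper): with a Reed-Solomon code of dimension k, a decoder can recover all k message symbols from any k unerased codeword symbols via Lagrange interpolation, regardless of which n - k positions were erased. The field size, evaluation points, and function names are assumptions of this toy.

```python
# Toy illustration (not from the paper) of the MDS property: with a
# Reed-Solomon code of dimension k over GF(101), ANY k surviving codeword
# symbols determine all k message symbols, tolerating arbitrary erasures.

P = 101  # a small prime; arithmetic is over GF(P)

def rs_encode(msg, n):
    """Evaluate the degree-<k polynomial with coefficients `msg` at x = 0..n-1."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P for x in range(n)]

def rs_decode_from_any_k(points):
    """Lagrange-interpolate the k message coefficients from any k (x, y) pairs."""
    k = len(points)
    coeffs = [0] * k
    for j, (xj, yj) in enumerate(points):
        # Build the basis polynomial ell_j(x) = prod_{i != j} (x - x_i) / (x_j - x_i).
        num = [1]   # polynomial coefficients, lowest degree first
        denom = 1
        for i, (xi, _) in enumerate(points):
            if i == j:
                continue
            # Multiply num by (x - xi): shift for the x term, subtract xi * num.
            num = [(a - xi * b) % P for a, b in zip([0] + num, num + [0])]
            denom = denom * (xj - xi) % P
        scale = yj * pow(denom, P - 2, P) % P  # division via Fermat inverse
        coeffs = [(c + scale * a) % P for c, a in zip(coeffs, num)]
    return coeffs

msg = [17, 42, 99]                           # k = 3 message symbols
cw = rs_encode(msg, 7)                       # n = 7 codeword symbols
survivors = [(x, cw[x]) for x in (1, 4, 6)]  # any 3 unerased positions suffice
assert rs_decode_from_any_k(survivors) == msg
```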
Consider the following social choice problem. Suppose we have a set of n voters and m candidates that lie in a metric space. The goal is to design a mechanism to choose a candidate whose average distance to the voters is as small as possible. However, the mechanism does not get direct access to the metric space. Instead, it gets each voter's ordinal ranking of the candidates by distance. Given only this partial information, what is the smallest worst-case approximation ratio (known as the distortion) that a mechanism can guarantee? A simple example shows that no deterministic mechanism can guarantee distortion better than 3, and no randomized mechanism can guarantee distortion better than 2. It has been conjectured that both of these lower bounds are optimal, and recently, Gkatzelis, Halpern, and Shah proved this conjecture for deterministic mechanisms. We disprove the conjecture for randomized mechanisms for m ≥ 3 by constructing elections for which no randomized mechanism can guarantee distortion better than 2.0261 for m = 3, 2.0496 for m = 4, and up to 2.1126 as m → ∞. We obtain our lower bounds by identifying a class of simple metrics that appear to capture much of the hardness of the problem, and we show that any randomized mechanism must have high distortion on one of these metrics. We provide a nearly matching upper bound for this restricted class of metrics as well. Finally, we conjecture that these bounds give the optimal distortion for every m, and provide a proof for m = 3, thereby resolving that case.
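For intuition about the simple example mentioned above, the following Python sketch reproduces the standard two-candidate construction behind the deterministic factor-3 bound; the specific voter positions are illustrative assumptions, not taken from the paper.

```python
# Standard two-candidate lower-bound instance (illustrative, not from the
# paper). Half the voters rank A > B and half rank B > A. Two metrics on the
# line are consistent with this same ordinal profile, and whichever candidate
# a deterministic mechanism picks, one metric makes that pick 3x worse than
# the optimal candidate.

def avg_cost(voters, candidate):
    """Average voter-to-candidate distance (social cost) on the real line."""
    return sum(abs(v - candidate) for v in voters) / len(voters)

def distortion(chosen, voters, candidates):
    best = min(avg_cost(voters, c) for c in candidates)
    return avg_cost(voters, chosen) / best

A, B = 0.0, 1.0
# Metric 1: the A-voters sit at the midpoint 0.5 (equidistant, tie broken
# toward A) and the B-voters sit exactly on B; here picking A costs 3x optimal.
metric1 = [0.5, 0.5, 1.0, 1.0]
# Metric 2: the mirror image (A-voters on A, B-voters at the midpoint with the
# tie broken toward B); here picking B costs 3x optimal.
metric2 = [0.0, 0.0, 0.5, 0.5]

print(distortion(A, metric1, [A, B]))  # 3.0
print(distortion(B, metric2, [A, B]))  # 3.0
```

Since both metrics induce the identical ordinal profile, a deterministic mechanism cannot distinguish them, and the adversary can always present the metric on which the mechanism's choice has cost ratio 3.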
We study the complexity of computing majority as a composition of local functions: Maj_n = h(g_1, …, g_m), where each g_j : {0,1}^n → {0,1} is an arbitrary function that queries only k ≪ n variables and h : {0,1}^m → {0,1} is an arbitrary combining function. We prove an optimal lower bound of
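As a warm-up only (this is the trivial covering bound implied by the definitions above, not the paper's optimal theorem, which is stronger), note the following:

```latex
% Warm-up, not the paper's theorem: Maj_n depends on every one of its n
% variables, so each variable must be queried by some g_j; since each g_j
% queries at most k variables, the m inner functions cover at most mk
% variables in total, forcing
\[
    m \cdot k \;\ge\; n
    \qquad\Longrightarrow\qquad
    m \;\ge\; \frac{n}{k}.
\]
```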