2015
DOI: 10.1007/s00224-015-9655-z

Information Lower Bounds via Self-Reducibility

Abstract: We use self-reduction methods to prove strong information lower bounds on two of the most studied functions in the communication complexity literature: Gap Hamming Distance (GHD) and Inner Product (IP). In our first result we affirm the conjecture that the information cost of GHD is linear even under the uniform distribution, which strengthens the Ω(n) bound recently shown by Kerenidis et al. (2012), and answers an open problem from Chakrabarti et al. (2012). In our second result we prove that the information c…
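For background, the standard formulations of these two functions as they usually appear in the communication complexity literature (supplied here for context, since the abstract is truncated and does not define them; the paper may use a different gap parametrization, and the √n gap shown here is simply the most common one):

\[
\mathrm{GHD}_n(x,y) =
\begin{cases}
1 & \text{if } \Delta(x,y) \ge n/2 + \sqrt{n},\\
0 & \text{if } \Delta(x,y) \le n/2 - \sqrt{n},
\end{cases}
\qquad
\mathrm{IP}_n(x,y) = \sum_{i=1}^{n} x_i y_i \bmod 2,
\]

where x, y ∈ {0,1}^n, Δ(·,·) denotes Hamming distance, and GHD carries the promise that one of the two cases holds.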

Cited by 3 publications (4 citation statements).
References 27 publications.
“…Call a distribution ν on X × Y optimal if IC(f, 0) = IC_ν(f, 0). Braverman et al. [6] showed that IC_ν(f, 0) is continuous in ν, and this implies that optimal distributions exist, and moreover that the set of optimal distributions is closed. It is also convex, due to the concavity of IC_ν(f, 0) (see [5]).…”
Section: (497) | mentioning
confidence: 99%
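The convexity claim at the end of this quote follows in one line from concavity. A minimal sketch, assuming (as the quote's usage of "optimal" suggests) that IC(f, 0) = max_ν IC_ν(f, 0) and that this maximum is attained: if ν_0 and ν_1 are optimal and λ ∈ [0, 1], then

\[
\mathrm{IC}_{(1-\lambda)\nu_0 + \lambda\nu_1}(f,0) \;\ge\; (1-\lambda)\,\mathrm{IC}_{\nu_0}(f,0) + \lambda\,\mathrm{IC}_{\nu_1}(f,0) \;=\; \mathrm{IC}(f,0),
\]

and since IC(f, 0) is the maximum over all distributions, the inequality is an equality, so every mixture of optimal distributions is itself optimal.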
“…Intuitively, an upper bound like Lemma 7.1 is essentially a compression result. Moreover, since DISJ_n has a self-reducible structure (see [6]), one can make use of this fact together with the Braverman-Rao [7] compression. A difficulty is that the problem we want to solve is [DISJ_n, ε], that is, the allowed error is non-distributional, while the error unavoidably introduced in the compression phase is distributional.…”
Section: Proof of Theorem 311 | mentioning
confidence: 99%
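To make the self-reducible structure mentioned in this quote concrete, here is the standard block decomposition of set-disjointness (stated as background, not quoted from [6]): splitting x, y ∈ {0,1}^n into k consecutive blocks x^(1), …, x^(k) and y^(1), …, y^(k) of n/k bits each,

\[
\mathrm{DISJ}_n(x,y) \;=\; \bigwedge_{i=1}^{k} \mathrm{DISJ}_{n/k}\bigl(x^{(i)}, y^{(i)}\bigr),
\]

since x and y intersect if and only if some block pair intersects. A protocol for the smaller instances can thus be reused block by block inside a protocol for the larger one, which is exactly the structure that self-reduction and compression arguments exploit.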