Abstract. We introduce two-sorted theories in the style of Cook and Nguyen for the complexity classes ⊕L and DET, whose complete problems include determinants over Z2 and Z, respectively. We then describe interpretations of Soltys's linear algebra theory LAp over arbitrary integral domains into each of our new theories. The result shows that equivalences of standard theorems of linear algebra over Z2 and Z can be proved in the corresponding theory, but it leaves open the interesting question of whether the theorems themselves can be proved.

Introduction

This paper is a contribution to bounded reverse mathematics [Ngu08, CN10], the part of proof complexity concerned with determining the computational complexity of concepts needed to prove theorems of interest in computer science. We are specifically interested in theorems of linear algebra over finite fields and the integers. The relevant complexity classes for each case have been well studied in the computational complexity literature: they are ⊕L and DET, associated with linear algebra over Z2 and Z, respectively. We introduce formal theories V⊕L and V#L for ⊕L and DET, each intended to capture reasoning in the corresponding class. Each theory allows induction over any relation in the associated complexity class, and the functions definable in each theory are exactly the functions in the class. In particular, determinants and coefficients of the characteristic polynomial of a matrix can be defined.

To study the question of which results from linear algebra can be proved in the theories, we take advantage of Soltys's theory LAp [SK01, SC04] for formalizing linear algebra over an arbitrary field or integral domain. We present two interpretations of LAp: one into V⊕L and one into V#L. Both interpretations translate theorems of LAp to theorems in the corresponding theory, but the meaning of the theorems differs in the two translations, since the ring elements range over Z2 in one and over Z in the other. From these interpretations and results in [SK01, SC04] we show that the theories prove some interesting properties of determinants, but leave open the question of whether the proofs of some basic theorems can be formalized in them.
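As a concrete illustration of the first complete problem mentioned above, the sketch below, which is ours and not a construction from the paper, computes a determinant over Z2 by Gaussian elimination. Over Z2 the determinant is 1 exactly when the matrix is invertible, so elimination only has to check for a full set of pivots. This is a polynomial-time illustration only; the point of ⊕L is that logarithmic workspace suffices for this problem.

```python
def det_mod2(rows):
    """Determinant of a square 0/1 matrix over Z2 via Gaussian elimination.

    Over Z2, det = 1 iff the matrix is invertible, i.e. iff elimination
    finds a pivot in every column. Row swaps flip the sign of the
    determinant, but -1 = 1 mod 2, so they can be ignored.
    """
    m = [row[:] for row in rows]  # work on a copy
    n = len(m)
    for col in range(n):
        # find a row at or below the diagonal with a 1 in this column
        pivot = next((r for r in range(col, n) if m[r][col]), None)
        if pivot is None:
            return 0              # no pivot: singular, determinant is 0
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            if m[r][col]:         # eliminate below the pivot: XOR of rows
                m[r] = [a ^ b for a, b in zip(m[r], m[col])]
    return 1                      # a pivot in every column: determinant is 1

assert det_mod2([[1, 1], [0, 1]]) == 1  # upper triangular, det = 1
assert det_mod2([[1, 1], [1, 1]]) == 0  # repeated row, det = 0
```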
Does the information complexity of a function equal its communication complexity? We examine whether any currently known techniques might be used to show a separation between the two notions. Recently, Ganor et al. provided such a separation in the distributional setting for a specific input distribution µ. We show that in the non-distributional setting, the relative discrepancy bound they defined is, in fact, smaller than the information complexity, and hence it cannot be used to separate information and communication complexity. In addition, in the distributional case, we provide an equivalent linear program formulation for relative discrepancy and relate it to variants of the partition bound, also resolving an open question regarding the relation between the partition bound and information complexity. Lastly, we prove the equivalence between the adaptive relative discrepancy and the public-coin partition bound, which implies that the logarithm of the adaptive relative discrepancy bound is quadratically tight with respect to communication.
Kushilevitz [1989] initiated the study of information-theoretic privacy within the context of communication complexity. Unfortunately, it has been shown that most interesting functions are not privately computable [Kushilevitz 1989; Brandt and Sandholm 2008]. The unattainability of perfect privacy for many functions motivated the study of approximate privacy. Feigenbaum et al. [2010a, 2010b] define notions of worst-case as well as average-case approximate privacy and present several interesting upper bounds as well as some open problems for further study. In this article, we obtain asymptotically tight bounds on the trade-offs between both the worst-case and average-case approximate privacy of protocols and their communication cost for Vickrey auctions. Further, we relate the notion of average-case approximate privacy to other measures based on information cost of protocols. This enables us to prove exponential lower bounds on the subjective approximate privacy of protocols for computing the Intersection function, independent of its communication cost. This proves a conjecture of Feigenbaum et al. [2010a].
A direct sum theorem states that the cost of solving n independent instances of a problem is roughly n times the cost of solving a single instance. This has been shown to be true in the simultaneous and one-way models [5,6], for bounded-round two-way protocols under product distributions [3,7] or non-product distributions [1], and also for specific functions like Disjointness [8]; non-trivial direct sum theorems have also been shown for general two-way randomized communication complexity [9]. Since information complexity is equal to amortized communication complexity [1], the question of whether information and communication complexity are equal is equivalent to whether communication complexity has a direct sum property [1,10]. Note that in the case of deterministic, zero-error protocols, a separation between information and communication complexity is known for Equality [10].

Since information complexity deals with the information Alice and Bob transmit about their inputs, it is necessary to define a distribution on these inputs. For each fixed distribution μ, the distributional information complexity of a function f (also known as its information cost) is defined as the information Alice and Bob transmit about their inputs in any protocol that solves f with small error according to μ [1,5]. The (non-distributional) information complexity of f is defined as its distributional information complexity for the worst distribution μ [10]. In this paper we consider the internal information complexity.

Similarly, for communication complexity, one may consider a model with a distribution μ over the inputs, where the error probability of the protocol is taken over this distribution. This is called the distributional model, and Yao's minimax principle [11] states that the randomized communication complexity of f is equal to its distributional communication complexity for the worst distribution μ; here the randomized communication complexity of a function f is defined as the minimum number of bits exchanged, in the worst case over the inputs, for a randomized protocol to compute the function with small error [12].

One can therefore ask whether the following stronger relation holds …
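The definitions paraphrased above have a standard formal shape, reproduced here in our own notation (which may differ from the paper's): for a protocol π with transcript Π run on inputs (X, Y) ~ μ, the internal information cost and the derived complexity measures are

```latex
% Internal information cost of a protocol \pi with transcript \Pi on (X,Y) \sim \mu:
\mathrm{IC}_\mu(\pi) = I(\Pi ; X \mid Y) + I(\Pi ; Y \mid X)

% Distributional information complexity of f at error \varepsilon:
\mathrm{IC}_\mu(f,\varepsilon) = \inf \{\, \mathrm{IC}_\mu(\pi) :
    \pi \text{ computes } f \text{ with error at most } \varepsilon \text{ under } \mu \,\}

% Non-distributional information complexity (worst distribution):
\mathrm{IC}(f,\varepsilon) = \max_\mu \mathrm{IC}_\mu(f,\varepsilon)

% Yao's minimax principle for randomized communication:
R_\varepsilon(f) = \max_\mu D^{\,\varepsilon}_\mu(f)
```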