We say an algorithm on n × n matrices with entries in [−M, M] (or n-node graphs with edge weights from [−M, M]) is truly subcubic if it runs in O(n^{3−δ} · poly(log M)) time for some δ > 0. We define a notion of subcubic reducibility, and show that many important problems on graphs and matrices solvable in O(n^3) time are equivalent under subcubic reductions. Namely, the following weighted problems either all have truly subcubic algorithms, or none of them do:
• The all-pairs shortest paths problem on weighted digraphs (APSP).
• Detecting if a weighted graph has a triangle of negative total edge weight.
• Listing up to n^{2.99} negative triangles in an edge-weighted graph.
• Finding a minimum weight cycle in a graph of non-negative edge weights.
• The replacement paths problem on weighted digraphs.
• Finding the second shortest simple path between two nodes in a weighted digraph.
• Checking whether a given matrix defines a metric.
• Verifying the correctness of a matrix product over the (min, +)-semiring.
Therefore, if APSP cannot be solved in n^{3−ε} time for any ε > 0, then many other problems also require essentially cubic time. In fact, we show generic equivalences between matrix products over a large class of algebraic structures used in optimization, verifying a matrix product over the same structure, and corresponding triangle detection problems over the structure. These equivalences simplify prior work on subcubic algorithms for all-pairs path problems, since it now suffices to give appropriate subcubic triangle detection algorithms. Other consequences of our work are new combinatorial approaches to Boolean matrix multiplication over the (OR, AND)-semiring (abbreviated as BMM). We show that practical advances in triangle detection would imply practical BMM algorithms, among other results.
Building on our techniques, we give two new BMM algorithms: a derandomization of the recent combinatorial BMM algorithm of Bansal and Williams (FOCS'09), and an improved quantum algorithm for BMM.
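As a point of reference for the equivalences above, here is a minimal brute-force sketch of the negative triangle problem on an undirected graph given as a symmetric weight matrix (the function name and encoding are ours, not the paper's). The equivalence says that beating this cubic baseline by a polynomial factor would yield a truly subcubic APSP algorithm:

```python
import itertools

def has_negative_triangle(w):
    """Return True if some triple of distinct nodes i, j, k satisfies
    w[i][j] + w[j][k] + w[k][i] < 0.  Brute force: O(n^3) time."""
    n = len(w)
    for i, j, k in itertools.combinations(range(n), 3):
        # w is symmetric, so the triangle weight does not depend on the
        # order in which i, j, k are visited.
        if w[i][j] + w[j][k] + w[k][i] < 0:
            return True
    return False
```

For example, on the 3-node graph with edge weights w[0][1] = w[1][2] = 1 and w[0][2] = −5, the unique triangle has total weight −3, so the function returns True.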
A recent, active line of work achieves tight lower bounds for fundamental problems under the Strong Exponential Time Hypothesis (SETH). A celebrated result of Backurs and Indyk (STOC'15) proves that computing the Edit Distance of two sequences of length n in truly subquadratic O(n^{2−ε}) time, for some ε > 0, is impossible under SETH. The result was extended by follow-up works to simpler-looking problems like finding the Longest Common Subsequence (LCS). SETH is a very strong assumption, asserting that even linear-size CNF formulas cannot be analyzed for satisfiability with an exponential speedup over exhaustive search. We consider much safer assumptions, e.g. that such a speedup is impossible for SAT on more expressive representations, like subexponential-size NC circuits. Intuitively, this assumption is much more plausible: NC circuits can implement linear algebra and complex cryptographic primitives, while CNFs cannot even approximately compute an XOR of bits. Our main result is a surprising reduction from SAT on Branching Programs to fundamental problems in P like Edit Distance, LCS, and many others. Truly subquadratic algorithms for these problems therefore have far more remarkable consequences than merely faster CNF-SAT algorithms. For example, SAT on arbitrary o(n)-depth bounded fan-in circuits (and therefore also NC-Circuit-SAT) could be solved in (2 − ε)^n time.
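For concreteness, the quadratic bound targeted by these lower bounds is achieved by the classic textbook dynamic program for Edit Distance; a minimal sketch (our own code, not from the paper):

```python
def edit_distance(a, b):
    """Levenshtein distance via the standard dynamic program.
    O(|a| * |b|) time, i.e. Theta(n^2) for two length-n strings --
    exactly the bound the SETH-based lower bounds address."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))  # dp[j] = distance between "" and b[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i           # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                       # delete a[i-1]
                        dp[j - 1] + 1,                   # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))   # substitute/match
            prev = cur
    return dp[n]
```

The hardness results above say that, under the stated assumptions, no algorithm can improve the exponent of this n^2 running time.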
Fine-grained reductions have established equivalences between many core problems with Õ(n^3)-time algorithms on n-node weighted graphs, such as Shortest Cycle, All-Pairs Shortest Paths (APSP), Radius, Replacement Paths, Second Shortest Paths, and so on. These problems also have Õ(mn)-time algorithms on m-edge n-node weighted graphs, and such algorithms have wider applicability. Are these mn bounds optimal when m ≪ n^2?
Starting from the hypothesis that the minimum weight (2ℓ + 1)-Clique problem in edge-weighted graphs requires n^{2ℓ+1−o(1)} time, we prove that for all sparsities of the form m = Θ(n^{1+1/ℓ}), there is no O(n^2 + mn^{1−ε})-time algorithm, for any ε > 0, for any of the problems below:
• Minimum Weight (2ℓ + 1)-Cycle in a directed weighted graph,
• Shortest Cycle in a directed weighted graph,
• APSP in a directed or undirected weighted graph,
• Radius (or Eccentricities) in a directed or undirected weighted graph,
• Wiener index of a directed or undirected weighted graph,
• Replacement Paths in a directed weighted graph,
• Second Shortest Path in a directed weighted graph,
• Betweenness Centrality of a given node in a directed weighted graph.
That is, we prove hardness for a variety of sparse graph problems from the hardness of a dense graph problem. Our results also lead to new conditional lower bounds from several related hypotheses for unweighted sparse graph problems, including k-Cycle, Shortest Cycle, Radius, Wiener index, and APSP.
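The Õ(mn) upper bound mentioned above can be illustrated for Shortest Cycle with non-negative weights: run Dijkstra from every node and close a cycle whenever an edge returns to the source. A minimal sketch under those assumptions (edge-list encoding and names are ours):

```python
import heapq

def shortest_cycle(n, edges):
    """Minimum total weight of a directed cycle, with non-negative edge
    weights given as (u, v, w) triples.  One Dijkstra run per source node:
    roughly O(n * (m + n log n)) time, i.e. the O~(mn) bound in question."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    best = float("inf")
    for s in range(n):
        dist = [float("inf")] * n
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue  # stale queue entry
            for v, w in adj[u]:
                if v == s:
                    # An edge back into the source closes a cycle
                    # s -> ... -> u -> s of weight d + w.
                    best = min(best, d + w)
                elif d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
    return best
```

The conditional lower bounds above say that, at the stated sparsities, shaving a polynomial factor off this kind of running time is as hard as improving on dense minimum weight clique.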
Properties definable in first-order logic are algorithmically interesting for both theoretical and pragmatic reasons. Many of the most studied algorithmic problems, such as Hitting Set and Orthogonal Vectors, are first-order, and first-order properties naturally arise as relational database queries. A relatively straightforward algorithm for evaluating a property with k + 1 quantifiers takes time O(m^k) and, assuming the Strong Exponential Time Hypothesis (SETH), some such properties cannot be decided in O(m^{k−ϵ}) time for any ϵ > 0. (Here, m represents the size of the input structure, i.e., the number of tuples in all relations.) We give algorithms for every first-order property that improve this upper bound to m^k / 2^{Θ(√log n)}, i.e., an improvement by a factor more than any poly-log, but less than the polynomial required to refute SETH. Moreover, we show that further improvement is equivalent to improving algorithms for sparse instances of the well-studied Orthogonal Vectors problem. Surprisingly, both results are obtained by showing completeness of the Sparse Orthogonal Vectors problem for the class of first-order properties under fine-grained reductions. To obtain improved algorithms, we apply the fast Orthogonal Vectors algorithm of References [3, 16]. While fine-grained reductions (reductions that closely preserve the conjectured complexities of problems) have been used to relate the hardness of disparate specific problems both within P and beyond, this is the first such completeness result for a standard complexity class.
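To fix ideas, the Orthogonal Vectors problem that is complete for this class has a trivial quadratic algorithm; a minimal brute-force sketch (our own code, not from the paper):

```python
def orthogonal_pair(A, B):
    """Orthogonal Vectors: given two lists of d-dimensional 0/1 vectors,
    find a in A and b in B with inner product 0, or return None.
    Brute force: O(n^2 * d) time; the results above concern beating the
    quadratic part by a 2^{Theta(sqrt(log n))} factor."""
    for a in A:
        for b in B:
            if all(x * y == 0 for x, y in zip(a, b)):
                return (a, b)   # witness pair
    return None
```

By the completeness result, a polynomially faster algorithm for the sparse version of this problem would speed up the evaluation of every first-order property.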