For the vast majority of local graph problems, standard dynamic programming techniques give c^tw · |V|^O(1)-time algorithms, where tw is the treewidth of the input graph. On the other hand, for problems with a global requirement (usually connectivity), the best-known algorithms were naive dynamic programming schemes running in tw^O(tw) · |V|^O(1) time. We breach this gap by introducing a technique we dubbed Cut&Count that allows producing c^tw · |V|^O(1)-time Monte Carlo algorithms for most connectivity-type problems, including HAMILTONIAN PATH, FEEDBACK VERTEX SET and CONNECTED DOMINATING SET, consequently answering the question raised by Lokshtanov, Marx and Saurabh [SODA'11] in a surprising way. We also show that (under reasonable complexity assumptions) the gap cannot be breached for some problems for which Cut&Count does not work, such as CYCLE PACKING. The constant c we obtain is in all cases small (at most 4 for undirected problems and at most 6 for directed ones), and in several cases we are able to show that improving those constants would cause the Strong Exponential Time Hypothesis to fail. Our results have numerous consequences in various fields, such as FPT algorithms, exact and approximate algorithms on planar and H-minor-free graphs, and algorithms on graphs of bounded degree. In all these fields we are able to improve the best-known results for some problems.
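The counting identity at the heart of Cut&Count can be illustrated on a toy example (a hypothetical Python sketch, not the authors' algorithm): for a subgraph X, the number of "consistent cuts" — partitions of X's vertices into two sides with no edge of X crossing — equals 2^cc(X), where cc(X) is the number of connected components. Hence X is connected exactly when two consistent cuts exist, which lets connectivity be checked by counting.

```python
from itertools import combinations

def consistent_cuts(vertices, edges):
    """Count partitions (V1, V2) of `vertices` such that no edge
    in `edges` crosses the cut (both orientations counted)."""
    count = 0
    vs = list(vertices)
    for r in range(len(vs) + 1):
        for v1 in combinations(vs, r):
            chosen = set(v1)
            side = {v: (v in chosen) for v in vs}
            # A cut is consistent if every edge stays on one side.
            if all(side[u] == side[w] for u, w in edges):
                count += 1
    return count

def components(vertices, edges):
    """Number of connected components via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    return len({find(v) for v in vertices})

# A path on 3 vertices is connected: exactly 2^1 = 2 consistent cuts.
print(consistent_cuts({1, 2, 3}, [(1, 2), (2, 3)]))        # 2
# Two components: 2^2 = 4 consistent cuts.
print(consistent_cuts({1, 2, 3, 4}, [(1, 2), (3, 4)]))     # 4
```

The actual technique counts such cuts modulo 2 over all candidate solutions inside the treewidth DP, using the Isolation Lemma to avoid cancellation; the brute-force enumeration above only demonstrates the identity.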
Abstract. It is well known that many local graph problems, like Vertex Cover and Dominating Set, can be solved in 2^O(tw) · n^O(1) time for graphs with a given tree decomposition of width tw. However, for nonlocal problems, like the fundamental class of connectivity problems, for a long time it was unknown how to do this faster than tw^O(tw) · n^O(1). The rank-based approach introduces a new technique to speed up dynamic programming algorithms and is likely to have more applications. The determinant-based approach uses the Matrix Tree Theorem to derive closed formulas for counting versions of connectivity problems; we show how to evaluate those formulas via dynamic programming.
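The Matrix Tree Theorem mentioned above states that the number of spanning trees of an undirected graph equals any cofactor of its Laplacian. A minimal self-contained illustration (this is just the classical theorem, not the paper's DP evaluation of it):

```python
from fractions import Fraction

def spanning_tree_count(n, edges):
    """Kirchhoff's Matrix Tree Theorem: the number of spanning trees
    equals any cofactor of the graph Laplacian L = D - A."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    # Delete row 0 and column 0, then take the determinant of the
    # minor by exact Gaussian elimination over the rationals.
    M = [row[1:] for row in L[1:]]
    det = Fraction(1)
    m = n - 1
    for i in range(m):
        pivot = next((r for r in range(i, m) if M[r][i] != 0), None)
        if pivot is None:
            return 0  # singular minor: the graph is disconnected
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, m):
            factor = M[r][i] / M[i][i]
            for c in range(i, m):
                M[r][c] -= factor * M[i][c]
    return int(det)

# K4 has 4^(4-2) = 16 spanning trees by Cayley's formula.
print(spanning_tree_count(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))  # 16
```

Counting spanning trees is thus polynomial-time; the paper's contribution is evaluating such determinant-based formulas inside a dynamic program over a tree decomposition for harder connectivity problems.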
The field of exact exponential time algorithms for NP-hard problems has thrived over the last decade. While exhaustive search remains asymptotically the fastest known algorithm for some basic problems, difficult and non-trivial exponential time algorithms have been found for a myriad of problems, including Graph Coloring, Hamiltonian Path, Dominating Set and 3-CNF-Sat. In some instances, improving these algorithms further seems to be out of reach. The CNF-Sat problem is the canonical example of a problem for which the trivial exhaustive search algorithm runs in time O(2^n), where n is the number of variables in the input formula. While there exist non-trivial algorithms for CNF-Sat that run in time o(2^n), no algorithm was able to improve the growth rate 2 to a smaller constant, and hence it is natural to conjecture that 2 is the optimal growth rate. The strong exponential time hypothesis (SETH) by Impagliazzo and Paturi [JCSS 2001] goes a little bit further and asserts that, for every ε < 1, there is a (large) integer k such that k-CNF-Sat cannot be computed in time 2^(εn). In this paper, we show that, for every ε < 1, the problems Hitting Set, Set Splitting, and NAE-Sat cannot be computed in time O(2^(εn)) unless SETH fails. Here n is the number of elements or variables in the input. For these problems, we actually get an equivalence to SETH in a certain sense. We conjecture that SETH implies a similar statement for Set Cover, and prove that, under this assumption, the fastest known algorithms for Steiner Tree, Connected Vertex Cover, Set Partitioning, and the pseudo-polynomial time algorithm for Subset Sum cannot be significantly improved. Finally, we justify our assumption about the hardness of Set Cover by showing that the parity of the number of solutions to Set Cover cannot be computed in time O(2^(εn)) for any ε < 1 unless SETH fails.
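The pseudo-polynomial Subset Sum algorithm whose optimality the abstract discusses is the classic O(n · t) dynamic program, sketched below with hypothetical example values:

```python
def subset_sum(items, target):
    """Classic O(n * target) DP: reachable[s] is True iff some
    subset of the items processed so far sums to exactly s."""
    reachable = [False] * (target + 1)
    reachable[0] = True  # the empty subset sums to 0
    for x in items:
        # Iterate downward so each item is used at most once.
        for s in range(target, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[target]

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5 = 9)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```

The running time is pseudo-polynomial because it depends on the numeric value of the target rather than on its bit length; the paper's lower bound says that, under its Set Cover assumption, no significant improvement over this scheme is possible.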
The Strong Exponential Time Hypothesis and the OV conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV conjecture in moderate dimension states that there is no ε > 0 for which an O(N^(2−ε)) poly(D) time algorithm can decide whether there is a pair of orthogonal vectors in a given set of N D-dimensional binary vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms: if the OV conjecture is false, then there is a fixed ε > 0 with which one could, for example, decide the satisfiability of bounded-width CNF formulas faster than exhaustive search. SETH is used in the study of exact and fixed-parameter tractable algorithms, see e.g. [23,46] or the book by Cygan et al. [24]. In this area it implies, among other things, tight lower bounds for problems on graphs that have small treewidth or pathwidth [41,26,25]. Closely related to SETH, the orthogonal vectors problem (OV) is: given two sets A and B of N vectors from {0,1}^D, decide whether there are vectors a ∈ A and b ∈ B such that a and b are orthogonal in Z^D. If D ≤ O(N^0.3) holds, the problem can be solved in time Õ(N^2) using an algorithm based on fast rectangular matrix multiplication (see e.g. [31]). SETH implies [54] that this algorithm is essentially as fast as possible; in particular, SETH implies the following hardness conjecture, which was given its name by Gao et al. [32]. Conjecture 1.1 (Moderate-dimension OV Conjecture).
There are no reals ε, δ > 0 such that OV for D = N^δ can be solved in time O(N^(2−ε)). The moderate-dimension OV conjecture is used to study the fine-grained complexity of problems in P, for which it has remarkably strong and diverse implications. If the conjecture is true, then dozens of important problems from all across computer science exhibit running-time lower bounds that match existing upper bounds up to subpolynomial factors. These include pattern matching and other problems in bioinformatics [7,10,40,1], graph algorithms [47,6,32], computational geometry [16], formal languages [11,18], time-series analysis [2,19], and even economics [42] (see [58] for a more comprehensive list). Gao et al. [32] also named the low-dimension OV conjecture, which asserts that OV does not have subquadratic algorithms whenever D = ω(log N) holds. The low-dimension variant implies the moderate-dimension variant of the OV conjecture, and both are implied by SETH [54]. Recent results on the hardness of approximation problems, such as Maximum Inner Product [5], rely on the stronger conjecture (perhaps also [12,14]). However, for the vast majority of OV-based hardness results, reducing the dimension only affects lower-order terms in the lo...
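The exhaustive-search baseline that the OV conjecture asserts cannot be beaten polynomially is straightforward; a minimal sketch (illustrative, not from the paper):

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Exhaustive-search OV in O(N^2 * D) time: returns True iff some
    a in A and b in B satisfy sum(a_i * b_i) == 0 over the integers.
    For 0/1 vectors this means no coordinate is 1 in both."""
    return any(
        all(x * y == 0 for x, y in zip(a, b))
        for a, b in product(A, B)
    )

A = [(1, 0, 1), (0, 1, 1)]
B = [(1, 1, 0), (0, 1, 0)]
print(has_orthogonal_pair(A, B))  # True: (1,0,1) . (0,1,0) = 0
```

The conjecture says that for D = N^δ no algorithm improves on this quadratic dependence on N by a polynomial factor; the Õ(N^2) rectangular-matrix-multiplication algorithm mentioned above only removes the factor D.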