In recent years, a new "fine-grained" theory of computational hardness has been developed, based on "fine-grained reductions" that focus on exact running times for problems. Mimicking NP-hardness, the approach is to (1) select a key problem X that, for some function t, is conjectured not to be solvable by any O(t(n)^{1−ε})-time algorithm for any ε > 0, and (2) reduce X in a fine-grained way to many important problems, thus giving tight conditional time lower bounds for them. This approach has led to the discovery of many meaningful relationships between problems, and to equivalence classes. The main key problems used to base hardness on have been the 3-SUM problem, the CNF-SAT problem (based on the Strong Exponential Time Hypothesis (SETH)), and the All Pairs Shortest Paths problem. Research on SETH-based lower bounds has flourished in particular in recent years, showing that the classical algorithms are optimal for problems such as Approximate Diameter, Edit Distance, Fréchet Distance, and Longest Common Subsequence. This paper surveys the current progress in this area and highlights some exciting new developments.
Introduction

Arguably, the main goal of the theory of algorithms is to study the worst-case time complexity of fundamental computational problems. When considering a problem P, we fix a computational model, such as a Random Access Machine (RAM) or a Turing machine (TM). Then we strive to develop an efficient algorithm that solves P and to prove that, for a (hopefully slow-growing) function t(n), the algorithm solves P on instances of size n in O(t(n)) time in that computational model. The gold standard for the running time t(n) is linear time, O(n); to solve most problems, one needs to at least read the input, and so linear time is necessary.

The theory of algorithms has developed a wide variety of techniques. These have yielded near-linear time algorithms for many diverse problems. For instance, it has been known since the 1960s and 70s (e.g. [143,144,145,99]) that Depth-First Search (DFS) and Breadth-First Search (BFS) run in linear time in graphs, and that using these techniques one can obtain linear-time algorithms (on a RAM) for many interesting graph problems: Single-Source Shortest Paths, Topological Sort of a Directed Acyclic Graph, Strongly Connected Components, Testing Graph Planarity, etc. More recent work has shown that even more complex problems, such as Approximate Max Flow, Maximum Bipartite Matching, Linear Systems on Structured Matrices, and many others, admit close-to-linear time algorithms by combining combinatorial and linear-algebraic techniques (see e.g. [140,64,141,116,117,67,68,65,66,113]).

Nevertheless, for most problems of interest, the fastest known algorithms run much slower than linear time. This is perhaps not too surprising. Time hierarchy theorems show that for most computational models, for any computable function t(n) ≥ n, there exist problems that are solvable in O(t(n)) time but are NOT solvable in O(t(n)^{1−ε}) time for any ε > 0 (this was first proven for TMs [95]; see [124] for more). Ti...
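To make the linear-time claim above concrete, the following is a minimal sketch, assuming a Python setting and an adjacency-list graph representation (the survey itself contains no code, and all names here are illustrative): BFS touches each vertex and each edge a constant number of times, so it runs in O(n + m) time on a graph with n vertices and m edges.

from collections import deque

def bfs(adj, source):
    """Return BFS distances (in edges) from `source`; unreachable vertices keep None."""
    dist = {v: None for v in adj}
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] is None:      # each edge is examined once, each vertex enqueued once
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Example usage on a small directed graph given as adjacency lists:
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs(graph, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}

The same traversal skeleton underlies the other linear-time graph algorithms mentioned above (e.g. topological sort and strongly connected components via DFS).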