In this paper we provide an O(nd + d^3) time randomized algorithm for solving linear programs with d variables and n constraints with high probability. To obtain this result we provide a robust, primal-dual O(√d)-iteration interior point method inspired by the methods of Lee and Sidford (2014, 2019) and show how to efficiently implement this method using new data-structures based on heavy-hitters, the Johnson-Lindenstrauss lemma, and inverse maintenance. Interestingly, we obtain this running time without using fast matrix multiplication; consequently, barring a major advance in linear system solving, our running time is near-optimal for solving dense linear programs among algorithms that do not use fast matrix multiplication.
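As a small illustration of one of the ingredients named above, the following toy sketch (not the paper's construction; dimensions and the Gaussian construction are illustrative choices) shows the Johnson-Lindenstrauss lemma in action: a random sketch matrix with far fewer rows than the ambient dimension preserves Euclidean norms up to small multiplicative distortion.

```python
import numpy as np

# Toy Johnson-Lindenstrauss sketch: a random Gaussian matrix S with k rows
# preserves Euclidean norms up to small multiplicative distortion.
rng = np.random.default_rng(0)
d, k = 400, 200                           # ambient dimension, sketch dimension
x = rng.normal(size=d)
S = rng.normal(size=(k, d)) / np.sqrt(k)  # JL sketch matrix

ratio = np.linalg.norm(S @ x) / np.linalg.norm(x)  # close to 1 w.h.p.
```

This norm preservation is what allows high-dimensional iterates to be compressed before more expensive processing.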
Interior point algorithms for solving linear programs have been studied extensively for a long time [e.g. Karmarkar 1984; Lee, Sidford FOCS'14; Cohen, Lee, Song STOC'19]. For linear programs of the form min_{Ax=b, x≥0} c^⊤x with n variables and d constraints, the generic case d = Ω(n) has recently been settled by Cohen, Lee and Song [STOC'19]. Their algorithm can solve linear programs in Õ(n^ω log(n/δ)) expected time [1], where δ is the relative accuracy. This is essentially optimal, as all known linear system solvers require up to O(n^ω) time for solving Ax = b. However, for the case of deterministic solvers, the best upper bound is Vaidya's 30-year-old O(n^2.5 log(n/δ)) bound [FOCS'89]. In this paper we show that one can also settle the deterministic setting by derandomizing Cohen et al.'s Õ(n^ω log(n/δ)) time algorithm. This allows for a strict Õ(n^ω log(n/δ)) time bound, instead of an expected one, and a simplified analysis, reducing the length of their central path proof by roughly half. Derandomizing this algorithm was also an open question asked in Song's PhD thesis.

The main tool to achieve our result is a new data-structure that can maintain the solution to a linear system in subquadratic time. More precisely, we are able to maintain √U A^⊤ (A U A^⊤)^{-1} A √U v in subquadratic time under ℓ_2 multiplicative changes to the diagonal matrix U and the vector v. This type of change is common in interior point algorithms. Previous algorithms [e.g. Vaidya STOC'89; Lee, Sidford FOCS'15; Cohen, Lee, Song STOC'19] required Ω(n^2) time for this task. In [Cohen, Lee, Song STOC'19] they managed to maintain the matrix √U A^⊤ (A U A^⊤)^{-1} A √U in subquadratic time, but multiplying it with a dense vector to solve the linear system still required Ω(n^2) time. To improve the complexity of their linear program solver, they restricted the solver to multiply only sparse vectors, via a random sampling argument.
In comparison, our data-structure maintains the entire product, not just the matrix. Interestingly, this can be viewed as a simple modification of Cohen et al.'s data-structure, yet it significantly simplifies the analysis of their central path method and makes their whole algorithm deterministic.

[1] Here Õ hides polylog(n) factors and O(n^ω) is the time required to multiply two n × n matrices. The stated Õ(n^ω log(n/δ)) bound holds for the current bound on ω with ω ≈ 2.38 [V. Williams STOC'12; Le Gall ISSAC'14]. The upper bound for the solver will become larger than Õ(n^ω log(n/δ)) if ω < 2 + 1/6.
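To fix notation, the sketch below (with illustrative sizes; it is the naive baseline, not the data-structure) recomputes the maintained quantity √U A^⊤ (A U A^⊤)^{-1} A √U v from scratch after every change, which is exactly the superquadratic cost the data-structure avoids.

```python
import numpy as np

# Naive from-scratch evaluation of sqrt(U) A^T (A U A^T)^{-1} A sqrt(U) v.
# Sizes are illustrative; A has d constraints (rows) and n variables (columns).
rng = np.random.default_rng(1)
d, n = 20, 60
A = rng.normal(size=(d, n))
u = rng.uniform(0.5, 2.0, size=n)      # diagonal of U, positive weights
v = rng.normal(size=n)

B = A * np.sqrt(u)                     # A sqrt(U): scale each column of A
P = B.T @ np.linalg.solve(B @ B.T, B)  # sqrt(U) A^T (A U A^T)^{-1} A sqrt(U)
Pv = P @ v

# P is the orthogonal projection onto the row space of A sqrt(U),
# so it is symmetric and applying it twice gives the same vector back.
```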
The dynamic matrix inverse problem is to maintain the inverse of a matrix undergoing element and column updates. It is the main subroutine behind the best algorithms for many dynamic problems whose complexity is not yet well understood, such as maintaining the largest eigenvalue, rank and determinant of a matrix, and maintaining reachability, distances, maximum matching size, and k-paths/cycles in a graph. Understanding the complexity of dynamic matrix inverse is key to understanding these problems.

In this paper, we present (i) improved algorithms for dynamic matrix inverse and their extensions to some incremental/look-ahead variants, and (ii) variants of the Online Matrix-Vector conjecture [Henzinger et al. STOC'15] that, if true, imply that these algorithms are tight. Our algorithms automatically lead to faster dynamic algorithms for the aforementioned problems, some of which are also tight under our conjectures, e.g. reachability and maximum matching size (closing the gaps for these two problems was in fact asked by Abboud and V. Williams).
In this paper we provide new randomized algorithms with improved runtimes for solving linear programs with two-sided constraints. In the special case of the minimum cost flow problem on n-vertex m-edge graphs with integer polynomially-bounded costs and capacities, we obtain a randomized method which solves the problem in Õ(m + n^1.5) time. This improves upon the previous best runtime of Õ(m√n) (Lee-Sidford 2014) and, in the special case of unit-capacity maximum flow, improves upon the previous best runtimes of m^(4/3+o(1)) (Liu-Sidford 2020, Kathuria 2020) and Õ(m√n) (Lee-Sidford 2014) for sufficiently dense graphs. For ℓ_1-regression in a matrix with n columns and m rows we obtain a randomized method which computes an ε-approximate solution in Õ(mn + n^2.5) time. This yields a randomized method which computes an ε-optimal policy of a discounted Markov Decision Process with S states and A actions per state in time Õ(S^2 A + S^2.5). These methods improve upon the previous best runtimes of methods which depend polylogarithmically on problem parameters, which were Õ(mn^1.5) (Lee-Sidford 2015) and Õ(S^2.5 A) (Lee-Sidford 2014, Sidford-Wang-Wu-Ye 2018).

To obtain this result we introduce two new algorithmic tools of independent interest. First, we design a new general interior point method for solving linear programs with two-sided constraints which combines techniques from (Lee-Song-Zhang 2019) to obtain a robust stochastic method with iteration count nearly the square root of the smaller dimension. Second, to implement this method we provide dynamic data structures for efficiently maintaining approximations to variants of Lewis weights, a fundamental importance measure for matrices which generalizes leverage scores and effective resistances.
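For intuition about the second tool, the toy sketch below (illustrative sizes, not the paper's dynamic data structure) computes plain leverage scores, the special case that Lewis weights generalize: the score of row a_i is a_i^⊤ (A^⊤A)^{-1} a_i.

```python
import numpy as np

# Leverage score of row a_i of a tall matrix A: a_i^T (A^T A)^{-1} a_i.
# Lewis weights generalize these row-importance measures to ell_p norms.
rng = np.random.default_rng(3)
m, n = 200, 10
A = rng.normal(size=(m, n))

G_inv = np.linalg.inv(A.T @ A)                  # (A^T A)^{-1}
scores = np.einsum('ij,jk,ik->i', A, G_inv, A)  # all m scores at once

# Each score lies in [0, 1], and they sum to rank(A) = n here.
```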
In the sensitive distance oracle problem, there are three phases. We first preprocess a given directed graph G with n nodes and integer weights from [−W, W]. Second, given a single batch of f edge insertions and deletions, we update the data structure. Third, given a query pair of nodes (u, v), we return the distance from u to v. In the easier problem, called the sensitive reachability oracle problem, we only ask whether there exists a directed path from u to v.

Our first result is a sensitive distance oracle with Õ(W n^(ω+(3−ω)µ)) preprocessing time, O(W n^(2−µ) f^2 + W n f^ω) update time, and Õ(W n^(2−µ) f + W n f^2) query time, where the parameter µ ∈ [0, 1] can be chosen. The data-structure requires O(W n^(2+µ) log n) bits of memory. This is the first algorithm that can handle f ≥ log n updates. Previous results (e.g. [Demetrescu et al. SICOMP'08; Bernstein and Karger SODA'08 and FOCS'09; Duan and Pettie SODA'09; Grandoni and Williams FOCS'12]) can handle at most 2 updates. For 3 ≤ f ≤ log n, the only non-trivial algorithm was by [Weimann and Yuster FOCS'10]. When W = Õ(1), our algorithm simultaneously improves their preprocessing time, update time, and query time. In particular, when f = ω(1), their update and query time is Ω(n^(2−o(1))), while our update and query times are truly subquadratic in n, i.e., ours is faster by a polynomial factor of n. To highlight the technique, ours is the first graph algorithm that exploits the kernel basis decomposition of polynomial matrices by [Jeannerod and Villard J.Comp'05; Zhou, Labahn and Storjohann J.Comp'15], developed in the symbolic computation community.

As an easy observation from our technique, we obtain the first sensitive reachability oracle that can handle f ≥ log n updates. Our algorithm has O(n^ω) preprocessing time, O(f^ω) update time, and O(f^2) query time. This data-structure requires O(n^2 log n) bits of memory. Efficient sensitive reachability oracles were asked for in [Chechik, Cohen, Fiat, and Kaplan SODA'17].
Our algorithm can handle any constant number of updates in constant time. Previous algorithms with constant update and query time can handle at most f ≤ 2 updates. Otherwise, there are non-trivial results for f ≤ log n, though with query time Ω(n), obtained by adapting [Baswana, Choudhary and Roditty STOC'16].
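To pin down the three-phase interface (preprocess, batch update, query), here is a deliberately naive reachability baseline; the class and method names are our own, and each query runs a fresh O(n + m) graph search, far from the oracle bounds above, so it only fixes the semantics.

```python
from collections import defaultdict

# Hypothetical names; a naive baseline for the three phases of a
# sensitive reachability oracle.
class NaiveSensitiveReachability:
    def __init__(self, edges):                # phase 1: preprocess
        self.edges = set(edges)
        self.cur = set(self.edges)
    def update(self, insertions, deletions):  # phase 2: one batch of changes
        self.cur = (self.edges | set(insertions)) - set(deletions)
    def query(self, u, v):                    # phase 3: can u reach v?
        adj = defaultdict(list)
        for a, b in self.cur:
            adj[a].append(b)
        seen, stack = {u}, [u]
        while stack:                          # iterative DFS from u
            x = stack.pop()
            if x == v:
                return True
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False
```

For example, on the path 1 → 2 → 3, a batch deleting edge (2, 3) makes `query(1, 3)` return False while `query(1, 2)` still returns True.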