2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) 2017
DOI: 10.1109/focs.2017.87
Much Faster Algorithms for Matrix Scaling

Abstract: We develop several efficient algorithms for the classical Matrix Scaling problem, which is used in many diverse areas, from preconditioning linear systems to approximation of the permanent. On an input n × n matrix A, this problem asks to find diagonal (scaling) matrices X and Y (if they exist), so that XAY ε-approximates a doubly stochastic matrix, or more generally a matrix with prescribed row and column sums. We address the general scaling problem as well as some important special cases. In particular, if A …
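For orientation, a minimal sketch of the scaling problem the abstract describes. This is the classical first-order (Sinkhorn) iteration, not the paper's faster second-order algorithm; the function name, tolerances, and iteration cap are illustrative choices, not from the paper.

```python
import numpy as np

def sinkhorn_scale(A, eps=1e-6, max_iters=10000):
    """Find positive diagonal scalings X, Y so that X @ A @ Y is
    approximately doubly stochastic, by alternately normalizing
    rows and columns (Sinkhorn iteration)."""
    n, m = A.shape
    x = np.ones(n)
    y = np.ones(m)
    for _ in range(max_iters):
        x = 1.0 / (A @ y)      # make each row of diag(x) A diag(y) sum to 1
        y = 1.0 / (A.T @ x)    # make each column sum to 1
        B = np.diag(x) @ A @ np.diag(y)
        # ε-approximation error: total deviation of row/column sums from 1.
        err = np.abs(B.sum(axis=1) - 1).sum() + np.abs(B.sum(axis=0) - 1).sum()
        if err < eps:
            break
    return np.diag(x), np.diag(y)
```

This baseline needs O(1/ε)-type iteration counts; the point of the paper is that second-order methods get exponentially better dependence on the accuracy.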

Cited by 69 publications (128 citation statements). References 27 publications.
“…Our results imply that the continuous operator scaling algorithm can be used to find a fractional perfect matching in an almost regular bipartite expander graph. We remark that our results also imply that the second-order methods for matrix scaling in [13,2] are near linear time algorithms for the instances in Corollary 4.6. This is because the condition number κ of the scaling solution for those instances is a constant by Theorem 1.7 and the algorithms in [13,2] have time complexity O(|E| log κ).…”
Section: Bipartite Matching (supporting, confidence: 65%)
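A hypothetical illustration of the claim in the snippet above (a sketch only, using plain Sinkhorn scaling rather than the second-order algorithms of [13, 2]): scaling a bipartite adjacency matrix to doubly stochastic yields nonnegative edge weights whose row and column sums are 1, i.e., a fractional perfect matching supported on the graph's edges. The example graph is made up for illustration.

```python
import numpy as np

# Adjacency matrix of a small bipartite graph
# (rows = left vertices, columns = right vertices).
adj = np.array([
    [1.0, 1.0, 0.0],
    [1.0, 1.0, 1.0],
    [0.0, 1.0, 1.0],
])

# First-order (Sinkhorn) scaling: alternately normalize rows and columns.
B = adj.copy()
for _ in range(2000):
    B /= B.sum(axis=1, keepdims=True)  # row sums -> 1
    B /= B.sum(axis=0, keepdims=True)  # column sums -> 1

# B is (nearly) doubly stochastic and zero off the edge set, so its
# entries B[i, j] form a fractional perfect matching of the graph.
```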
“…The dependency on n in these algorithms is at least Ω(n^{7/2}) even for sparse matrices. Recently, two independent groups [13, 2] developed a fast second order method for matrix scaling, and this method is extended to geodesic convex optimization in [1] for the operator scaling problem. Theorem 1.3 ([13, 2, 1]).…”
Section: Previous Algorithms (mentioning, confidence: 99%)
“…Here we discuss our second order algorithm for Problem 1.10, the approximate norm minimization problem. As mentioned in Section 1.4, the paper [AZGL+18] (following the algorithms developed in [AZLOW17, CMTV17] for the commutative Euclidean case) developed a second-order polynomial-time algorithm for approximating the capacity for the simultaneous left-right action (Example 1.5) with running time polynomial in the bit description of the approximation parameter ε. In Section 5, we generalize this algorithm to arbitrary groups and actions (Algorithm 5.1).…”
Section: Second Order Methods: Structural Results and Algorithms (mentioning, confidence: 99%)
“…Our second order method greatly generalizes the one used for the particular group action corresponding to operator scaling in [AZGL+18]. It may be thought of as a geodesic analog of the “trust region method” [CGT00] or the “box-constrained Newton method” [CMTV17, AZLOW17] applied to a regularized function. For both methods, in this non-commutative setting, we recover the familiar convergence behavior of the classical commutative case: to achieve “proximity” ε to the optimum, our first order method converges in O(1/ε) iterations and our second order method in O(poly log(1/ε)) iterations.…”
Section: High-level Overview (mentioning, confidence: 99%)
“…The seminal work of Spielman and Teng [ST04] gave the first nearly-linear time algorithm for solving the weighted ℓ_2 version to high accuracy (a (1+ε)-approximate solution in time O(m · log(1/ε))). The work of Spielman–Teng and the several follow-up works have led to the fastest algorithms for maximum matching [Mad13], shortest paths with negative weights [Coh+17b], graph partitioning [OSV12], sampling random spanning trees [KM09; MST15; Sch18], matrix scaling [Coh+17a; All+17], and resulted in dramatic progress on the problem of computing maximum flows.…”
Section: Introduction (mentioning, confidence: 99%)