Proceedings of the Thirty-First Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2020)
DOI: 10.1137/1.9781611975994.54

Faster p-norm minimizing flows, via smoothed q-norm problems

Abstract: We present faster high-accuracy algorithms for computing ℓ_p-norm minimizing flows. On a graph with m edges, our algorithm can compute a (1 + 1/poly(m))-approximate unweighted ℓ_p-norm minimizing flow with pm^{1 + 1/(p−1) + o(1)} operations, for any p ≥ 2, giving the best bound for all p ≳ 5.24. Combined with the algorithm from the work of Adil et al. (SODA '19), we can now compute such flows for any 2 ≤ p ≤ m^{o(1)} in time at most O(m^{1.24}). In comparison, the previous best running time was Ω(m^{1.33}) for large constant p.
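For reference, the unweighted ℓ_p-norm minimizing flow problem the abstract refers to is standardly formulated as below (a sketch of the usual formulation; the symbols B, b, and f are conventional notation, not fixed by the abstract itself):

```latex
% \ell_p-norm minimizing flow: route the demands b at minimum p-norm cost.
\min_{f \in \mathbb{R}^m} \ \|f\|_p
\quad \text{subject to} \quad B^{\top} f = b,
% where B \in \mathbb{R}^{m \times n} is the signed edge-vertex incidence
% matrix of the graph and b \in \mathbb{R}^n is a demand vector whose
% entries sum to zero.
```

For p = 2 this recovers electrical flows, while p = ∞ corresponds to minimizing congestion, which is why the regime 2 ≤ p ≤ m^{o(1)} interpolates between the two.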

Cited by 20 publications (27 citation statements: 1 supporting, 26 mentioning, 0 contrasting). References 17 publications.

Citation statements (ordered by relevance):
“…The inverse step-size L*_0 was chosen to be L*_0 = 1 initially and multiplied by 2 if the function value would increase due to too large steps (hence this was chosen adaptively in the beginning, but L*_i was never decreased later on). As Figure 1 shows, empirically our method seems to be performing well, with high precision achieved after 50-80 gradient evaluations, and the convergence rate seems to be mostly unaffected by the dimension d. Hence in this random setting dual space preconditioning is indeed very efficient and competitive with previous works [17,1,3], which had dimension-dependent convergence rates. We think that based on Proposition 4.6, it can be shown that with high probability, dimension-free convergence rates hold in this random scenario when the number of vectors n tends to infinity.…”
mentioning · confidence: 60%
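Read concretely, the adaptive rule in the quoted passage can be sketched in a few lines of Python. This is only an illustration of the doubling rule applied to a generic objective, not the cited paper's dual space preconditioning method; f, grad, and x0 are placeholder names:

```python
import numpy as np

def gradient_descent_adaptive_L(f, grad, x0, iters=100):
    """Gradient steps with an adaptively chosen inverse step size L*.

    Sketch of the quoted rule: L* starts at 1 and is doubled whenever a
    proposed step would increase the function value; it is never
    decreased afterwards.
    """
    L_star = 1.0                      # L*_0 = 1
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        step = grad(x) / L_star
        while f(x - step) > f(x):     # step too large: double L*
            L_star *= 2.0
            step = grad(x) / L_star
        x = x - step
    return x
```

For a smooth objective the inner while-loop terminates, since shrinking the step (increasing L*) eventually produces a descent step.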
“…Then ∇f_τ(x) = G_I(x) + G_J(x) + c. We have the p-norm regression problem min_{x ∈ R^d} ‖Ax − b‖_p, where A ∈ R^{n×d}, d ≤ n, b ∈ R^n, and p ≥ 1. This problem is a useful abstraction for some important graph problems, including Lipschitz learning on graphs [29] and p-norm minimizing flows [3]. Algorithms specialized for p-norm regression have recently been studied in the theoretical computer science literature by several authors (see, e.g., [17,1] and references therein).…”
mentioning · confidence: 99%
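To make the reconstructed formulation concrete, here is a minimal NumPy sketch of the ℓ_p regression objective and the gradient of its smoothed power ‖Ax − b‖_p^p (purely illustrative; not code from the cited work):

```python
import numpy as np

def lp_objective(A, b, x, p):
    """Objective ||Ax - b||_p of the p-norm regression problem."""
    return np.linalg.norm(A @ x - b, ord=p)

def lp_power_gradient(A, b, x, p):
    """Gradient of ||Ax - b||_p^p, which is smooth for p > 1 and shares
    its minimizers with the p-norm objective itself."""
    r = A @ x - b
    return p * A.T @ (np.sign(r) * np.abs(r) ** (p - 1))
```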
“…When p = 1 or p = ∞, this problem can be solved using linear programming. More generally, when p ∉ {1, ∞}, the problem is nonlinear, and multiple approaches have been developed for solving it, including, e.g., a homotopy-based solver [Bubeck et al., 2018], solvers based on iterative refinement [Adil et al., 2019a; Adil and Sachdeva, 2020], and solvers based on the classical method of iteratively reweighted least squares [Ene and Vladu, 2019; Adil et al., 2019b]. Such solvers typically rely on fast linear system solves and attain logarithmic dependence on the inverse accuracy 1/ε, at the cost of iteration count scaling polynomially with one of the dimensions of A (typically the lower dimension, which is equal to the number of rows m), each iteration requiring a constant number of linear system solves.…”
Section: ℓ_p Regression · mentioning · confidence: 99%
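As a concrete point of reference for the IRLS approach mentioned above, here is a textbook iteratively reweighted least squares sketch for min_x ‖Ax − b‖_p with p ≥ 2 (an illustrative baseline only, not the accelerated schemes of the cited works; the iteration count and eps floor are ad hoc choices):

```python
import numpy as np

def irls_lp_regression(A, b, p, iters=50, eps=1e-8):
    """Basic IRLS for min_x ||Ax - b||_p, p >= 2.

    Each iteration solves a weighted least-squares problem with weights
    w_i = |r_i|^(p-2); eps guards against zero residuals. The textbook
    scheme carries no convergence guarantee for large p without further
    safeguards.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # start from the p = 2 solution
    for _ in range(iters):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** (p - 2)  # IRLS weights
        Aw = A * w[:, None]                        # W A, with W = diag(w)
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)    # normal equations A^T W A x = A^T W b
    return x
```

Each iteration costs one linear system solve, which matches the quoted observation that such solvers reduce ℓ_p regression to a sequence of linear system solves.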
“…Their focus is on fast computation of the p-norm flows with respect to the proposed cost function and the constraints, either exactly or approximately, and they use it to design fast algorithms for approximating maximum flow (see, e.g., [AS20]), as well as applications to graph clustering (see, e.g., [FWY20]).…”
Section: Related Work · mentioning · confidence: 99%
“…Various works give almost-linear-time algorithms for computing the d_p-distance, both in the weighted and unweighted case; see, e.g., [ABKS21; AS20], where they refer to it as p-norm flows. It is immediate that this definition yields a metric on V.…”
Section: Introduction · mentioning · confidence: 99%
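As an executable illustration of the d_p-distance (a hypothetical sketch assuming cvxpy and networkx are available; not code from the cited works), the distance between s and t is the minimum p-norm of a flow routing one unit from s to t:

```python
import cvxpy as cp
import networkx as nx
import numpy as np

def dp_distance(G, s, t, p):
    """d_p-distance between s and t on an unweighted graph G:
    the minimum ||f||_p over flows f routing one unit from s to t."""
    nodes = list(G.nodes)
    # Signed vertex-edge incidence matrix, shape (n, m).
    B = nx.incidence_matrix(G, nodelist=nodes, oriented=True).toarray()
    demand = np.zeros(len(nodes))
    demand[nodes.index(s)] = -1.0   # one unit leaves s
    demand[nodes.index(t)] = 1.0    # one unit arrives at t
    f = cp.Variable(B.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm(f, p)), [B @ f == demand])
    prob.solve()
    return prob.value
```

For p = 1 this recovers the ordinary shortest-path distance, consistent with the quoted claim that d_p yields a metric on V.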