Given a source of i.i.d. samples of edges of an input graph G with n vertices and m edges, how many samples does one need to compute a constant-factor approximation to the maximum matching size in G? Moreover, is it possible to obtain such an estimate in a small amount of space? We show that, on the one hand, this problem cannot be solved using a nontrivially sublinear (in m) number of samples: m^{1-o(1)} samples are needed. On the other hand, a surprisingly space-efficient algorithm for processing the samples exists: O(log^2 n) bits of space suffice to compute an estimate.

Our main technical tool is a new peeling-type algorithm for matching that we simulate using a recursive sampling process, which crucially ensures that local neighborhood information from 'dense' regions of the graph is provided at appropriately higher sampling rates. We show that a delicate balance between exploration depth and sampling rate allows our simulation to not lose precision over a logarithmic number of levels of recursion and achieve a constant-factor approximation. The previous best result on matching size estimation from random samples was a log^{O(1)} n approximation [Kapralov et al.'14], which completely avoided such delicate trade-offs because the approximation factor was much larger than the exploration depth.

Our algorithm also yields a constant-factor approximate local computation algorithm (LCA) for matching with O(d log n) exploration starting from any vertex. Previous approaches were based on local simulations of randomized greedy, which take O(d) time in expectation over the starting vertex or edge (Yoshida et al.'09, Onak et al.'12), and could not achieve a better than d^2 runtime. Interestingly, we also show that, unlike our algorithm, the local simulation of randomized greedy that is the basis of the most efficient prior results does take Ω(d^2) time for a worst-case edge even for d = exp(Θ(√(log n))).

Remark 4.
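For context, the prior local-simulation technique named above (the randomized-greedy oracle in the style of Yoshida et al. and Onak et al.) can be sketched as follows. This is a toy illustration of that earlier approach, not the paper's new peeling algorithm; the function names and the example graph are ours. Each edge gets a random rank, and an edge belongs to the greedy matching iff every adjacent edge of lower rank does not:

```python
import random
from itertools import combinations

def build_adjacency(edges):
    """Map each edge to the edges sharing an endpoint with it."""
    adj = {e: [] for e in edges}
    for e, f in combinations(edges, 2):
        if set(e) & set(f):
            adj[e].append(f)
            adj[f].append(e)
    return adj

def matching_oracle(edge, rank, adj, memo):
    """True iff `edge` is in the greedy matching obtained by inserting
    edges in increasing rank order: an edge is matched iff all of its
    lower-ranked neighbors are unmatched (recursion is on strictly
    decreasing ranks, so it terminates)."""
    if edge in memo:
        return memo[edge]
    ans = True
    for nbr in sorted(adj[edge], key=rank.get):
        if rank[nbr] >= rank[edge]:
            break
        if matching_oracle(nbr, rank, adj, memo):
            ans = False
            break
    memo[edge] = ans
    return ans

def estimate_matching_size(edges, samples=2000, seed=0):
    """Estimate matching size as m * (fraction of sampled edges matched)."""
    rng = random.Random(seed)
    adj = build_adjacency(edges)
    rank = {e: rng.random() for e in edges}
    memo = {}
    hits = sum(matching_oracle(rng.choice(edges), rank, adj, memo)
               for _ in range(samples))
    return len(edges) * hits / samples

# Path on 5 vertices: every maximal matching has size exactly 2.
path = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(round(estimate_matching_size(path)))
```

Since a greedy matching is maximal, this estimates the maximum matching size within a factor of 2; the Ω(d^2) lower bound above concerns the exploration cost of exactly this kind of recursive oracle on worst-case edges.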
Note that here L(w) is a random variable independently sampled from the distribution of L(w), similarly to L_a in Eq. (11).

Remark 5. Here U(E) denotes the uniform distribution over E. Also, for a vertex v and an edge e, we denote by v ∈ e the fact that e is incident to v; in this case we denote the other endpoint of e by e \ v. We use this notation heavily throughout the analysis.

The remaining cases, i.e. the ones where q is not in the middle, are very simple.

Case 2: r ≤ p ≤ q. Then … On the other hand, if r = …, then … by Fact 3, since … ≤ 1/2.

Case 5: p ≤ q ≤ r. We consider two subcases.

(a) p ≤ 4ε. Then q cannot be greater than 8ε: indeed, this would mean by Fact 4 that …, which is a contradiction. Similarly, r cannot be greater than 16ε: indeed, this would mean by Fact 4 that D_KL(q ‖ r) > D_KL(8ε ‖ 16ε) ≥ 2ε, which is also a contradiction. Ultimately, r ≤ 16ε, so D_KL(p ‖ r) ≤ D_KL(0 ‖ 16ε) ≤ 32ε by Fact 3, since 16ε ≤ 1/2.
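The two KL-divergence facts invoked in this case analysis can be sanity-checked numerically. A minimal sketch, where the function name d_kl is ours and the readings of the facts are inferred from context: Fact 3 is taken to be the bound D_KL(0 ‖ δ) ≤ 2δ for δ ≤ 1/2, and Fact 4 the monotonicity of D_KL(p ‖ q) in q for q ≥ p:

```python
import math

def d_kl(p, q):
    """Binary KL divergence D(p || q) in nats, with the 0*log(0) = 0 convention."""
    def term(a, b):
        return 0.0 if a == 0.0 else a * math.log(a / b)
    return term(p, q) + term(1.0 - p, 1.0 - q)

# Fact 3 (as read from context): D(0 || delta) = ln(1/(1 - delta)) <= 2*delta
# whenever delta <= 1/2.
for delta in (0.01, 0.1, 0.3, 0.5):
    assert d_kl(0.0, delta) <= 2 * delta

# Fact 4 (as read from context): for fixed p, D(p || q) grows as q moves
# further above p, e.g. D(8e || 16e) <= D(8e || 32e).
eps = 0.01
assert d_kl(8 * eps, 16 * eps) <= d_kl(8 * eps, 32 * eps)

# The final step of Case 5(a): D(0 || 16e) <= 32e, since 16e <= 1/2.
assert d_kl(0.0, 16 * eps) <= 32 * eps
```

The last assertion mirrors the chain D_KL(p ‖ r) ≤ D_KL(0 ‖ 16ε) ≤ 32ε: for p ≤ r, the divergence is maximized at p = 0, and Fact 3 then bounds it linearly in r.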
Cut and spectral sparsification of graphs have numerous applications, including speeding up algorithms for cut problems and Laplacian solvers. These powerful notions have recently been extended to hypergraphs, which are much richer and may offer new applications. However, the current bounds on the size of hypergraph sparsifiers are not as tight as the corresponding bounds for graphs.

Our first result is a polynomial-time algorithm that, given a hypergraph on n vertices with maximum hyperedge size r, outputs an ε-spectral sparsifier with O*(nr) hyperedges, where O* suppresses (ε^{-1} log n)^{O(1)} factors. This size bound improves the two previous bounds: O*(n^3) [Soma and Yoshida, SODA'19] and O*(nr^3) [Bansal, Svensson and Trevisan, FOCS'19]. Our main technical tool is a new method for proving concentration of the nonlinear analogue of the quadratic form of the Laplacian for hypergraph expanders.

We complement this with lower bounds on the bit complexity of any compression scheme that (1+ε)-approximates all the cuts in a given hypergraph, and hence also on the bit complexity of every ε-cut/spectral sparsifier. These lower bounds are based on Ruzsa–Szemerédi graphs, and a particular instantiation yields an Ω(nr) lower bound on the bit complexity even for fixed constant ε. This is tight up to polylogarithmic factors in n, due to the recent hypergraph cut sparsifiers of [Chen, Khanna and Nagda, FOCS'20].

Finally, for directed hypergraphs, we present an algorithm that computes an ε-spectral sparsifier with O*(n^2 r^3) hyperarcs, where r is the maximum size of a hyperarc. For small r, this improves over the O*(n^3) bound known from [Soma and Yoshida, SODA'19], and approaches the trivial lower bound of Ω(n^2) hyperarcs.
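For intuition, the "nonlinear analogue of the quadratic form" mentioned above is, in the standard formulation for hypergraph spectral sparsification, Q_H(x) = Σ_e w_e · max_{u,v∈e} (x_u − x_v)^2, which on {0,1}-valued x reduces to the cut weight. A minimal sketch, where the toy hypergraph and function names are ours, and where the sparsifier check tests only a necessary condition on supplied vectors rather than the full for-all-x guarantee:

```python
def hypergraph_energy(hyperedges, weights, x):
    """Nonlinear quadratic form of a hypergraph:
    Q(x) = sum_e w_e * max_{u,v in e} (x_u - x_v)^2.
    On {0,1}-valued x this equals the weight of the cut induced by x."""
    total = 0.0
    for e, w in zip(hyperedges, weights):
        vals = [x[v] for v in e]
        total += w * (max(vals) - min(vals)) ** 2
    return total

def is_eps_sparsifier(h_edges, h_w, s_edges, s_w, test_vectors, eps):
    """Check Q_sparse(x) in [(1 - eps) Q(x), (1 + eps) Q(x)] on the given
    test vectors (a necessary condition only)."""
    for x in test_vectors:
        q = hypergraph_energy(h_edges, h_w, x)
        qs = hypergraph_energy(s_edges, s_w, x)
        if not ((1 - eps) * q <= qs <= (1 + eps) * q):
            return False
    return True

# Toy hypergraph on 4 vertices. Both hyperedges cross the cut {0,1} vs {2,3},
# so Q(x_cut) = 1*1 + 2*1 = 3.
edges = [(0, 1, 2), (1, 2, 3)]
w = [1.0, 2.0]
x_cut = {0: 1, 1: 1, 2: 0, 3: 0}
print(hypergraph_energy(edges, w, x_cut))
```

Unlike the graph case, Q is not a quadratic polynomial in x because of the max over vertex pairs, which is exactly why new concentration arguments are needed for sparsifying it.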