We analyze several quadratic-time solvable problems, and we show that these problems are not solvable in truly subquadratic time (that is, in time O(n^{2−ε}) for some ε > 0) unless the well-known Strong Exponential Time Hypothesis (in short, SETH) is false. In particular, we start from an artificial quadratic-time solvable variation of the k-SAT problem (already introduced and used in the literature) and we construct a web of Karp reductions, proving that a truly subquadratic-time algorithm for any of the problems in the web falsifies SETH. Some of these results were already known, while others are, as far as we know, new. The new problems considered are: computing the betweenness centrality of a vertex (the same result was proved independently by Abboud et al.), computing the minimum closeness centrality in a graph, computing the hyperbolicity of a graph, and computing the subset graph of a collection of sets. On the other hand, we show that testing whether a directed graph is transitive and testing whether a graph is a comparability graph are subquadratic-time solvable (our algorithm is practical, since it is not based on intricate matrix multiplication algorithms).
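For reference, the hypothesis underlying all of these lower bounds can be stated as follows (the standard formulation from the literature, reproduced here for convenience, not a definition specific to this paper):

```latex
% Strong Exponential Time Hypothesis (SETH), standard formulation:
% for every eps > 0 there is a clause width k such that k-SAT on
% n variables cannot be solved in time O(2^{(1 - eps) n}).
\forall \varepsilon > 0 \;\; \exists k \geq 3 : \quad
  k\text{-SAT} \notin \mathrm{DTIME}\!\bigl(2^{(1-\varepsilon)n}\bigr)
% Contrapositive used in the paper: an O(n^{2-\varepsilon}) algorithm
% for any problem in the web of reductions would refute SETH.
```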
The (Gromov) hyperbolicity is a topological property of a graph, which has recently been applied in several different contexts, such as the design of routing schemes, network security, computational biology, the analysis of graph algorithms, and the classification of complex networks. Computing the hyperbolicity of a graph can be very time-consuming: indeed, the best available algorithm has running time O(n^{3.69}), which is clearly prohibitive for big graphs. In this paper, we provide a new and more efficient algorithm: although its worst-case complexity is O(n^4), in practice it is much faster, allowing, for the first time, the computation of the hyperbolicity of graphs with up to 200,000 nodes. We experimentally show that our new algorithm drastically outperforms the best previously available algorithms, by analyzing a big dataset of real-world networks. Finally, we apply the new algorithm to compute the hyperbolicity of random graphs generated with the Erdős–Rényi model, the Chung–Lu model, and the Configuration Model.
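For concreteness, hyperbolicity can be computed directly from its four-point definition: for every quadruple of vertices, take the three pairwise distance sums, and let δ be half the largest gap between the two biggest sums over all quadruples. The brute-force sketch below is an illustration of the definition, not the paper's algorithm; it assumes a precomputed all-pairs distance matrix `dist`.

```python
from itertools import combinations

def hyperbolicity(dist, n):
    """Naive four-point-condition computation over all O(n^4) quadruples.

    dist: precomputed all-pairs distance matrix (dist[u][v] = d(u, v)).
    Returns the Gromov hyperbolicity delta of the graph.
    """
    delta = 0
    for a, b, c, d in combinations(range(n), 4):
        # The three pairwise distance sums of the quadruple.
        s1 = dist[a][b] + dist[c][d]
        s2 = dist[a][c] + dist[b][d]
        s3 = dist[a][d] + dist[b][c]
        big, second, _ = sorted((s1, s2, s3), reverse=True)
        # delta is half the gap between the two largest sums,
        # maximized over all quadruples.
        delta = max(delta, (big - second) / 2)
    return delta
```

Precomputing `dist` with n BFSs takes O(nm) time, so the quadruple enumeration dominates, matching the O(n^4) worst case mentioned above.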
Centrality indices are widely used analytic measures for the importance of nodes in a network. Closeness centrality is very popular among these measures. For a single node v, it takes the sum of the distances of v to all other nodes into account. The currently best algorithms in practical applications for computing the closeness of all nodes exactly in unweighted graphs are based on a breadth-first search (BFS) from every node. Thus, even for sparse graphs, these algorithms require quadratic running time in the worst case, which is prohibitive for large networks. In many relevant applications, however, it is unnecessary to compute closeness values for all nodes. Instead, one requires only the k nodes with the highest closeness values, in descending order. Thus, we present a new algorithm for computing this top-k ranking in unweighted graphs. Following the rationale of previous work, our algorithm significantly reduces the number of traversed edges. It does so by computing upper bounds on the closeness and stopping the current BFS when k nodes already have higher closeness than the bounds computed for the other nodes. In our experiments with real-world and synthetic instances of various types, one of these new bounds is good for small-world graphs with low diameter (such as social networks), while the other one excels for graphs with high diameter (such as road networks). Combining them yields an algorithm that is faster than the state of the art for top-k computations on all test instances, by a wide margin for high-diameter graphs.
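To illustrate the BFS-cutting principle: after a BFS from s has explored all nodes up to distance d, every unvisited node lies at distance at least d+1, so the partial distance sum already yields an upper bound on the closeness of s. The sketch below uses this simple level-by-level bound of our own for connected undirected graphs (the paper's actual bounds are sharper) and aborts a BFS as soon as the bound drops below the current k-th best closeness.

```python
from collections import deque
import heapq

def topk_closeness(adj, k):
    """Top-k closeness ranking by pruned BFS (illustrative sketch).

    adj: adjacency list of a connected, unweighted, undirected graph.
    Closeness of v is (n - 1) / (sum of distances from v).
    """
    n = len(adj)
    top = []  # min-heap of (closeness, node), size <= k
    for s in range(n):
        farness, visited = 0, 1
        seen = [False] * n
        seen[s] = True
        frontier, d = [s], 0
        pruned = False
        while frontier:
            d += 1
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if not seen[w]:
                        seen[w] = True
                        nxt.append(w)
            farness += d * len(nxt)
            visited += len(nxt)
            # Every unvisited node is at distance >= d + 1, so the
            # closeness of s is at most this bound.
            remaining = n - visited
            bound = (n - 1) / (farness + (d + 1) * remaining) if remaining else 0
            if len(top) == k and remaining and bound <= top[0][0]:
                pruned = True  # s cannot enter the top-k: abort this BFS
                break
            frontier = nxt
        if not pruned:
            c = (n - 1) / farness if farness else 0.0
            heapq.heappush(top, (c, s))
            if len(top) > k:
                heapq.heappop(top)
    return sorted(top, reverse=True)
```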
We present KADABRA, a new algorithm to approximate betweenness centrality in directed and undirected graphs, which significantly outperforms all previous approaches on real-world complex networks. The efficiency of the new algorithm relies on two new theoretical contributions, of independent interest. The first contribution focuses on sampling shortest paths, a subroutine used by most algorithms that approximate betweenness centrality. We show that, on realistic random graph models, we can perform this task in time |E|^{1/2+o(1)} with high probability, obtaining a significant speedup with respect to the Θ(|E|) worst-case performance. We experimentally show that this new technique achieves similar speedups on real-world complex networks as well. The second contribution is a new rigorous application of the adaptive sampling technique. This approach decreases the total number of shortest paths that need to be sampled to compute all betweenness centralities with a given absolute error, and it also handles more general problems, such as computing the k most central nodes. Furthermore, our analysis is general, and it might be extended to other settings as well. (As explained in Section 2, to simplify notation we consider the normalized betweenness centrality.)
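For context, the basic sampling scheme that both contributions build on can be sketched as follows: pick a random node pair, draw one of their shortest paths uniformly at random, and credit its internal vertices. The code below is a plain fixed-sample-size illustration for connected undirected graphs, using an ordinary BFS instead of the paper's balanced bidirectional BFS and omitting the adaptive stopping rule.

```python
import random
from collections import deque

def approx_betweenness(adj, n_samples, rng=random.Random(0)):
    """Approximate normalized betweenness by sampling shortest paths.

    Each sample picks a random pair (s, t), chooses one shortest s-t
    path uniformly at random, and credits its internal vertices;
    b(v) is then estimated as (paths through v) / n_samples.
    """
    n = len(adj)
    score = [0.0] * n
    for _ in range(n_samples):
        s, t = rng.sample(range(n), 2)
        # BFS from s, counting the number of shortest paths (sigma).
        dist = [-1] * n
        sigma = [0] * n
        dist[s], sigma[s] = 0, 1
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if dist[w] == -1:
                    dist[w] = dist[u] + 1
                    q.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
        # Walk back from t to s, picking each predecessor with
        # probability proportional to sigma: this yields a uniformly
        # random shortest s-t path.
        v = t
        while v != s:
            preds = [u for u in adj[v] if dist[u] == dist[v] - 1]
            weights = [sigma[u] for u in preds]
            v = rng.choices(preds, weights=weights)[0]
            if v != s:
                score[v] += 1.0  # credit internal vertices only
    return [x / n_samples for x in score]
```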
In this paper, we propose a new algorithm that computes the radius and the diameter of a weakly connected digraph G = (V, E) by finding bounds through heuristics and improving them until they are validated. Although the worst-case running time is O(|V||E|), we experimentally show that it performs much better on real-world networks, finding the radius and diameter values after 10–100 BFSs instead of |V| BFSs (independently of the value of |V|), and thus having running time O(|E|) in practice. As far as we know, this is the first algorithm able to compute the diameter of weakly connected digraphs, apart from the naive algorithm, which runs in time Ω(|V||E|) by performing a BFS from each node. In the particular cases of strongly connected directed or connected undirected graphs, we compare our algorithm with known approaches by performing experiments on a dataset composed of several real-world networks of different kinds. These experiments show that, despite its generality, the new algorithm outperforms all previous methods, both in the radius and in the diameter computation, both in the directed and in the undirected case, and both in average running time and in robustness. Finally, as an application example, we use the new algorithm to determine the solvability over time of the "Six Degrees of Kevin Bacon" game and of the "Six Degrees of Wikipedia" game. As a consequence, we compute for the first time the exact value of the radius and the diameter of the whole Wikipedia digraph.
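The bound-and-refine principle can be illustrated on connected undirected graphs (a textbook sketch, not the paper's actual heuristics for selecting nodes or handling weak connectivity): every BFS from a node u yields its eccentricity ecc(u), which pins the diameter D between ecc(u) and 2·ecc(u); the bounds are tightened until they meet.

```python
from collections import deque
import random

def diameter_bounds(adj, rng=random.Random(0)):
    """Refine lower/upper diameter bounds via repeated BFS (sketch).

    Each BFS from u gives ecc(u), and ecc(u) <= D <= 2 * ecc(u).
    We keep the best bounds over the nodes processed so far and stop
    as soon as they coincide.
    """
    n = len(adj)
    lower, upper = 0, 2 * (n - 1)
    order = list(range(n))
    rng.shuffle(order)  # naive node selection; the paper is smarter
    for u in order:
        # BFS from u to compute its eccentricity.
        dist = [-1] * n
        dist[u] = 0
        q = deque([u])
        ecc = 0
        while q:
            v = q.popleft()
            ecc = dist[v]  # BFS pops in nondecreasing distance order
            for w in adj[v]:
                if dist[w] == -1:
                    dist[w] = dist[v] + 1
                    q.append(w)
        lower = max(lower, ecc)
        upper = min(upper, 2 * ecc)
        if lower == upper:
            break  # bounds validated: diameter found exactly
    return lower, upper
```

On real-world networks the gap between the largest eccentricity seen and twice the smallest often closes after a handful of BFSs, which is exactly the behavior the paper's heuristics exploit and strengthen.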