Real-world networks, like social networks or the internet infrastructure, have structural properties such as large clustering coefficients that can best be described in terms of an underlying geometry. This is why the focus of the literature on theoretical models for real-world networks has shifted from classic models without geometry, such as Chung-Lu random graphs, to modern geometry-based models, such as hyperbolic random graphs. With this paper we contribute to the theoretical analysis of these modern, more realistic random graph models. Instead of studying hyperbolic random graphs directly, we use a generalization that we call geometric inhomogeneous random graphs (GIRGs). Since we ignore constant factors in the edge probabilities, GIRGs are technically simpler (specifically, we avoid hyperbolic cosines) while preserving the qualitative behaviour of hyperbolic random graphs, and we suggest replacing hyperbolic random graphs with this new model in future theoretical studies. We prove the following fundamental structural and algorithmic results on GIRGs. (1) As our main contribution we provide a sampling algorithm that generates a random graph from our model in expected linear time, improving the best-known sampling algorithm for hyperbolic random graphs by a substantial factor of O(√n). (2) We establish that GIRGs have clustering coefficients in Ω(1), (3) we prove that GIRGs have small separators, i.e., it suffices to delete a sublinear number of edges to break the giant component into two large pieces, and (4) we show how to compress GIRGs using an expected linear number of bits.
* We choose a toroidal ground space for the technical simplicity that comes with its symmetry and in order to obtain hyperbolic random graphs as a special case. The results of this paper stay true if T^d is replaced, say, by the d-dimensional unit cube [0, 1]^d.
† A major difference between hyperbolic random graphs and our generalisation is that we ignore constant factors in the edge probabilities p_uv. This allows us to greatly simplify the edge probability expressions, thus reducing the technical overhead. Note that the term min{1, ·} is necessary, as the product w_u w_v may be larger than W.
‡ We say that an event holds with high probability (whp) if it holds with probability 1 − n^(−ω(1)).
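To make the model concrete, the following is a minimal sketch of a GIRG sampler in Python. It uses one common instantiation of the model (power-law weights with exponent tau, uniform positions on the torus, and edge probability min{1, ((w_u w_v / W) / r^d)^alpha}); the exact formula and parameterisation are assumptions for the sketch, since the abstract only fixes the model up to constant factors. The sketch uses the naive all-pairs loop and therefore runs in quadratic time; the paper's expected-linear-time algorithm avoids exactly this loop via a geometric partitioning of the torus.

```python
# Illustrative (quadratic-time) sampler for a geometric inhomogeneous random graph (GIRG).
# This is a minimal sketch of the model, NOT the paper's expected-linear-time algorithm.
# The concrete edge-probability formula, weight distribution and parameters below are
# assumptions made for illustration only.
import random

def sample_girg(n, d=2, alpha=2.0, tau=2.5, seed=None):
    rng = random.Random(seed)
    # Power-law weights with exponent tau (assumed weight distribution).
    weights = [(1.0 - rng.random()) ** (-1.0 / (tau - 1.0)) for _ in range(n)]
    W = sum(weights)
    # Uniform positions on the d-dimensional torus T^d = [0,1)^d.
    positions = [[rng.random() for _ in range(d)] for _ in range(n)]

    def torus_dist(x, y):
        # Infinity-norm distance on the torus (each coordinate wraps around).
        return max(min(abs(a - b), 1.0 - abs(a - b)) for a, b in zip(x, y))

    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            r = torus_dist(positions[u], positions[v])
            # p_uv = min{1, ((w_u * w_v / W) / r^d)^alpha}, up to the constants the model ignores.
            p = min(1.0, ((weights[u] * weights[v] / W) / max(r, 1e-12) ** d) ** alpha)
            if rng.random() < p:
                edges.append((u, v))
    return weights, positions, edges

weights, positions, edges = sample_girg(n=500, seed=42)
print(len(edges), "edges sampled")
```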
It is known that the (1+1)-EA with mutation rate c/n optimises every monotone function efficiently if c < 1, and needs exponential time on some monotone functions (HotTopic functions) if c ≥ 2.2. We study the same question for a large variety of algorithms, in particular for the (1+λ)-EA, the (µ+1)-EA, the (µ+1)-GA, their fast counterparts such as the fast (1+1)-EA, and for the (1+(λ,λ))-GA. We find that all considered mutation-based algorithms show a similar dichotomy for HotTopic functions, or even for all monotone functions. For the (1+(λ,λ))-GA, this dichotomy is in the parameter cγ, which is the expected number of bit flips in an individual after mutation and crossover, neglecting selection. For the fast algorithms, the dichotomy is in m2/m1, where m1 and m2 are the first and second falling moments of the number of bit flips. Surprisingly, the range of efficient parameters is affected neither by the population size µ nor by the offspring population size λ. The picture changes completely if crossover is allowed: the genetic algorithms (µ+1)-GA and (µ+1)-fGA are efficient for arbitrary mutation strengths if µ is large enough.
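For illustration, here is a minimal sketch of the heavy-tailed ("fast") mutation operator that the fast algorithms above are typically equipped with. The concrete choices (a power-law distribution with exponent beta on the number k of flipped bits, a cutoff at n/2, and flipping exactly k uniformly chosen bits) are assumptions made for the sketch, not taken from the abstract; the falling moments m1 = E[k] and m2 = E[k(k−1)] of this distribution are the quantities entering the dichotomy parameter m2/m1.

```python
# Sketch of a heavy-tailed ("fast") mutation operator: the number k of flipped bits is
# drawn from a power law with exponent beta (assumed cutoff at n/2), and exactly k
# uniformly chosen bits are flipped.
import random

def heavy_tailed_mutation(x, beta=1.5, rng=random):
    n = len(x)
    ks = list(range(1, max(2, n // 2)))          # candidate numbers of bit flips
    w = [k ** (-beta) for k in ks]               # power-law weights P[k] ~ k^(-beta)
    k = rng.choices(ks, weights=w, k=1)[0]
    y = x[:]
    for i in rng.sample(range(n), k):            # flip exactly k distinct positions
        y[i] = 1 - y[i]
    return y

def falling_moments(beta, n):
    # m1 = E[k] and m2 = E[k(k-1)] of the flip distribution (dichotomy parameter m2/m1).
    ks = list(range(1, max(2, n // 2)))
    w = [k ** (-beta) for k in ks]
    Z = sum(w)
    m1 = sum(k * wk for k, wk in zip(ks, w)) / Z
    m2 = sum(k * (k - 1) * wk for k, wk in zip(ks, w)) / Z
    return m1, m2

y = heavy_tailed_mutation([0] * 100, beta=1.5)
print(sum(y), "bits flipped;", falling_moments(beta=1.5, n=100))
```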
Black-box complexity theory provides lower bounds for the runtime of black-box optimizers like evolutionary algorithms and other search heuristics, and serves as an inspiration for the design of new genetic algorithms. Several black-box models covering different classes of algorithms exist, each highlighting a different aspect of the algorithms under consideration. In this work we add to the existing black-box notions a new elitist black-box model, in which algorithms are required to base all decisions solely on (the relative performance of) a fixed number of the best search points sampled so far. Our elitist model thus combines features of the ranking-based and the memory-restricted black-box models with an enforced usage of truncation selection. We provide several examples for which the elitist black-box complexity is exponentially larger than the respective complexities in all previous black-box models, thus showing that the elitist black-box complexity can be much closer to the runtime of typical evolutionary algorithms. We also introduce the concept of p-Monte Carlo black-box complexity, which measures the time it takes to optimize a problem with failure probability at most p. Even for small p, the p-Monte Carlo black-box complexity of a function class can be smaller by an exponential factor than its typically regarded Las Vegas complexity (which measures the expected time it takes to optimize the class).
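The following schematic skeleton illustrates what an algorithm in the elitist black-box model looks like: it stores only a fixed number µ of the best search points seen so far, bases its decisions only on those points and their ranking (not on absolute fitness values), and uses truncation selection. The variation operator and the stopping criterion below are placeholders chosen for the sketch, not any specific algorithm from the paper.

```python
# Schematic skeleton of an elitist black-box algorithm: limited memory of mu best points,
# ranking-based decisions, and enforced truncation selection. The proposal step (a single
# uniform bit flip) and the OneMax stopping rule are placeholders for illustration.
import random

def elitist_black_box_run(f, n, mu=1, budget=10_000, rng=random):
    # Memory: only mu search points, kept sorted by (hidden) fitness ranking.
    memory = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    memory.sort(key=f, reverse=True)
    for _ in range(budget):
        parent = rng.choice(memory)              # decision based on stored points only
        child = parent[:]
        i = rng.randrange(n)
        child[i] = 1 - child[i]                  # placeholder variation operator
        # Truncation selection: keep the mu best of memory + child.
        memory = sorted(memory + [child], key=f, reverse=True)[:mu]
        if sum(memory[0]) == n:                  # example stopping criterion for OneMax
            break
    return memory[0]

best = elitist_black_box_run(f=sum, n=20, mu=2, budget=5000)
print(sum(best), "ones found")
```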
We present a tight analysis for the well-studied randomized 3-majority dynamics for stabilizing consensus, hence answering the main open question of Becchetti et al. [SODA'16]. Consider a distributed system of n nodes, each initially holding an opinion in {1, 2, ..., k}. The system should converge to a setting where all (non-corrupted) nodes hold the same opinion. This consensus opinion should be valid, meaning that it should be among the initially supported opinions, and the (fast) convergence should happen even in the presence of a malicious adversary who can corrupt a bounded number of nodes per round and in particular modify their opinions. A well-studied distributed algorithm for this problem is the 3-majority dynamics, which works as follows: per round, each node gathers three opinions, say its own and those of two other nodes sampled at random, and then sets its opinion equal to the majority of this set; ties are broken arbitrarily, e.g., towards the node's own opinion. Becchetti et al. [SODA'16] showed that the 3-majority dynamics converges to consensus in O((k^2 √log n + k log n)(k + log n)) rounds, even in the presence of a limited adversary. We prove that, even with a stronger adversary, the convergence happens within O(k log n) rounds. This bound is known to be optimal.
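The dynamics is simple enough to state directly in code. Below is a minimal synchronous simulation of the 3-majority dynamics without an adversary; for simplicity the two additional opinions are sampled uniformly with replacement from all nodes (possibly including the node itself), which is an assumption of the sketch rather than the exact sampling model of the paper.

```python
# Minimal synchronous simulation of the 3-majority dynamics (no adversary):
# each node looks at its own opinion plus two randomly sampled opinions and adopts
# the majority, breaking ties towards its own opinion.
import random
from collections import Counter

def three_majority_round(opinions, rng=random):
    n = len(opinions)
    new = []
    for i in range(n):
        sample = [opinions[i], opinions[rng.randrange(n)], opinions[rng.randrange(n)]]
        counts = Counter(sample)
        value, top = counts.most_common(1)[0]
        if top >= 2:
            new.append(value)          # strict majority among the three opinions
        else:
            new.append(opinions[i])    # all three differ: tie broken towards own opinion
    return new

def run_until_consensus(n=1000, k=5, max_rounds=10_000, seed=0):
    rng = random.Random(seed)
    opinions = [rng.randrange(k) for _ in range(n)]
    for t in range(max_rounds):
        if len(set(opinions)) == 1:
            return t, opinions[0]      # rounds needed and the consensus opinion
        opinions = three_majority_round(opinions, rng)
    return max_rounds, None

print(run_until_consensus())
```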
One of the simplest randomized greedy optimization algorithms is the following evolutionary algorithm, which aims at maximizing a pseudo-Boolean function f : {0, 1}^n → ℝ. The algorithm starts with a random search point ξ ∈ {0, 1}^n, and in each round it flips each bit of ξ independently with probability c/n, where c > 0 is a fixed constant. The offspring ξ′ created in this way replaces ξ if and only if f(ξ′) ≥ f(ξ). The analysis of the runtime of this simple algorithm for monotone and for linear functions turned out to be highly non-trivial. In this paper we review known results and provide new and self-contained proofs of partly stronger results.
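The algorithm described above translates almost verbatim into code. The sketch below uses OneMax (the number of one-bits) as an example objective and stops when its optimum is reached; both choices are made only for illustration, since the paper is concerned with general monotone and linear functions.

```python
# Direct sketch of the (1+1)-EA described above: start from a uniformly random bit
# string, flip each bit independently with probability c/n, and accept the offspring
# if its fitness is at least as good. OneMax is used only as an example objective.
import random

def one_plus_one_ea(f, n, c=1.0, max_iters=100_000, rng=random):
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for t in range(1, max_iters + 1):
        y = [b ^ 1 if rng.random() < c / n else b for b in x]   # standard bit mutation
        fy = f(y)
        if fy >= fx:                 # accept if the offspring is at least as good
            x, fx = y, fy
        if fx == n:                  # optimum of OneMax reached (example stopping rule)
            return t
    return max_iters

onemax = sum                         # OneMax: number of one-bits in the string
print(one_plus_one_ea(onemax, n=100, c=1.0), "iterations until the optimum")
```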