We consider self-stabilizing algorithms for computing a Maximal Independent Set (MIS) in the extremely weak beeping communication model. The model consists of an anonymous network with synchronous rounds. In each round, each vertex can optionally transmit a signal (a beep) to all its neighbors. After the transmission, each vertex can only distinguish between receiving no signal and receiving at least one signal. We assume that vertices have some knowledge about the topology of the network.

We revisit the non-self-stabilizing algorithm proposed by Jeavons, Scott, and Xu (2013), which computes an MIS in the beeping model. We enhance this algorithm to be self-stabilizing and explore two variants, which differ in the knowledge about the topology available to the vertices. In the first variant, every vertex knows an upper bound on the maximum degree Δ of the graph. For this case, we prove that the proposed self-stabilizing version retains the run-time of the original algorithm, i.e., it stabilizes after O(log n) rounds w.h.p. on any n-vertex graph. In the second variant, each vertex only knows an upper bound on its own degree. For this case, we prove that the algorithm stabilizes after O(log n · log log n) rounds on any n-vertex graph, w.h.p.
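As a point of reference for the communication model described above, the following is a minimal sketch (not taken from the paper) of one synchronous round in the beeping model: every vertex either beeps or stays silent, and afterwards learns only whether at least one of its neighbors beeped. The adjacency representation and the `decide_beep` policy are illustrative assumptions.

```python
# Hedged sketch of one synchronous round in the beeping model.
# `adj` maps each vertex to the set of its neighbors; `decide_beep` is an
# arbitrary per-vertex policy (e.g., beep with some probability). Both are
# assumptions made for illustration, not details from the paper.
import random

def beeping_round(adj, decide_beep):
    # Phase 1: every vertex independently chooses whether to beep.
    beeped = {v: decide_beep(v) for v in adj}
    # Phase 2: each vertex only learns whether at least one neighbor beeped;
    # it cannot tell how many neighbors beeped or which ones.
    heard = {v: any(beeped[u] for u in adj[v]) for v in adj}
    return beeped, heard

# Example policy: a vertex beeps with probability 1/2.
def coin_flip_policy(_v):
    return random.random() < 0.5
```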
We study parallel Load Balancing protocols for the client-server distributed model defined as follows. There is a set C of n clients and a set S of n servers, where each client has (at most) a constant number d ≥ 1 of requests that must be assigned to some server. The client set and the server set are connected to each other via a fixed bipartite graph: the requests of client v can only be sent to the servers in its neighborhood N(v). The goal is to assign every client request so as to minimize the maximum load of the servers. In this setting, efficient parallel protocols are available only for dense topologies. In particular, a simple protocol, named raes, has recently been introduced by Becchetti et al. [1] for regular dense bipartite graphs. They show that this symmetric, non-adaptive protocol achieves constant maximum load with parallel completion time O(log n) and overall work O(n), w.h.p.

Motivated by proximity constraints arising in some client-server systems, we analyze raes over almost-regular bipartite graphs where nodes may have neighborhoods of small size. In detail, we prove that, w.h.p., the raes protocol maintains the same performance as above (in terms of maximum load, completion time, and work complexity) on any almost-regular bipartite graph with degree Ω(log² n).

Our analysis significantly departs from that in [1], since it must cope with non-trivial stochastic-dependence issues on the random choices of the algorithmic process, which are due to the worst-case, sparse topology of the underlying graph.
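The abstract does not spell out the protocol itself, but a raes-style parallel round can be sketched roughly as follows: every still-unassigned request is sent to a uniformly random neighboring server, and each server keeps requests only while its load stays below a fixed capacity, rejecting the overflow to be retried in the next round. The capacity parameter `cap`, the tie-breaking rule, and the data layout are assumptions made for illustration, not the authors' specification.

```python
# Hedged sketch of a raes-style parallel round (illustrative only; the exact
# acceptance rule and the capacity constant `cap` are assumptions).
import random

def raes_round(neighbors, pending, load, cap):
    """One parallel round: every pending (client, request_id) pair picks a
    uniformly random neighboring server; each server accepts requests while
    its load is below `cap` and rejects the rest, which are retried later."""
    proposals = {}  # server -> list of (client, request_id)
    for client, req in pending:
        s = random.choice(neighbors[client])
        proposals.setdefault(s, []).append((client, req))

    assigned, still_pending = [], []
    for s, reqs in proposals.items():
        random.shuffle(reqs)                 # symmetric tie-breaking
        for client, req in reqs:
            if load[s] < cap:
                load[s] += 1
                assigned.append((client, req, s))
            else:
                still_pending.append((client, req))
    return assigned, still_pending
```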
We study expansion and information diffusion in dynamic networks, that is, in networks in which nodes and edges are continuously created and destroyed. We consider information diffusion by flooding, the process by which, once a node is informed, it broadcasts its information to all its neighbors.

We study models in which the network is sparse, meaning that it has O(n) edges, where n is the number of nodes, and in which edges are created randomly rather than according to a carefully designed distributed algorithm. In our models, when a node is "born", it connects to d = O(1) random other nodes. An edge remains alive as long as both its endpoints do.

If no further edge creation takes place, we show that, although the network will have Ω_d(n) isolated nodes, it is possible, with large constant probability, to inform a 1 − exp(−Ω(d)) fraction of the nodes in O(log n) time. Furthermore, the graph exhibits, at any given time, a "large-set expansion" property.

We also consider models with edge regeneration, in which, if an edge (v, w) chosen by v at birth goes down because of the death of w, it is replaced by a fresh random edge (v, z). In models with edge regeneration, we prove that the network is, with high probability, a vertex expander at any given time, and flooding takes O(log n) time.

The above results hold both for a simple but artificial streaming model of node churn, in which at each time step one node is born and the oldest node dies, and for a more realistic continuous-time model in which births occur according to a Poisson process and the lifetime of each node follows an exponential distribution.

Previous work on expansion and flooding studied models in which either the vertex set is fixed and only the edges change over time, or edge generation occurs according to an algorithm. Our motivation for studying models with random edge generation is to move toward models that may eventually capture the formation of social networks or peer-to-peer networks.
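To make the streaming churn model concrete, here is a minimal simulation sketch (an illustration under stated assumptions, not the authors' code): at every step the oldest node dies, a new node is born and connects to d random alive nodes, edges lost to the death are optionally regenerated, and informed nodes flood their information to all current neighbors. Starting from an edgeless initial graph and flooding from a single node are simplifying assumptions.

```python
# Illustrative simulation of the streaming node-churn model with optional
# edge regeneration and synchronous flooding; parameter and initialization
# choices are assumptions made for the sketch.
import random
from collections import deque

def simulate(n, d, steps, regenerate=False):
    nodes = deque(range(n))              # oldest alive node sits on the left
    adj = {v: set() for v in nodes}      # initial graph is edgeless (assumption)
    next_id = n
    informed = {random.choice(list(nodes))}  # a single initially informed node

    for _ in range(steps):
        # Death of the oldest node: remove it and its incident edges.
        dead = nodes.popleft()
        dead_neighbors = adj.pop(dead)
        informed.discard(dead)
        for u in dead_neighbors:
            adj[u].discard(dead)
            if regenerate:
                # Edge regeneration: u replaces the lost edge (u, dead)
                # with a fresh random edge to a currently alive node.
                z = random.choice(list(adj.keys()))
                if z != u:
                    adj[u].add(z)
                    adj[z].add(u)

        # Birth of a new node connecting to d random alive nodes.
        v = next_id
        next_id += 1
        targets = random.sample(list(adj.keys()), min(d, len(adj)))
        adj[v] = set(targets)
        for u in targets:
            adj[u].add(v)
        nodes.append(v)

        # One synchronous flooding step: informed nodes inform all neighbors.
        informed |= {u for w in informed for u in adj[w]}

    # Fraction of currently alive nodes that are informed.
    return len(informed) / len(adj)
```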