Abstract
Election is a classical paradigm in distributed algorithms. This paper designs and analyzes a distributed algorithm that chooses a node in a graph modelling a network. When the graph is a tree, a simple algorithmic schema acts as follows: it removes leaves until the graph is reduced to a single vertex, the elected one. In Métivier et al. (2003) [7], the authors studied a randomized variant of this schema that gives every node of the tree the same probability of being elected. They conjectured that the expected election duration of this algorithm is O(ln(n)), where n denotes the size of the tree, and asked whether the same algorithm can be used to obtain a fair election in other classes of graphs. In this paper, we prove their conjecture. We then introduce a new structure called polyominoid graphs. We show how a spanning tree for these graphs can be computed locally, so that our algorithm, applied to this spanning tree, yields a uniform election algorithm on polyominoids.
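To make the leaf-removal schema concrete, the following is a minimal Python sketch, not taken from the paper: it repeatedly removes a leaf chosen at random until a single vertex remains. The adjacency-dictionary representation and the uniform choice of leaf are illustrative assumptions only; this naive rule does not by itself elect every node with the same probability, which is precisely what the randomized variant studied by Métivier et al. achieves.

import random

def elect_by_leaf_removal(adj):
    # adj: dict mapping each vertex of a tree to the set of its neighbours.
    # Repeatedly remove a leaf until one vertex remains; return that vertex.
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    while len(adj) > 1:
        leaves = [v for v, nbrs in adj.items() if len(nbrs) == 1]
        leaf = random.choice(leaves)        # naive random choice of a leaf
        (parent,) = adj.pop(leaf)           # a leaf has a unique neighbour
        adj[parent].discard(leaf)
    return next(iter(adj))                  # the elected vertex

# Example on the path 0 - 1 - 2 - 3 - 4.
tree = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(elect_by_leaf_removal(tree))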
Predicting electricity power is an important task that helps power utilities improve their systems' performance in terms of effectiveness, productivity, management and control. Several studies have addressed this task using three main classes of models: engineering, statistical and artificial intelligence. Among the experiments that used artificial intelligence models, the multilayer neural network model has proven successful on many evaluation datasets. However, the performance of this model depends mainly on the choice of activation function. Therefore, this paper presents an experimental study investigating the performance of the multilayer neural network model with respect to different activation functions and different numbers of hidden layers. The experiments cover a comparison among eleven activation functions on four benchmark electricity datasets. The activation functions under examination are sigmoid, hyperbolic tangent, SoftSign, SoftPlus, ReLU, Leaky ReLU, Gaussian, ELU, SELU, Swish and Adjust-Swish. Experimental results show that the ReLU and Leaky ReLU activation functions outperform their counterparts on all datasets.
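For reference, the sketch below writes out several of the standard activation functions compared in the study using NumPy. The constants shown (the Leaky ReLU slope, the ELU and SELU parameters) are commonly used defaults and need not match the paper's settings, and the paper-specific Adjust-Swish variant is not reproduced here.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softsign(x):
    return x / (1.0 + np.abs(x))

def softplus(x):
    return np.log1p(np.exp(x))

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):          # slope alpha is an assumed default
    return np.where(x > 0, x, alpha * x)

def gaussian(x):
    return np.exp(-x ** 2)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def selu(x, lam=1.0507, alpha=1.6733):  # commonly cited SELU constants
    return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def swish(x):
    return x * sigmoid(x)

x = np.linspace(-3.0, 3.0, 7)
print(relu(x))
print(leaky_relu(x))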