Adaptive networks consist of a collection of nodes with adaptation and learning abilities. The nodes interact with each other on a local level and diffuse information across the network to solve estimation and inference tasks in a distributed manner. In this work, we compare the mean-square performance of two main strategies for distributed estimation over networks: consensus strategies and diffusion strategies. The analysis in the paper confirms that, under constant step-sizes, diffusion strategies allow information to diffuse more thoroughly through the network, and this property has a favorable effect on the evolution of the network: diffusion networks are shown to converge faster and reach lower mean-square deviation than consensus networks, and their mean-square stability is insensitive to the choice of the combination weights. In contrast, and surprisingly, it is shown that consensus networks can become unstable even if all the individual nodes are stable and able to solve the estimation task on their own. When this occurs, cooperation over the network leads to a catastrophic failure of the estimation task. This phenomenon does not occur for diffusion networks: we show that stability of the individual nodes always ensures stability of the diffusion network, irrespective of the combination topology. Simulation results support the theoretical findings.
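A minimal numpy sketch of the two update rules compared above may help fix ideas. It implements adapt-then-combine (ATC) diffusion LMS alongside a consensus-based LMS update; the ring topology, step-size, and noise level are illustrative assumptions, not values from the paper. The structural difference is that diffusion combines the freshly adapted intermediates, whereas consensus combines the previous iterates while the gradient step still uses the node's own previous estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 10, 4                      # number of nodes, parameter dimension
w_true = rng.standard_normal(M)   # common unknown model w^o
mu = 0.01                         # constant step-size (hypothetical value)

# Hypothetical combination matrix A (left-stochastic: columns sum to one),
# built from a simple ring topology with uniform weights.
A = np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
A[0, -1] = A[-1, 0] = 1.0
A /= A.sum(axis=0)

w_diff = np.zeros((N, M))   # diffusion (ATC) estimates
w_cons = np.zeros((N, M))   # consensus estimates

for i in range(5000):
    U = rng.standard_normal((N, M))                  # regression vectors u_k(i)
    d = U @ w_true + 0.1 * rng.standard_normal(N)    # noisy measurements d_k(i)

    # ATC diffusion: every node adapts first, then combines neighbors' intermediates.
    psi = w_diff + mu * (d - np.einsum('km,km->k', U, w_diff))[:, None] * U
    w_diff = A.T @ psi

    # Consensus: the combination uses the *previous* iterates while the gradient
    # step uses the node's own previous estimate (note the asymmetry).
    grad = (d - np.einsum('km,km->k', U, w_cons))[:, None] * U
    w_cons = A.T @ w_cons + mu * grad

print("diffusion MSD :", np.mean(np.sum((w_diff - w_true) ** 2, axis=1)))
print("consensus MSD :", np.mean(np.sum((w_cons - w_true) ** 2, axis=1)))
```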
Adaptive networks rely on in-network and collaborative processing among distributed agents to deliver enhanced performance in estimation and inference tasks. Information is exchanged among the nodes, usually over noisy links. The combination weights that the nodes use to fuse information from their neighbors play a critical role in the adaptation and tracking abilities of the network. This paper first investigates the mean-square performance of general adaptive diffusion algorithms in the presence of various sources of imperfect information exchange, quantization errors, and model nonstationarities. Among other results, the analysis reveals that link noise over the regression data modifies the dynamics of the network evolution in a distinct way and leads to biased estimates in steady-state. The analysis also reveals how the network mean-square performance depends on the combination weights. We use these observations to show how the combination weights can be optimized and adapted. Simulation results illustrate the theoretical findings.
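As an illustration of adapting the combination weights online, the sketch below uses one well-known rule of this kind, the relative-variance rule, which weights each neighbor inversely to a running estimate of the mean-square difference between that neighbor's intermediate estimate and the node's own previous estimate. The specific rule, the forgetting factor, and the heterogeneous noise profile are assumptions for illustration and are not necessarily the optimized weights derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

N, M = 10, 4
w_true = rng.standard_normal(M)
mu, nu = 0.01, 0.1                       # step-size and forgetting factor (hypothetical)
sigma_v = 0.05 + 0.3 * rng.random(N)     # noise level varies across nodes (assumed)

# Fixed ring neighborhoods; gamma2[l, k] tracks E||psi_l - w_k||^2 online.
neighbors = [sorted({(k - 1) % N, k, (k + 1) % N}) for k in range(N)]
gamma2 = np.ones((N, N))
w = np.zeros((N, M))

for i in range(5000):
    U = rng.standard_normal((N, M))
    d = U @ w_true + sigma_v * rng.standard_normal(N)

    # Adapt step (standard ATC diffusion LMS).
    psi = w + mu * (d - np.einsum('km,km->k', U, w))[:, None] * U

    # Combine step with adaptive weights: neighbors whose intermediates are
    # less noisy (smaller gamma2) receive larger combination weights.
    w_new = np.empty_like(w)
    for k in range(N):
        nbrs = neighbors[k]
        for l in nbrs:
            gamma2[l, k] = (1 - nu) * gamma2[l, k] + nu * np.sum((psi[l] - w[k]) ** 2)
        a = 1.0 / gamma2[nbrs, k]
        a /= a.sum()
        w_new[k] = a @ psi[nbrs]
    w = w_new

print("steady-state MSD:", np.mean(np.sum((w - w_true) ** 2, axis=1)))
```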
In distributed processing, agents generally collect data generated by the same underlying unknown model (represented by a vector of parameters) and then solve an estimation or inference task cooperatively. In this paper, we consider the situation in which the data observed by the agents may have arisen from two different models. Agents do not know beforehand which model accounts for their own data or for the data of their neighbors. The objective for the network is for all agents to reach agreement on which model to track and to estimate this model cooperatively. In these situations, where agents are subject to data from different, unknown sources, conventional distributed estimation strategies would lead to biased estimates relative to any of the underlying models. We first show how to modify existing strategies to guarantee unbiasedness. We then develop a classification scheme for the agents to identify the models that generated the data, and propose a procedure by which the entire network can be made to converge towards the same model through a collaborative decision-making process. The resulting algorithm is applied to model fish foraging behavior in the presence of two food sources.
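The bias phenomenon mentioned above can be reproduced with a short experiment. The sketch below (not the paper's modified algorithm, just conventional ATC diffusion LMS with an assumed ring topology) lets half of the agents observe data from one model and half from another; every node then settles on an estimate that is biased away from both underlying models, which is why the strategies must be modified before the agents can agree on and track a single model.

```python
import numpy as np

rng = np.random.default_rng(2)

N, M = 10, 4
w0 = rng.standard_normal(M)          # model generating data at half of the agents
w1 = rng.standard_normal(M)          # model generating data at the other half
models = np.array([w0] * (N // 2) + [w1] * (N - N // 2))
mu = 0.01

# Uniform ring combination matrix (hypothetical topology).
A = np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
A[0, -1] = A[-1, 0] = 1.0
A /= A.sum(axis=0)

w = np.zeros((N, M))
for i in range(20000):
    U = rng.standard_normal((N, M))
    d = np.einsum('km,km->k', U, models) + 0.05 * rng.standard_normal(N)
    psi = w + mu * (d - np.einsum('km,km->k', U, w))[:, None] * U
    w = A.T @ psi

# Every node ends up away from both w0 and w1: conventional diffusion mixes
# the two data sources and produces estimates biased relative to either model.
print("distance to w0:", np.mean(np.sum((w - w0) ** 2, axis=1)))
print("distance to w1:", np.mean(np.sum((w - w1) ** 2, axis=1)))
```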