This paper derives a distributed Kalman filter to estimate a sparsely connected, large-scale, n-dimensional dynamical system monitored by a network of N sensors. Local Kalman filters are implemented on the n_l-dimensional (where n_l ≪ n) sub-systems obtained after spatially decomposing the large-scale system. The resulting sub-systems overlap, which, along with an assimilation procedure on the local Kalman filters, preserves an Lth-order Gauss-Markovian structure of the centralized error processes. The information loss due to the Lth-order Gauss-Markovian approximation is controllable, as it can be characterized by a divergence that decreases as L increases. The order of the approximation, L, leads to a lower bound on the dimension of the sub-systems, hence providing a criterion for sub-system selection. The assimilation procedure is carried out on the local error covariances with a distributed iterate collapse inversion (DICI) algorithm that we introduce. The DICI algorithm computes the (approximated) centralized Riccati and Lyapunov equations iteratively with only local communication and low-order computation. We fuse the observations that are common among the local Kalman filters using bipartite fusion graphs and consensus averaging algorithms. The proposed algorithm achieves full distribution of the Kalman filter and is coherent with the centralized Kalman filter under an Lth-order Gauss-Markovian structure on the centralized error processes. No storage, communication, or computation of n-dimensional vectors and matrices is required anywhere; only n_l ≪ n dimensional vectors and matrices are communicated or used in the computations at the sensors.
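To make the consensus-averaging fusion step concrete, the following minimal Python sketch averages noisy copies of a shared observation over a ring of four sensors using only neighbor-to-neighbor exchanges. The network, the doubly stochastic weight matrix W, and the observation values are hypothetical, chosen only for illustration; they are not taken from the paper.

```python
import numpy as np

# Hypothetical ring of 4 sensors with symmetric, doubly stochastic weights.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

# Noisy local copies of an observation shared by all four sensors (illustrative).
y_local = np.array([1.9, 2.1, 2.0, 2.2])

x = y_local.copy()
for _ in range(50):
    x = W @ x   # each sensor mixes only with its ring neighbors
print(x)        # every entry converges to mean(y_local) = 2.05
```

Because W is doubly stochastic and the ring is connected, the iterates converge geometrically to the network-wide average, which is exactly the quantity the fusion step needs without any sensor ever handling a global (n-dimensional) object.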
In this letter, we study distributed optimization, where a network of agents, abstracted as a directed graph, collaborates to minimize the average of locally known convex functions. Most existing approaches over directed graphs are based on push-sum-type techniques, which use an independent algorithm to asymptotically learn either the left or the right eigenvector of the underlying weight matrices. This strategy introduces additional computation, communication, and nonlinearity into the algorithm. In contrast, we propose a linear algorithm based on an inexact gradient method and a gradient estimation technique. Under the assumptions that each local function is strongly convex with Lipschitz-continuous gradients, we show that the proposed algorithm converges geometrically to the global minimizer for a sufficiently small step-size. We present simulations to illustrate the theoretical findings.
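To illustrate the gradient-estimation idea, here is a minimal Python sketch of a gradient-tracking update over a small directed cycle, mixing the estimates with a row-stochastic matrix A and the gradient tracker with a column-stochastic matrix B. The weights, the scalar quadratic objectives, the step-size, and the iteration count are illustrative assumptions, not the letter's exact algorithm or parameters.

```python
import numpy as np

# Hypothetical 3-agent directed cycle (0 -> 1 -> 2 -> 0) with self-loops.
A = np.array([[0.7, 0.0, 0.3],
              [0.4, 0.6, 0.0],
              [0.0, 0.2, 0.8]])   # row-stochastic: mixes the estimates x
B = np.array([[0.5, 0.0, 0.6],
              [0.5, 0.3, 0.0],
              [0.0, 0.7, 0.4]])   # column-stochastic: mixes the tracker y

b = np.array([1.0, 2.0, 6.0])    # local objectives f_i(x) = 0.5 * (x - b_i)**2
grad = lambda x: x - b           # stacked local gradients; minimizer of the sum: mean(b)

alpha = 0.05                     # sufficiently small step-size
x = np.zeros(3)
y = grad(x)                      # tracker initialized at the local gradients
for _ in range(1000):
    x_next = A @ x - alpha * y             # inexact gradient step with neighbor mixing
    y = B @ y + grad(x_next) - grad(x)     # tracks the network-average gradient
    x = x_next
print(x)                         # every entry approaches mean(b) = 3.0
```

Note that the update is linear in (x, y) and requires no eigenvector estimation: because B is column-stochastic and y is initialized at the local gradients, the sum of the tracker entries always equals the sum of the current local gradients.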
In this paper, we consider distributed optimization problems where the goal is to minimize a sum of objective functions over a multi-agent network. We focus on the case when the inter-agent communication is described by a strongly connected, directed graph. The proposed algorithm, ADD-OPT (Accelerated Distributed Directed Optimization), achieves the best known convergence rate for this class of problems, O(μ^k), 0 < μ < 1, given strongly convex objective functions with globally Lipschitz-continuous gradients, where k is the number of iterations. Moreover, ADD-OPT supports a wider and more realistic range of step-sizes in contrast to existing work. In particular, we show that ADD-OPT converges for arbitrarily small (positive) step-sizes. Simulations further illustrate our results.
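The sketch below shows an ADD-OPT-style update in Python over a small directed network with a column-stochastic weight matrix A: x descends along a tracked gradient z, an auxiliary sequence y learns the Perron eigenvector of A, and the de-biased ratio w = x / y is where the local gradients are evaluated. The network, weights, objectives, step-size, and iteration count are hypothetical, chosen only to make the structure of the iteration visible.

```python
import numpy as np

# Hypothetical 3-agent directed network; A is column-stochastic (columns sum to 1).
A = np.array([[0.75, 0.50, 0.00],
              [0.00, 0.25, 0.50],
              [0.25, 0.25, 0.50]])

b = np.array([1.0, 2.0, 6.0])   # local objectives f_i(x) = 0.5 * (x - b_i)**2
grad = lambda w: w - b          # stacked local gradients; global minimizer = mean(b)

alpha = 0.05                    # small positive step-size (illustrative)
x = np.zeros(3)
y = np.ones(3)                  # de-biasing sequence, y_0 = 1
w = x / y                       # de-biased estimates
z = grad(w)                     # gradient tracker, z_0 = local gradients
for _ in range(1000):
    x = A @ x - alpha * z
    y = A @ y                   # asymptotically aligns with the Perron eigenvector of A
    w_next = x / y
    z = A @ z + grad(w_next) - grad(w)   # tracks the average gradient at the ratios
    w = w_next
print(w)                        # every entry approaches mean(b) = 3.0
```

The elementwise division x / y compensates for the imbalance a column-stochastic (non-doubly-stochastic) matrix introduces, which is what allows the method to run on a directed graph.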