In this work we address the problem of distributed optimization of the sum of convex cost functions in the context of multi-agent systems over lossy communication networks. Building upon operator theory, we first derive an ADMM-like algorithm, which we refer to as relaxed ADMM (R-ADMM), via a generalized Peaceman-Rachford splitting operator applied to the Lagrange dual formulation of the original optimization problem. This algorithm depends on two parameters, namely the averaging coefficient α and the augmented Lagrangian coefficient ρ, and we show that setting α = 1/2 recovers the standard ADMM algorithm as a special case. Moreover, by suitably manipulating the proposed R-ADMM, we derive two alternative ADMM-like algorithms that are easier to implement and have reduced memory, communication, and computational requirements. Most importantly, the latter of these two algorithms is the first ADMM-like algorithm with guaranteed convergence in the presence of lossy communication under the same assumptions as standard ADMM with lossless communication. Finally, this work is complemented with a set of compelling numerical simulations of the proposed algorithms over cycle graphs and random geometric graphs subject to i.i.d. random packet losses.

Index Terms-distributed optimization, ADMM, operator theory, splitting methods, Peaceman-Rachford operator
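The abstract does not spell out the update equations, but the relaxed Peaceman-Rachford iteration it refers to is easy to sketch. The following Python snippet is a minimal illustration on a toy lasso-style problem; the objective, the prox helpers, and all function names are our assumptions, not the paper's construction. With the averaging coefficient α = 1/2 the update reduces to the Douglas-Rachford step underlying standard ADMM, while α = 1 gives the full Peaceman-Rachford iteration:

```python
import numpy as np

def prox_quad(v, rho, a):
    # prox of f(x) = 0.5*||x - a||^2 with step rho:
    # argmin_x 0.5*||x - a||^2 + (1/(2*rho))*||x - v||^2
    return (rho * a + v) / (rho + 1.0)

def prox_l1(v, rho, lam):
    # prox of g(x) = lam*||x||_1 with step rho (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - rho * lam, 0.0)

def relaxed_prs(a, lam, rho=1.0, alpha=0.5, iters=200):
    """Generalized Peaceman-Rachford splitting on min_x f(x) + g(x).

    alpha = 0.5 recovers the Douglas-Rachford step (i.e., classical ADMM
    on the dual); alpha = 1.0 is the full Peaceman-Rachford iteration.
    """
    z = np.zeros_like(a)
    for _ in range(iters):
        x = prox_l1(z, rho, lam)            # prox step on g
        y = prox_quad(2.0 * x - z, rho, a)  # prox step on f at the reflected point
        z = z + 2.0 * alpha * (y - x)       # relaxed update of the splitting variable
    return x

a = np.array([3.0, -0.2, 1.5])
print(relaxed_prs(a, lam=1.0))  # soft-thresholded solution, approx [2.0, 0.0, 0.5]
```

The distributed and packet-loss-robust variants studied in the paper run this kind of update per agent with exchanged splitting variables, but the scalar toy above already exhibits the role of α and ρ.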
This paper provides a unified stochastic operator framework to analyze the convergence of iterative optimization algorithms for both static problems and online optimization and learning. In particular, the framework is well suited for algorithms that are implemented in an inexact or stochastic fashion because (i) stochastic errors emerge in the algorithmic steps, and (ii) the algorithm may feature random coordinate updates. To this end, the paper focuses on separable operators of the form T x = (T_1 x, ..., T_n x), defined over the direct sum of possibly infinite-dimensional Hilbert spaces, and investigates the convergence of the associated stochastic Banach-Picard iteration. Results on convergence in mean and in high probability are presented when the errors affecting the operator follow a sub-Weibull distribution and when the updates T_i x are performed based on a Bernoulli random variable. In particular, the results are derived for the cases where T is contractive and averaged, in terms of convergence to the unique fixed point and of the cumulative fixed-point residual, respectively. The results do not assume vanishing errors or vanishing parameters of the operator, as is typical in the literature (that case is subsumed by the proposed framework), and links with existing results on almost sure convergence are provided. In the online optimization context, the operator changes at each iteration to reflect changes in the underlying optimization problem. This leads to an online Banach-Picard iteration, and similar results are derived, where the bounds for convergence in mean and in high probability further depend on the evolution of the fixed points (i.e., the optimal solutions of the time-varying optimization problem).
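As a rough illustration of the stochastic Banach-Picard iteration described above, the Python sketch below runs an inexact, randomly updated iteration on a toy linear contraction. The operator, the update probability, and the Gaussian noise (standing in for the sub-Weibull errors of the analysis) are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def T(x, A, b):
    # A contractive affine operator T x = A x + b (spectral norm of A < 1),
    # standing in for the separable operator T x = (T_1 x, ..., T_n x)
    return A @ x + b

def stochastic_banach_picard(A, b, iters=500, p=0.7, noise_scale=1e-2):
    """Inexact Banach-Picard iteration with random coordinate updates.

    Each coordinate i is updated with probability p (Bernoulli updates),
    and the evaluated operator is perturbed by additive noise (Gaussian
    here for simplicity; the paper's analysis allows sub-Weibull errors).
    """
    x = np.zeros(b.size)
    for _ in range(iters):
        e = noise_scale * rng.standard_normal(b.size)  # stochastic error on the operator
        mask = rng.random(b.size) < p                  # Bernoulli coordinate selection
        Tx = T(x, A, b) + e
        x = np.where(mask, Tx, x)                      # update only the selected coordinates
    return x

# Toy contraction: the fixed point solves x = A x + b
A = np.array([[0.5, 0.1], [0.0, 0.4]])
b = np.array([1.0, 2.0])
x_hat = stochastic_banach_picard(A, b)
x_star = np.linalg.solve(np.eye(2) - A, b)  # exact fixed point
print(x_hat, x_star)
```

Because the noise does not vanish, the iterate hovers in a neighborhood of the fixed point rather than converging exactly, which is precisely the non-vanishing-error regime the paper's mean and high-probability bounds characterize.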