We consider distributed convex-concave saddle-point problems over arbitrary connected undirected networks and propose a decentralized distributed algorithm for their solution. The local functions distributed across the nodes are assumed to have global and local groups of variables. For the proposed algorithm we prove non-asymptotic convergence rate estimates with explicit dependence on the network characteristics. To supplement the convergence rate analysis, we establish lower bounds for strongly-convex-strongly-concave and convex-concave saddle-point problems over arbitrary connected undirected networks. We illustrate the considered problem setting by a particular application to the distributed computation of non-regularized Wasserstein barycenters.
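A schematic form of this problem setting, under our reading of the abstract, is sketched below; the symbols $m$, $f_i$, and the particular split into global variables $(x, y)$ and node-local variables $(u_i, v_i)$ are illustrative and not the paper's notation.

```latex
\min_{x,\,u_1,\dots,u_m}\ \max_{y,\,v_1,\dots,v_m}\ \sum_{i=1}^{m} f_i(x, u_i;\, y, v_i)
% x, y     -- global (shared) minimization / maximization variable groups
% u_i, v_i -- local variable groups stored only at node i
% f_i is known only to node i; communication is restricted to the edges of a
% connected undirected network, so local copies of x and y must reach consensus.
```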
We study the problem of decentralized optimization over time-varying networks with strongly convex smooth cost functions. In our approach, nodes run a multi-step gossip procedure after each gradient update, thus ensuring approximate consensus at every iteration, while the outer loop follows an accelerated Nesterov scheme. The algorithm achieves precision $\varepsilon > 0$ in $O(\sqrt{\kappa_g}\,\chi \log^2(1/\varepsilon))$ communication steps and $O(\sqrt{\kappa_g}\log(1/\varepsilon))$ gradient computations at each node, where $\kappa_g$ is the condition number of the global objective and $\chi$ characterizes the connectivity of the communication network. In the case of a static network, $\chi = 1/\gamma$, where $\gamma$ denotes the normalized spectral gap of the communication matrix $W$. The complexity bound depends on $\kappa_g$, which can be significantly better than the worst-case condition number among the nodes.
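A minimal sketch of the structure described above, assuming each node stores a row of the iterate matrix and that `grads[i]` and `mixing_at(t)` (the local gradient oracles and the doubly stochastic gossip matrices) are provided by the user; step sizes follow the standard Nesterov scheme for strongly convex objectives rather than the paper's exact parameter choices.

```python
import numpy as np

def multi_step_gossip(X, mixing_at, T):
    """Run T gossip rounds on the rows of X (one row per node).

    mixing_at(t) returns a doubly stochastic matrix compatible with the
    communication graph at round t; enough rounds drive the rows of X
    towards their average (approximate consensus).
    """
    for t in range(T):
        X = mixing_at(t) @ X
    return X

def decentralized_accelerated(grads, X0, L, mu, mixing_at, T_gossip, n_iter):
    """Sketch: accelerated Nesterov outer loop with a gossip inner loop after every gradient step."""
    n, _ = X0.shape
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)       # momentum for the strongly convex case
    X, Y = X0.copy(), X0.copy()
    for _ in range(n_iter):
        G = np.stack([grads[i](Y[i]) for i in range(n)])          # local gradient evaluations
        X_new = Y - G / L                                         # local gradient steps
        X_new = multi_step_gossip(X_new, mixing_at, T_gossip)     # approximate consensus
        Y = X_new + beta * (X_new - X)                            # Nesterov extrapolation
        X = X_new
    return X.mean(axis=0)                                         # averaged solution estimate
```

Here `T_gossip` plays the role of the inner communication rounds whose count scales with $\chi$, which is how the $\chi$ factor enters the communication complexity while the gradient complexity keeps only the $\sqrt{\kappa_g}$ factor.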
We study the convergence rate of first-order optimization algorithms when the objective function can change from one iteration to another, but its minimizer and optimal value remain the same. This problem is motivated by recent developments in optimal distributed optimization algorithms over networks, where computational nodes or agents can experience network malfunctions such as a loss of connection between two nodes. We show explicit and non-asymptotic linear convergence of the distributed versions of gradient descent and Nesterov's fast gradient method on strongly convex and smooth objective functions when the network of nodes undergoes a finite number of changes (we call such a network slowly time-varying). Moreover, we show that Nesterov's method reaches the optimal iteration complexity of $\Omega\big(\sqrt{\kappa \cdot \chi(W)}\,\log\frac{1}{\varepsilon}\big)$ for decentralized algorithms, where $\kappa$ and $\chi(W)$ are the condition numbers of the objective function and the communication graph, respectively, and $\varepsilon$ is the target accuracy. Index Terms: distributed optimization, time-varying graph, accelerated method.
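A minimal sketch of the slowly time-varying setting, assuming user-supplied local gradient oracles `grads[i]` and a mixing matrix schedule `mixing_at(k)` that changes only a finite number of times; the algorithm shown is plain decentralized gradient descent (each node mixes with its current neighbors, then takes a local gradient step), not the paper's accelerated variant.

```python
import numpy as np

def decentralized_gd_time_varying(grads, X0, step, mixing_at, n_iter):
    """Sketch: decentralized gradient descent over a slowly time-varying network.

    mixing_at(k) returns the doubly stochastic mixing matrix W_k used at
    iteration k; in the slowly time-varying setting it changes only a finite
    number of times.
    """
    n, _ = X0.shape
    X = X0.copy()
    for k in range(n_iter):
        W = mixing_at(k)                                      # current communication graph
        G = np.stack([grads[i](X[i]) for i in range(n)])      # local gradients
        X = W @ X - step * G                                  # mix, then gradient step
    return X

# Illustrative usage: three nodes with quadratic losses 0.5*||x - c_i||^2,
# and a mixing matrix that switches once (a single network change).
W_a = np.array([[0.50, 0.50, 0.00], [0.50, 0.25, 0.25], [0.00, 0.25, 0.75]])
W_b = np.array([[0.75, 0.00, 0.25], [0.00, 0.50, 0.50], [0.25, 0.50, 0.25]])
grads = [lambda x, c=c: x - c for c in (np.array([0.0]), np.array([1.0]), np.array([2.0]))]
X = decentralized_gd_time_varying(grads, np.zeros((3, 1)), step=0.3,
                                  mixing_at=lambda k: W_a if k < 50 else W_b, n_iter=200)
```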