In this paper we study convergence rates for constrained stochastic strongly convex optimization in a non-Euclidean setting over a time-varying multi-agent network. We propose two efficient non-Euclidean stochastic subgradient descent algorithms that use a Bregman divergence as the distance-measuring function, in place of the Euclidean distance employed by standard distributed stochastic projected subgradient algorithms. For distributed optimization of nonsmooth, strongly convex functions for which only stochastic subgradients are available, the first algorithm recovers the best previously known rate of O(ln(T)/T), where T is the total number of iterations. The second algorithm is an epoch variant of the first and attains the optimal convergence rate of O(1/T), matching that of the best known centralized stochastic subgradient algorithm. Finally, we report simulation results to illustrate the proposed algorithms.
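The non-Euclidean ingredient here is the mirror-descent update, in which a Bregman divergence replaces the squared Euclidean distance in the proximal step. As a hedged illustration only (a single-agent sketch, not the paper's distributed algorithms), the snippet below uses the negative-entropy Bregman divergence on the probability simplex, which turns the update into the exponentiated-gradient rule; the cost vector `a`, the noise level, and the step sizes are hypothetical.

```python
import numpy as np

def entropic_mirror_descent_step(x, g, step):
    """One mirror-descent step on the probability simplex.

    With the negative-entropy Bregman divergence, the update reduces to the
    exponentiated-gradient rule x_{t+1} proportional to x_t * exp(-step * g).
    """
    y = x * np.exp(-step * g)
    return y / y.sum()

# Toy usage: minimize E[<a, x>] over the simplex using noisy subgradients.
rng = np.random.default_rng(0)
a = np.array([0.9, 0.1, 0.5])              # illustrative linear cost
x = np.ones(3) / 3                          # start at the uniform distribution
for t in range(1, 501):
    g = a + 0.1 * rng.standard_normal(3)    # stochastic subgradient of <a, x>
    x = entropic_mirror_descent_step(x, g, step=1.0 / np.sqrt(t))
print(x)  # mass concentrates on the cheapest coordinate (index 1)
```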
In this paper, we study the distributed constrained optimization problem in which the objective function is the sum of the local convex cost functions of distributed nodes in a network, subject to a global inequality constraint. To solve this problem, we propose a consensus-based distributed regularized primal-dual subgradient method. In contrast to existing methods, most of which require projecting the estimates onto the constraint set at every iteration, our method needs only one projection, performed at the last iteration. We establish convergence by showing that the method achieves an O(K^{-1/4}) convergence rate for general distributed constrained optimization, where K is the iteration counter. Finally, a numerical example is provided to validate the convergence of the proposed method.
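As a rough illustration of the single-projection idea, the sketch below runs a generic regularized primal-dual subgradient iteration on a toy single-agent problem and projects only the final averaged iterate onto the feasible box; the problem data, the regularization parameter `eps`, and the step sizes are illustrative and not taken from the paper, which treats the consensus-based multi-agent setting.

```python
import numpy as np

def primal_dual_regularized(f_grad, c, c_grad, project, x0, steps, eps=0.01):
    """Single-agent sketch of a regularized primal-dual subgradient scheme.

    Primal and dual iterates follow subgradients of the (dual-regularized)
    Lagrangian; the projection onto the feasible set is applied only once,
    to the averaged iterate, after the last step.
    """
    x, lam = x0.copy(), 0.0
    x_avg = np.zeros_like(x0)
    for k, step in enumerate(steps, start=1):
        gx = f_grad(x) + lam * c_grad(x)      # primal subgradient of the Lagrangian
        glam = c(x) - eps * lam               # dual subgradient with regularization
        x = x - step * gx
        lam = max(0.0, lam + step * glam)
        x_avg += (x - x_avg) / k              # running average of the primal iterates
    return project(x_avg)                     # single projection, at the end

# Toy usage: minimize ||x - target||^2 s.t. x1 + x2 <= 1, over the box [0, 2]^2.
target = np.array([1.5, 1.5])
sol = primal_dual_regularized(
    f_grad=lambda x: 2.0 * (x - target),
    c=lambda x: x.sum() - 1.0,
    c_grad=lambda x: np.ones_like(x),
    project=lambda x: np.clip(x, 0.0, 2.0),
    x0=np.zeros(2),
    steps=[0.5 / np.sqrt(k) for k in range(1, 2001)],
)
print(sol)  # roughly [0.5, 0.5] (slight bias from the regularization eps)
```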
SUMMARY: This paper studies the problem of minimizing a sum of convex functions that share a common global variable, where each function is known only by one specific agent in the network. The underlying network topology is modeled as a time-varying sequence of directed graphs, each of which is endowed with a non-doubly stochastic matrix. We present a distributed method that employs gradient-free oracles and push-sum algorithms to solve this optimization problem. We establish convergence by showing that the method converges to an approximate solution at the expected rate of O(ln T/√T), where T is the iteration counter. A numerical example is also given to illustrate the proposed method.
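The push-sum (ratio-consensus) mechanism referenced above can be sketched in a few lines: each node propagates a numerator and a weight through a column-stochastic (but not doubly stochastic) matrix, and the ratio of the two recovers the network-wide average. The 4-node mixing matrix below is purely illustrative, and the sketch shows only the consensus component, not the paper's gradient-free optimization method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Column-stochastic (not doubly stochastic) mixing matrix for 4 nodes:
# column j holds the weights node j uses to split its mass among out-neighbours.
A = np.array([
    [0.5, 0.0, 0.3, 0.0],
    [0.5, 0.6, 0.0, 0.0],
    [0.0, 0.4, 0.3, 0.5],
    [0.0, 0.0, 0.4, 0.5],
])
assert np.allclose(A.sum(axis=0), 1.0)   # columns sum to one

values = rng.uniform(0.0, 10.0, size=4)  # each node's private value
x = values.copy()                        # push-sum numerator
y = np.ones(4)                           # push-sum weights (denominator)

for _ in range(100):
    x = A @ x
    y = A @ y

z = x / y                                # the ratio recovers the average
print(z, values.mean())                  # every entry of z is close to the mean
```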