Summary
A distributed multi-agent convex optimization problem over a network with time-varying connectivity is considered, in which the agents collectively minimize a sum of nonsmooth but Lipschitz continuous functions subject to a convex constraint set. Based on Gaussian smoothing of the objective functions, a distributed projected random gradient-free algorithm is studied for the constrained optimization problem: at each iteration, every agent performs a local averaging operation, replaces the subgradient of its own objective function with a one-sided or two-sided randomized gradient approximation, and projects the result onto the constraint set. Bounds on the limiting performance of the algorithm in mean are obtained, and it is proven that the agents reach consensus in mean for diminishing step sizes. It is shown that, with appropriately selected step sizes, the agent estimates generated by the algorithm converge to the same stationary point of the smoothed version of the problem with probability 1. Consequently, for sufficiently small smoothing parameters, the agent estimates reach consensus and converge to an optimal solution with probability 1. A numerical example is provided to illustrate the theoretical analysis. Copyright © 2016 John Wiley & Sons, Ltd.
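To make the algorithm description above concrete, a minimal sketch of one iteration is given below. The notation is assumed for illustration and is not taken verbatim from the paper: $x_i(k)$ denotes agent $i$'s estimate at iteration $k$, $a_{ij}(k)$ the time-varying averaging weights, $f_i$ the local objective of agent $i$, $u_i(k)$ a standard Gaussian random vector, $\mu > 0$ the smoothing parameter, $\alpha(k)$ the step size, and $P_X[\cdot]$ the Euclidean projection onto the constraint set $X$:

$$
v_i(k) = \sum_{j} a_{ij}(k)\, x_j(k), \qquad
g_i(k) = \frac{f_i\big(v_i(k) + \mu\, u_i(k)\big) - f_i\big(v_i(k)\big)}{\mu}\, u_i(k), \qquad
x_i(k+1) = P_X\big[\, v_i(k) - \alpha(k)\, g_i(k) \,\big],
$$

where the two-sided variant would instead use $g_i(k) = \frac{f_i(v_i(k) + \mu u_i(k)) - f_i(v_i(k) - \mu u_i(k))}{2\mu}\, u_i(k)$ as the randomized gradient approximation.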