“…For another example, in simulation-based optimization, the gradient estimate is often corrupted by noise from various sources, such as modeling and discretization errors, incomplete convergence, and the finite sample size of Monte Carlo methods [22]. Distributed algorithms for problem (1) have been studied extensively in the literature [56,36,37,28,19,20,52,13,46,34,45]. Recently, there has been considerable interest in distributed implementations of stochastic gradient algorithms [48,54,14,3,5,55,6,9,10,7,32,24,26,40,51,41,18].…”
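The finite-sample noise mentioned above can be illustrated with a minimal sketch (not from the cited works): a Monte Carlo estimator of the gradient of E[f(x, ξ)] for the hypothetical objective f(x, ξ) = ½(x − ξ)², ξ ~ N(0, 1), whose true gradient is x. The per-sample gradient is x − ξ, so the estimator's standard deviation shrinks as 1/√N with the sample size N.

```python
import random
import statistics

def mc_gradient(x, n_samples, rng):
    # Monte Carlo gradient estimate for f(x, xi) = 0.5*(x - xi)^2, xi ~ N(0, 1).
    # Per-sample gradient is (x - xi); the true gradient of E[f] is x itself,
    # so the estimator is unbiased with variance 1/n_samples.
    return sum(x - rng.gauss(0.0, 1.0) for _ in range(n_samples)) / n_samples

rng = random.Random(0)
x = 1.0

# Replicate each estimator many times to observe its spread.
small = [mc_gradient(x, 10, rng) for _ in range(2000)]
large = [mc_gradient(x, 1000, rng) for _ in range(2000)]

# Std dev with N=10 is about 1/sqrt(10) ~ 0.32; with N=1000 about 0.032,
# i.e. roughly 10x smaller, matching the 1/sqrt(N) scaling.
print(statistics.pstdev(small))
print(statistics.pstdev(large))
```

This is only meant to show why a larger Monte Carlo sample reduces, but never eliminates, gradient noise; the distributed algorithms surveyed in the passage are designed to make progress despite such noise.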