“…where the stochastic component F(x; ξ), indexed by a random variable ξ, is possibly nonconvex and nonsmooth. We focus on tackling the problem with a Lipschitz continuous objective, which arises in many popular applications, including simulation optimization [17,34], deep neural networks [4,15,33,48], statistical learning [11,31,49,50,52], reinforcement learning [5,21,30,41], financial risk minimization [40], and supply chain management [10]. The Clarke subdifferential [6] for Lipschitz continuous functions is a natural extension of the gradient for smooth functions and of the subdifferential for convex functions.…”
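For reference, the Clarke subdifferential mentioned above has the following standard definition (notation here is generic and not taken from the excerpt): for a locally Lipschitz f: ℝⁿ → ℝ, first define the Clarke generalized directional derivative

```latex
f^{\circ}(x; v) \;=\; \limsup_{\substack{y \to x \\ t \downarrow 0}} \frac{f(y + t v) - f(y)}{t},
```

and then the Clarke subdifferential as the set of supporting vectors

```latex
\partial f(x) \;=\; \bigl\{\, g \in \mathbb{R}^n \;:\; \langle g, v \rangle \le f^{\circ}(x; v) \ \ \forall v \in \mathbb{R}^n \,\bigr\}.
```

When f is continuously differentiable at x this set reduces to {∇f(x)}, and when f is convex it coincides with the convex-analysis subdifferential, which is the sense in which it extends both notions.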