In machine learning, stochastic gradient descent (SGD) is widely deployed to train models with highly nonconvex objectives and equally complex noise models. Unfortunately, SGD theory often makes restrictive assumptions that fail to capture the nonconvexity of real problems, and it almost entirely ignores the complex noise models that arise in practice. In this work, we make substantial progress on both shortcomings. First, we establish that SGD's iterates will either globally converge to a stationary point or diverge under nearly arbitrary nonconvexity and noise models. Under a slightly more restrictive assumption on the joint behavior of the nonconvexity and the noise model, one that generalizes current assumptions in the literature, we show that the objective function cannot diverge even if the iterates do. As a consequence of these results, SGD can be applied to a broader range of stochastic optimization problems with confidence about its global convergence behavior and stability.
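To fix ideas, the iteration under study is the plain SGD update x_{k+1} = x_k - alpha_k * g_k, where g_k is a stochastic estimate of the gradient at x_k. The sketch below is illustrative only, assuming a generic unbiased gradient oracle and a standard diminishing step size; the function names, step-size schedule, and test objective are ours, not the paper's.

```python
import numpy as np

def sgd(grad_oracle, x0, steps=10_000, alpha0=0.1):
    """Plain SGD: x_{k+1} = x_k - alpha_k * g_k.

    grad_oracle(x, rng) should return a stochastic (unbiased) estimate of
    the gradient at x; nothing else about the objective or the noise is
    assumed by this sketch.
    """
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        # Diminishing steps: sum alpha_k = inf, sum alpha_k^2 < inf.
        alpha_k = alpha0 / (k + 1)
        g_k = grad_oracle(x, rng)
        x = x - alpha_k * g_k
    return x

# Illustrative nonconvex objective f(x) = x^4/4 - x^2/2, whose gradient is
# x^3 - x, corrupted with additive Gaussian noise.
noisy_grad = lambda x, rng: (x**3 - x) + rng.normal(scale=0.5, size=x.shape)
x_final = sgd(noisy_grad, x0=np.array([2.0]))
print(x_final)  # expected to land near a stationary point (x = -1, 0, or 1)
```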
SGD is a widely deployed optimization procedure throughout data-driven and simulation-driven disciplines, which has drawn substantial interest in understanding its global behavior across a broad class of nonconvex problems and noise models. Recent analyses of SGD have made noteworthy progress in this direction and have introduced important, insightful strategies for understanding SGD. However, these analyses often impose restrictions (e.g., convexity, global Lipschitz continuity, uniform Hölder continuity, expected smoothness) that exclude many problems of practical interest. In this work, we address this gap by proving that, for a rather general class of nonconvex functions and noise models, SGD's iterates either diverge to infinity or converge to a stationary point with probability one. By further restricting to globally Hölder continuous functions and the expected smoothness noise model, we prove that, regardless of whether the iterates diverge or remain finite, the norm of the gradient evaluated at SGD's iterates converges to zero with probability one and in expectation. As a result of our work, we broaden the scope of nonconvex problems and noise models to which SGD can be applied with rigorous guarantees on its global behavior.
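Stated symbolically, the two results read roughly as follows. This is a hedged paraphrase in generic notation (x_k for SGD's k-th iterate, f for the objective); the precise hypotheses, step-size conditions, and mode of "convergence to a stationary point" are those of the paper and are not reproduced here.

```latex
% Requires amsmath and amssymb. Generic notation; exact hypotheses omitted.

% Result 1 (nearly arbitrary nonconvexity and noise): a dichotomy holds
% almost surely -- the iterates diverge or approach stationarity.
\[
  \mathbb{P}\!\left[\, \|x_k\| \to \infty
    \ \text{ or } \ \lim_{k \to \infty} \|\nabla f(x_k)\| = 0 \,\right] = 1 .
\]

% Result 2 (globally Holder continuous f, expected smoothness noise model):
% the gradient vanishes along the iterates, whether or not they diverge.
\[
  \|\nabla f(x_k)\| \xrightarrow{\ \mathrm{a.s.}\ } 0
  \qquad \text{and} \qquad
  \lim_{k \to \infty} \mathbb{E}\bigl[\, \|\nabla f(x_k)\| \,\bigr] = 0 .
\]
```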