2022
DOI: 10.48550/arxiv.2203.10973
Preprint

A Local Convergence Theory for the Stochastic Gradient Descent Method in Non-Convex Optimization With Non-isolated Local Minima

Abstract: Non-convex loss functions arise frequently in modern machine learning, and for the theoretical analysis of stochastic optimization methods, the presence of non-isolated minima presents a unique challenge that has remained under-explored. In this paper, we study the local convergence of the stochastic gradient descent method to non-isolated global minima. Under mild assumptions, we estimate the probability for the iterations to stay near the minima by adopting the notion of stochastic stability. After establish…
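
As a rough illustration of the setting (not taken from the paper), the following minimal Python sketch runs plain SGD on a toy loss f(w) = (‖w‖² − 1)², whose global minima form the entire unit circle and are therefore non-isolated. The step size, noise scale, starting point, and iteration count are illustrative assumptions, not values from the paper.

# Toy sketch: SGD on a loss with a non-isolated set of global minima.
# f(w) = (||w||^2 - 1)^2 is minimized on the whole unit circle ||w|| = 1.
import numpy as np

rng = np.random.default_rng(0)

def grad(w):
    # Exact gradient of f(w) = (||w||^2 - 1)^2.
    return 4.0 * (np.dot(w, w) - 1.0) * w

w = np.array([0.5, 1.5])        # start near, but not on, the minimum set
eta = 1e-2                      # assumed step size
noise_scale = 1e-2              # assumed gradient-noise level

for _ in range(5000):
    g = grad(w) + noise_scale * rng.standard_normal(2)  # stochastic gradient
    w = w - eta * g                                      # SGD update

print("||w|| =", np.linalg.norm(w))  # stays close to 1, i.e. near the minimum set

In this toy run the iterate settles near the circle ‖w‖ = 1 rather than at a single isolated point; the paper's stability estimates concern the probability of the iterates remaining in such a neighborhood of a non-isolated minimum set.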

Cited by 0 publications
References 15 publications