2022
DOI: 10.48550/arxiv.2202.13744
Preprint

Subgradient sampling for nonsmooth nonconvex minimization

Abstract: Risk minimization for nonsmooth nonconvex problems naturally leads to first-order sampling or, by an abuse of terminology, to stochastic subgradient descent. We establish the convergence of this method in the path-differentiable case, and describe more precise results under additional geometric assumptions. We recover and improve results from Ermoliev-Norkin [27] by using a different approach: conservative calculus and the ODE method. In the definable case, we show that first-order subgradient sampling avoids a…
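For orientation, the method the abstract refers to can be written in a few lines. Below is a minimal sketch of first-order subgradient sampling on a toy risk, minimizing E[|w·x − y|] over w; the problem, variable names, and step-size schedule are illustrative assumptions, not taken from the paper.

import numpy as np

# Toy risk-minimization instance (illustrative, not from the paper):
# minimize the expected nonsmooth loss  E_(x,y)[ |w @ x - y| ]  over w.
rng = np.random.default_rng(0)

def sampled_subgradient(w, x, y):
    """A selection from the subdifferential of w -> |w @ x - y|."""
    # sign(0) = 0 is a valid element of the subdifferential [-1, 1] at the kink.
    return np.sign(w @ x - y) * x

d = 5
w_true = np.arange(d, dtype=float)   # hypothetical ground truth
w = np.zeros(d)

for k in range(1, 10_001):
    x = rng.normal(size=d)                  # draw a fresh sample at each step
    y = w_true @ x + rng.laplace()          # noisy nonsmooth target
    gamma = k ** -0.75                      # sum gamma_k = inf, sum gamma_k^2 < inf
    w -= gamma * sampled_subgradient(w, x, y)

print(np.round(w, 2))                       # w should drift toward w_true

The step-size conditions in the comment are the classical ones used by the ODE method the abstract invokes; the exact assumptions of the paper may differ.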

Cited by 0 publications
References 25 publications (57 reference statements)
