1977
DOI: 10.1007/bf01584346

On convergence rates of subgradient optimization methods

Cited by 206 publications (115 citation statements)
References 17 publications

“…where f̂ is an estimate of the optimal value f*, and the coefficient ρ satisfies 0 < ρ ≤ 2, or according to the convergent series method (see Shor [1968] and Goffin [1977]) with…”
Section: Computational Results
Mentioning, confidence: 99%
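The two step-size rules this excerpt contrasts can be made concrete. The following minimal Python sketch is an illustration only (the function names, the default relaxation coefficient, and the ℓ1 test function are assumptions, not code from the cited paper): a Polyak-type step using an estimate f̂ of f* with 0 < ρ ≤ 2, and a series rule in the spirit of Shor [1968] with steps α_k = a/(k+1), for which Σα_k diverges while Σα_k² converges.

import numpy as np

def polyak_step(f_x, f_hat, g, rho=1.5):
    # Polyak-type step length: rho * (f(x) - f_hat) / ||g||^2, 0 < rho <= 2;
    # f_hat is an estimate of the optimal value f*.
    return rho * (f_x - f_hat) / (np.dot(g, g) + 1e-12)

def series_step(k, a=1.0):
    # Series rule: alpha_k = a/(k+1); the steps sum to infinity
    # while their squares sum to a finite value.
    return a / (k + 1)

# Illustration on the nonsmooth f(x) = ||x||_1, minimized at the origin;
# sign(x) is a valid subgradient of the l1 norm.
x = np.array([3.0, -2.0])
for k in range(200):
    g = np.sign(x)
    f_x = np.abs(x).sum()
    t = polyak_step(f_x, 0.0, g)   # or: t = series_step(k)
    x = x - t * g
print(x)   # ends up near the minimizer [0, 0]
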
“…For a non-differentiable objective function like ours, first-order subgradient algorithms are often employed. However, the required number of iterations to achieve an ε-close solution is O(1/ε²) [9]. Direct application of subgradient algorithms renders the optimization problem impractical.…”
Section: A First Order Methods
Mentioning, confidence: 99%
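The O(1/ε²) count quoted here can be traced to the standard worst-case bound for the subgradient method (a textbook reconstruction of the usual argument, not taken from reference [9] itself). For a convex f whose subgradients satisfy ‖g‖ ≤ G and a starting point with ‖x₀ − x*‖ ≤ R, steps α_i give

min_{0≤i≤k} f(x_i) − f* ≤ (R² + G² Σ_{i=0}^{k} α_i²) / (2 Σ_{i=0}^{k} α_i).

With the constant step α_i = R/(G√(k+1)) the right-hand side equals RG/√(k+1), so guaranteeing an ε-close value in the worst case forces k + 1 ≥ R²G²/ε², i.e. an O(1/ε²) iteration count.
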
“…Historically, the subgradient methods were the first numerical schemes for non-smooth convex minimization (see [11] and [7] for historical comments). Very soon it was proved that the efficiency estimate of these schemes is of the order O(1/ε²), where ε is the desired absolute accuracy of the approximate solution in function value (see also [3]).…”
Section: Introduction
Mentioning, confidence: 99%
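A quick numerical check of the 1/√k behavior behind this estimate (again an illustrative sketch under assumed parameters, unrelated to the cited works): with the constant step tuned for a horizon of K iterations, quadrupling K should roughly halve the best attained gap.

import numpy as np

def best_gap(K, n=10, R=10.0, seed=0):
    # Subgradient method on f(x) = ||x||_1 (so f* = 0) with the constant
    # step alpha = R / (G * sqrt(K)), where G = sqrt(n) bounds ||sign(x)||.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-R / np.sqrt(n), R / np.sqrt(n), size=n)
    alpha = R / (np.sqrt(n) * np.sqrt(K))
    best = np.abs(x).sum()
    for _ in range(K):
        x = x - alpha * np.sign(x)
        best = min(best, np.abs(x).sum())
    return best

for K in (100, 400, 1600):
    print(K, best_gap(K))   # the gap shrinks roughly like 1/sqrt(K)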