1997
DOI: 10.1023/a:1022602123316
Convergence Analysis of Perturbed Feasible Descent Methods

Abstract: We develop a general approach to convergence analysis of feasible descent methods in the presence of perturbations. The important novel feature of our analysis is that perturbations need not tend to zero in the limit. In that case, standard convergence analysis techniques are not applicable, and a new approach is needed. We show that, in the presence of perturbations, a certain ε-approximate solution can be obtained, where ε depends linearly on the level of perturbations. Applications to the g…
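The abstract's main claim can be illustrated numerically. The following is a minimal sketch, not the paper's exact method: projected gradient descent on a box-constrained quadratic, where every gradient evaluation is corrupted by a bounded perturbation of magnitude eps that never vanishes. The problem data (the box, the quadratic, the stepsize) are illustrative assumptions; the point is that the iterates stabilize within an error proportional to eps rather than converging exactly.

```python
import numpy as np

# Sketch (assumed setup, not the paper's algorithm): minimize
# f(x) = 0.5*||x - c||^2 over the box [0, 1]^n by feasible descent,
# with a non-vanishing perturbation of size eps in each gradient.
rng = np.random.default_rng(0)
n, eps, step = 5, 1e-3, 0.5
c = np.array([0.3, 1.5, -0.2, 0.8, 0.5])  # unconstrained minimizer (partly outside the box)
x_star = np.clip(c, 0.0, 1.0)             # exact constrained solution

x = np.zeros(n)
for _ in range(200):
    grad = x - c                               # exact gradient of f
    noise = eps * rng.uniform(-1.0, 1.0, n)    # bounded perturbation that does NOT tend to zero
    x = np.clip(x - step * (grad + noise), 0.0, 1.0)  # projection keeps the iterate feasible

# The final error is on the order of eps, matching the linear dependence
# of the approximate-solution quality on the perturbation level.
err = np.linalg.norm(x - x_star)
```

Because the projection is nonexpansive and the unperturbed map is a contraction here, the residual error is bounded by a constant multiple of eps, which is the qualitative behavior the abstract describes.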

Cited by 14 publications (13 citation statements)
References 27 publications
“…Even the reverse setting is generalized in the non-convex setting [5,11], namely where the backward-step is performed on a non-smooth non-convex function. As the amount of data to be processed is growing and algorithms are supposed to exploit all the data in each iteration, inexact methods become interesting, though we do not consider erroneous estimates in this paper. Forward-backward splitting schemes also seem to work for non-convex problems with erroneous estimates [44,43]. A mathematical analysis of inexact methods can be found, e.g., in [14,5], but with the restriction that the method is explicitly required to decrease the function values in each iteration.…”
mentioning
confidence: 99%
“…At the same time, we also prove that either (13) or (15) is true with Wolfe or Armijo or Goldstein linesearch on the uniformly convex function. In doing so, we remove various boundedness conditions such as boundedness from below of f(·), boundedness of {x_k}, etc.…”
Section: Introduction
mentioning
confidence: 61%
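The Armijo linesearch mentioned in the statement above can be sketched in its generic textbook backtracking form (this is an assumed standard variant, not the cited paper's exact rule):

```python
import numpy as np

def armijo_step(f, grad_f, x, d, sigma=1e-4, beta=0.5, t0=1.0, max_halvings=50):
    """Backtracking Armijo linesearch: return a stepsize t with
    f(x + t*d) <= f(x) + sigma * t * <grad_f(x), d>."""
    fx = f(x)
    slope = grad_f(x) @ d          # directional derivative; d must be a descent direction
    t = t0
    for _ in range(max_halvings):
        if f(x + t * d) <= fx + sigma * t * slope:
            return t               # sufficient decrease achieved
        t *= beta                  # otherwise backtrack
    return t

# Usage on the uniformly convex quadratic f(x) = ||x||^2:
f = lambda x: x @ x
grad_f = lambda x: 2 * x
x = np.array([1.0, -2.0])
t = armijo_step(f, grad_f, x, -grad_f(x))
```

The sufficient-decrease condition enforced here is exactly what rules such as (13)/(15) in the quoted statement build on: each accepted step reduces f by at least a fixed fraction of the predicted linear decrease.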
“…Then, assertion (16) follows from the optimality condition for (7). We proceed to prove the last assertion.…”
Section: Convergence Properties
mentioning
confidence: 82%
“…where d_k is computed using the solution of (14) [dual to (7), see Lemma 3.1]. Relations (4), (5) essentially constitute the usual stopping test in bundle methods; see e.g.…”
Section: Bundle Methods With Inexact Data
mentioning
confidence: 99%