2011
DOI: 10.1080/01630563.2011.590914
A Strongly Convergent Method for Nonsmooth Convex Minimization in Hilbert Spaces

Abstract: In this article, we propose a strongly convergent variant of the projected subgradient method for constrained convex minimization problems in Hilbert spaces. The advantage of the proposed method is that it converges strongly whenever the problem has solutions, without additional assumptions. The method also has the following desirable property: the sequence converges to the solution of the problem that lies closest to the initial iterate.
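The article's exact iteration is not reproduced on this page. As background, the following is a minimal sketch of the classical projected subgradient iteration that the abstract's method builds on, written in R^n as a finite-dimensional stand-in for the Hilbert-space setting. The ℓ1 objective, ball constraint, diminishing step sizes, and iteration count are illustrative assumptions only; the article's variant modifies this basic scheme so that the iterates converge strongly to the solution closest to the initial iterate.

```python
# Minimal sketch (not the paper's exact scheme): a classical projected
# subgradient iteration in R^n.  The objective, constraint set, and step
# sizes below are illustrative assumptions only.
import numpy as np

TARGET = np.array([0.7, -0.3, 0.5])  # hypothetical point; f(x) = ||x - TARGET||_1


def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius} (example constraint set)."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x


def subgradient_f(x):
    """One subgradient of the nonsmooth objective f(x) = ||x - TARGET||_1."""
    return np.sign(x - TARGET)  # sign vector is a valid subgradient, including at kinks


def projected_subgradient(x0, n_iter=500):
    x = x0.copy()
    for k in range(n_iter):
        alpha = 1.0 / (k + 1)                    # diminishing, non-summable step sizes
        x = project_ball(x - alpha * subgradient_f(x))  # subgradient step, then projection
    return x


print(projected_subgradient(np.zeros(3)))
```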

Cited by 19 publications (12 citation statements) | References 20 publications

“…This differs from the analysis of [10]; provided that x0 belongs to X, the proposed bundle algorithm solves the minimal norm solution problem associated to (1) without requiring either differentiability or Lipschitz continuity of f, in contrast to the methods given in [8].…”
Section: Introduction
confidence: 90%

“…For some equivalences with condition (6), see for instance [12, Proposition 16.17]. Assumption (6) has been considered in the convergence analysis of many optimization methods in infinite-dimensional spaces (see for instance [4, 10, 13] and references therein). In [6], assumption (6) is also used, although this hypothesis is not explicitly stated in [6, Proposition 4.3].…”
Section: Notation
confidence: 99%

“…We emphasize that Assumption (A1) is a typical hypothesis for proving the convergence of scalar subgradient methods in an infinite-dimensional setting; see [1, 8, 9, 25]. As stated in [23], for both the scalar and the vector framework, this assumption holds trivially in finite-dimensional spaces.…”
Section: Assumptions
confidence: 99%

“…The proposed conceptual algorithm has two variants, called Algorithm R and Algorithm S. The first is based on Robinson's subgradient algorithm given in [27] for solving problem (4). The S variant corresponds to a special modification of the subgradient algorithms proposed in [9] for the scalar problem (m = 1 and K = ℝ₊) and in [10] for solving problem (4). The main difference between the proposed variants lies in how the projection step is done.…”
Section: Introduction
confidence: 99%

“…This kind of hyperplane has been used in several works; see [24, 25]. The variants of the conceptual algorithm given in (12), (13) and (14) are called Algorithms 1, 2 and 3, respectively.…”
Section: Iterative Step 2: Set
confidence: 99%
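
For background on the projection and separating-hyperplane steps referred to in the quoted statements above, the sketch below shows the standard closed-form Euclidean projection onto a halfspace {x : <a, x> <= b}. This is a generic building block, not necessarily the exact projection step used in the cited algorithms; the vectors in the usage example are hypothetical.

```python
# Standard Euclidean projection onto a halfspace H = {y : <a, y> <= b}.
# Generic illustration only; not claimed to be the exact step of the cited methods.
import numpy as np


def project_halfspace(x, a, b):
    """Return the closest point of H = {y : <a, y> <= b} to x; x is unchanged if it already lies in H."""
    violation = a @ x - b
    if violation <= 0.0:
        return x
    return x - (violation / (a @ a)) * a


# Hypothetical usage: project [2, 1] onto {y : y1 + y2 <= 1}.
x = np.array([2.0, 1.0])
a = np.array([1.0, 1.0])
print(project_halfspace(x, a, 1.0))  # -> [1., 0.]
```

Such halfspace projections are attractive in subgradient-type schemes because they require only an inner product and a single vector update, whereas projecting directly onto the original constraint set can be much more expensive.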