2020
DOI: 10.1007/s11590-020-01617-9
On the convergence rate of the Halpern-iteration

Abstract: In this work, we give a tight estimate of the rate of convergence for the Halpern-iteration for approximating a fixed point of a nonexpansive mapping in a Hilbert space. Specifically, using semidefinite programming and duality we prove that the norm of the residuals is upper bounded by the distance of the initial iterate to the closest fixed point divided by the number of iterations plus one.
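As a concrete illustration of the scheme the abstract analyzes, the sketch below runs the Halpern iteration x_{k+1} = λ_{k+1} x_0 + (1 − λ_{k+1}) T(x_k) on a simple nonexpansive map. The choice of map (a plane rotation with fixed point at the origin) and the anchoring coefficients λ_k = 1/(k+1) are illustrative assumptions, not details taken from the paper itself.

```python
import numpy as np

# A concrete nonexpansive map on R^2: a rotation about the origin (an isometry,
# hence nonexpansive); its unique fixed point is the origin.
theta = 2.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x

x0 = np.array([1.0, 0.0])          # ||x0 - x*|| = 1 with x* = 0
x = x0.copy()
N = 1000
for k in range(N):
    lam = 1.0 / (k + 2)            # assumed anchoring coefficient lambda_{k+1} = 1/(k+2)
    x = lam * x0 + (1 - lam) * T(x)

res = np.linalg.norm(x - T(x))     # fixed-point residual ||x_N - T(x_N)||
print(f"residual after {N} steps: {res:.2e}  (1/(N+1) for scale: {1.0/(N+1):.2e})")
```

The printed residual should shrink roughly like 1/N, consistent with the O(1/N) behavior of the bound stated in the abstract.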

Cited by 51 publications (67 citation statements)
References 11 publications
“…Recently, [6,14,17,36,39] found that Halpern-type [11] (or anchoring) methods yield a fast O(1/k^2) rate in terms of the squared gradient norm for minimax problems. [14,17] showed that the (implicit) Halpern iteration [11] with appropriately chosen step coefficients has an O(1/k^2) rate on the squared norm of a monotone F. Then, for a cocoercive F, an (explicit) version of the Halpern iteration was studied in [6,14] that has the same fast rate.…”
Section: Introduction
mentioning
confidence: 99%
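The excerpt above describes applying Halpern-type anchoring to a cocoercive operator F. The following is a minimal sketch of that idea under illustrative assumptions: F is taken to be the gradient of a convex quadratic (which is (1/L)-cocoercive), the forward step uses step size 1/L, and the anchoring coefficients are 1/(k+2). It is not the exact method of the cited works.

```python
import numpy as np

# Halpern (anchoring) iteration applied to the forward step of a cocoercive
# operator. F = grad f for f(x) = 0.5 x^T A x is (1/L)-cocoercive with
# L = largest eigenvalue of A, so T(x) = x - F(x)/L is nonexpansive.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M.T @ M                                   # symmetric positive semidefinite
L = np.linalg.eigvalsh(A).max()
F = lambda x: A @ x
T = lambda x: x - F(x) / L                    # nonexpansive forward step

x0 = rng.standard_normal(5)
x = x0.copy()
for k in range(2000):
    lam = 1.0 / (k + 2)                       # assumed anchoring coefficient
    x = lam * x0 + (1 - lam) * T(x)

print("||F(x_k)|| after anchoring:", np.linalg.norm(F(x)))
```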
“…The authors of [35] introduced tightness guarantees for smooth (strongly) convex optimization, and for larger classes of problems in [36] (where a list of sufficient conditions for applying the methodology is provided). It was also used to deal with nonsmooth problems [11,36], monotone inclusions and variational inequalities [16,17,21,31], and even to study fixed-point iterations of non-expansive operators [25]. Fixed-step gradient descent was among the first algorithms to be studied with this methodology in different settings: for (possibly composite) smooth (possibly strongly) convex optimization [14,15,35,36], and its line-search version was studied using the same methodology in [22].…”
mentioning
confidence: 99%
“…The performance estimation problem (PEP) is a computer-assisted proof methodology that analyzes the worst-case performance of optimization algorithms through semidefinite programs (Drori and Teboulle, 2014; Taylor et al., 2017a,b). The use of the PEP has led to many discoveries that would otherwise have been difficult without such assistance (Kim and Fessler, 2018a; Taylor et al., 2018; Taylor and Bach, 2019; Barré et al., 2020; De Klerk et al., 2020; Gu and Yang, 2020; Lieder, 2021; Ryu et al., 2020; Dragomir et al., 2021; Kim, 2021; Yoon and Ryu, 2021). Notably, the algorithms OGM (Drori and Teboulle, 2014; Kim and Fessler, 2016, 2018b), OGM-G (Kim and Fessler, 2021), and ITEM (Taylor and Drori) were obtained by using the PEP for the setup of minimizing a smooth convex (possibly strongly convex) function.…”
Section: Preliminaries and Notations
mentioning
confidence: 99%
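The worst-case analysis described in this excerpt can be written down directly for the setting of the present paper: the worst case of the Halpern iteration over nonexpansive operators with a fixed point is itself a small semidefinite program in the Gram matrix of the iterates. The sketch below sets up such an SDP with cvxpy; the Gram-matrix parametrization, the anchoring coefficients 1/(k+2), and the use of cvxpy are illustrative assumptions and not the authors' exact formulation.

```python
import numpy as np
import cvxpy as cp

# Worst-case (performance-estimation) SDP for N steps of the Halpern iteration
# on a nonexpansive operator T with fixed point x*, assuming ||x0 - x*|| <= 1.
# Gram basis vectors: x0 - x*, T(x0) - x*, ..., T(x_N) - x*.
N = 10
dim = N + 2

def e(i):
    v = np.zeros(dim)
    v[i] = 1.0
    return v

x = [e(0)]                                  # x_0 - x* in the Gram basis
Tx = [e(i + 1) for i in range(N + 1)]       # T(x_k) - x* in the Gram basis
for k in range(N):
    lam = 1.0 / (k + 2)                     # assumed anchoring coefficient
    x.append(lam * x[0] + (1 - lam) * Tx[k])

G = cp.Variable((dim, dim), PSD=True)       # Gram matrix of the basis vectors

def sq(v):                                  # ||v||^2 as a linear function of G
    return cp.sum(cp.multiply(G, np.outer(v, v)))

constraints = [sq(x[0]) <= 1]               # ||x0 - x*||^2 <= 1
for i in range(N + 1):
    constraints.append(sq(Tx[i]) <= sq(x[i]))                       # pair (x_i, x*), T(x*) = x*
    for j in range(i + 1, N + 1):
        constraints.append(sq(Tx[i] - Tx[j]) <= sq(x[i] - x[j]))    # nonexpansiveness

prob = cp.Problem(cp.Maximize(sq(x[N] - Tx[N])), constraints)
prob.solve()
res = np.sqrt(prob.value)
print(f"worst-case ||x_N - T(x_N)||: {res:.4f}, scaled by (N+1): {res * (N + 1):.4f}")
```

Scaling the computed worst-case residual by (N+1) makes it easy to read off the constant in the O(1/N) behavior numerically.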