2016 IEEE 55th Conference on Decision and Control (CDC)
DOI: 10.1109/cdc.2016.7798401
Line search for averaged operator iteration

Abstract: Many popular first order algorithms for convex optimization, such as forward-backward splitting, Douglas-Rachford splitting, and the alternating direction method of multipliers (ADMM), can be formulated as averaged iteration of a nonexpansive mapping. In this paper we propose a line search for averaged iteration that preserves the theoretical convergence guarantee, while often accelerating practical convergence. We discuss several general cases in which the additional computational cost of the line search is m…
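The setup described in the abstract — iterating an averaged, nonexpansive map until a fixed point is reached — can be illustrated with a minimal sketch. The map `S`, the averaging parameter `alpha`, and the projection example below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def averaged_iteration(S, x0, alpha=0.5, tol=1e-10, max_iter=1000):
    """Krasnosel'skii-Mann iteration x_{k+1} = x_k + alpha*(S(x_k) - x_k)
    for a nonexpansive map S; converges to a fixed point of S when one
    exists and alpha lies in (0, 1)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = S(x) - x                      # fixed-point residual
        if np.linalg.norm(r) <= tol:
            break
        x = x + alpha * r                 # averaged step
    return x

# Illustration: S composes projections onto the line y = x and onto the
# x-axis, both nonexpansive; their unique common point (the origin) is
# the fixed point of the composition.
def S(x):
    m = (x[0] + x[1]) / 2.0               # project onto the line y = x ...
    return np.array([m, 0.0])             # ... then onto the x-axis

x_star = averaged_iteration(S, [3.0, 1.0])
```

Splitting methods such as those named in the abstract (forward-backward, Douglas-Rachford, ADMM) fit this template with a more elaborate `S`.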

Cited by 30 publications (41 citation statements)
References 25 publications
“…However, by ensuring that the residual r(x k+1 ) is smaller than r(x iLS+1 ), we can guarantee that it will eventually decrease. This is proven for general line search schemes in [11] and we state it for the projected line search below.…”
Section: Projected Line Search
confidence: 82%
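The safeguard described in this excerpt — accept a trial point only if it does not worsen the fixed-point residual, so the nominal convergence guarantee is preserved — can be sketched as follows. This is a hypothetical illustration of the idea, not the exact scheme from [11]; `S`, `alpha`, and the candidate step sizes are assumptions:

```python
import numpy as np

def safeguarded_step(S, x, alpha=0.5, trials=(8.0, 4.0, 2.0, 1.0)):
    """One averaged step with a residual-based line search: try longer
    steps along the fixed-point direction S(x) - x and keep the first
    candidate whose residual ||S(y) - y|| is no worse than that of the
    plain averaged step; otherwise fall back to the plain step."""
    r = S(x) - x
    x_plain = x + alpha * r
    res_plain = np.linalg.norm(S(x_plain) - x_plain)
    for t in trials:                       # longest candidate first
        y = x + t * alpha * r
        if np.linalg.norm(S(y) - y) <= res_plain:
            return y                       # accept the longer step
    return x_plain                         # safeguard: plain step

# Toy map: composition of two projections in the plane, with the
# fixed point at the origin.
def S(x):
    m = (x[0] + x[1]) / 2.0
    return np.array([m, 0.0])

x = np.array([3.0, 1.0])
for _ in range(200):
    x = safeguarded_step(S, x)
```

Because the plain averaged step itself is among the candidates, the accepted point never has a larger residual than the nominal iteration would produce, which is what makes the eventual decrease argument in the excerpt go through.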
“…A method for applying line search on algorithms based on iterating averaged operators was recently proposed in [11]. The method was shown to often improve practical convergence.…”
Section: Line Search
confidence: 99%
“…The convergence of the proposed algorithm is proved in the following. As shown in [26], the algorithm is equivalent to…”
Section: Optimization Methods
confidence: 99%
“…The convergence of the proposed algorithm is proved in the following. As shown in [26], the algorithm is equivalent to

$$
\begin{aligned}
(\tilde{x}^k, \tilde{z}^k) &= \operatorname*{arg\,min}_{\tilde{x},\,\tilde{z}} \; \tilde{x}^T Q_0 \tilde{x} + C_0^T \tilde{x} + \frac{\sigma}{2}\left\|\tilde{x} - x^k\right\|_2^2 + \frac{\rho}{2}\left\|\tilde{z} - 2\Xi(v^k) + v^k\right\|_2^2, \\
x^{k+1} &= x^k + \alpha\left(\tilde{x}^k - x^k\right), \\
v^{k+1} &= v^k + \alpha\left(\tilde{z}^k - \Xi(v^k)\right),
\end{aligned}
$$

where $z^k = \Xi(v^k)$ and $y^k = \rho\left(v^k - \Xi(v^k)\right)$. We define the primal residual and dual residual of the problem as

$$
r_{\mathrm{prim}} \triangleq C^T x - z, \qquad r_{\mathrm{dual}} \triangleq 2 Q_0 x + C_0 + C^T y.
$$
…”
Section: Optimal Energy Management
confidence: 99%
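The residuals defined in this excerpt give a natural stopping test for the iteration. A hypothetical sketch: the symbols `Q0`, `C0`, `C` follow the excerpt's definitions as reconstructed above, while the function names, tolerance, and demo data are illustrative assumptions:

```python
import numpy as np

def residuals(x, z, y, Q0, C0, C):
    """Primal and dual residuals as defined in the excerpt:
    r_prim = C^T x - z and r_dual = 2*Q0 x + C0 + C^T y."""
    r_prim = C.T @ x - z
    r_dual = 2.0 * (Q0 @ x) + C0 + C.T @ y
    return r_prim, r_dual

def converged(x, z, y, Q0, C0, C, eps=1e-6):
    """Stop once both residual norms fall below the tolerance."""
    r_prim, r_dual = residuals(x, z, y, Q0, C0, C)
    return max(np.linalg.norm(r_prim), np.linalg.norm(r_dual)) <= eps

# Tiny consistency check: pick data for which both residuals vanish.
Q0, C = np.eye(2), np.eye(2)
x = np.array([1.0, -2.0])
z, y = x.copy(), np.zeros(2)
C0 = -2.0 * x                  # makes r_dual = 2x - 2x + 0 = 0
ok = converged(x, z, y, Q0, C0, C)
```

In practice such residual norms are checked once per iteration of the update above, with separate (often relative) tolerances for the primal and dual parts.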