2017 American Control Conference (ACC)
DOI: 10.23919/acc.2017.7963671

Line search for generalized alternating projections

Abstract: This paper is about line search for the generalized alternating projections (GAP) method. This method generalizes the von Neumann alternating projections method by alternating relaxed projections instead of exact ones. The method can be interpreted as an averaged iteration of a nonexpansive mapping. Therefore, a recently proposed line search method for such algorithms is applicable to GAP. We evaluate this line search and show situations when the line search can be pe…
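As a rough, self-contained illustration of the abstract's idea, the sketch below runs a GAP-style iteration in Python with a simple line search along the fixed-point residual S(x) − x. The acceptance rule, candidate step lengths, and test problem are assumptions made for this sketch, not the line-search conditions analyzed in the paper.

```python
import numpy as np

def gap_line_search(proj_U, proj_V, x0, alpha1=1.0, alpha2=1.0,
                    steps=(10.0, 5.0, 2.0), n_iter=1000, tol=1e-10):
    """GAP with a simple line search (illustrative sketch).

    One GAP map is S = P_U^{a1} o P_V^{a2}, where P_C^a = (1-a)I + aP_C
    is a relaxed projection. The line search tries longer moves along the
    fixed-point residual S(x) - x and accepts one if it does not increase
    the residual norm; otherwise it falls back to the plain step.
    """
    def relaxed(P, a):
        return lambda x: (1.0 - a) * x + a * P(x)

    PU, PV = relaxed(proj_U, alpha1), relaxed(proj_V, alpha2)
    S = lambda x: PU(PV(x))

    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        d = S(x) - x                        # fixed-point residual direction
        if np.linalg.norm(d) < tol:
            break
        base = np.linalg.norm(S(x + d) - (x + d))  # residual after plain step
        t_best = 1.0
        for t in steps:                     # try longer steps, largest first
            y = x + t * d
            if np.linalg.norm(S(y) - y) <= base:
                t_best = t
                break
        x = x + t_best * d
    return x

# Example: two hyperplanes in R^3 (projections have closed form).
proj_U = lambda x: x - (x[0] - 1.0) * np.array([1.0, 0.0, 0.0])
proj_V = lambda x: x - ((x[1] + x[2] - 2.0) / 2.0) * np.array([0.0, 1.0, 1.0])
x_star = gap_line_search(proj_U, proj_V, np.array([5.0, -3.0, 7.0]))
print(x_star)   # lies (numerically) in both hyperplanes
```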

Cited by 6 publications (14 citation statements)
References 18 publications
“…Thus α_{k+1} ∈ (0, 2) and each iteration is the result of an averaged mapping S_k with fixed points U ∩ V. It follows that the iterates converge to the fixed-point set U ∩ V, see e.g. [25].…”
Section: Adaptive Generalized Alternating Projections
confidence: 91%
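As a quick numerical sanity check on the averagedness claim above (an illustration added here, not code from the citing paper): a relaxed projection P_C^α = (1 − α)I + αP_C onto a closed convex set C is averaged, hence nonexpansive, for any α ∈ (0, 2). The unit-ball projection below is an assumed example set.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x):
    """Projection onto the closed unit Euclidean ball (convex)."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def relaxed(x, alpha):
    """P_C^alpha(x) = (1 - alpha) x + alpha P_C(x)."""
    return (1.0 - alpha) * x + alpha * proj_ball(x)

# For alpha in (0, 2) the relaxed projection should be nonexpansive:
# ||T x - T y|| <= ||x - y||.  Check the worst observed ratio over
# random point pairs.
for alpha in (0.5, 1.0, 1.5, 1.99):
    worst = 0.0
    for _ in range(10000):
        x, y = rng.normal(size=(2, 3))
        ratio = (np.linalg.norm(relaxed(x, alpha) - relaxed(y, alpha))
                 / np.linalg.norm(x - y))
        worst = max(worst, ratio)
    print(f"alpha = {alpha:4.2f}: worst ratio = {worst:.6f}")  # <= 1
```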
“…Let the relaxed projection onto a set C, with relaxation parameter α, be defined as P_C^α := (1 − α)I + αP_C. The generalized alternating projections (GAP) [25] for two closed, convex and nonempty sets U and V, with U ∩ V ≠ ∅, is then defined by the iteration…”
Section: Optimal Parameters for GAP
confidence: 99%
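To make the quoted definition concrete, here is a minimal sketch of the GAP iteration x_{k+1} = P_U^{α1}(P_V^{α2}(x_k)) in Python. The two hyperplanes and the relaxation parameters α1 = α2 = 1.5 ∈ (0, 2) are illustrative assumptions, not values taken from [25].

```python
import numpy as np

def proj_hyperplane(a, b):
    """Projection onto the hyperplane {x : a.T x = b}."""
    a = np.asarray(a, dtype=float)
    return lambda x: x - (a @ x - b) / (a @ a) * a

def gap(PU, PV, x0, alpha1=1.5, alpha2=1.5, n_iter=300):
    """x_{k+1} = P_U^{a1}(P_V^{a2}(x_k)), with P_C^a = (1-a)I + aP_C."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = (1.0 - alpha2) * x + alpha2 * PV(x)   # relaxed projection on V
        x = (1.0 - alpha1) * y + alpha1 * PU(y)   # relaxed projection on U
    return x

# U, V: two hyperplanes in R^3 whose intersection is a line.
PU = proj_hyperplane([1.0, 0.0, 0.0], 1.0)   # U = {x : x_1 = 1}
PV = proj_hyperplane([0.0, 1.0, 1.0], 2.0)   # V = {x : x_2 + x_3 = 2}
x = gap(PU, PV, np.array([5.0, -3.0, 7.0]))
print(x, np.linalg.norm(PU(x) - x), np.linalg.norm(PV(x) - x))  # residuals ~ 0
```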
“…We remark that the current implementation of our algorithms is sequential, but many steps can be carried out in parallel, so further computational gains may be achieved by taking full advantage of distributed computing architectures. In addition, it would be interesting to integrate acceleration techniques (e.g., [15, 41]) that promise to improve the convergence performance of ADMM in practice.…”
Section: Results
confidence: 99%
“…, λ_p}. According to (4), each iteration of the ADMM requires the minimization of the Lagrangian in (15) with respect to the X- and Y-blocks separately, followed by an update of the multipliers Z. At each step, the variables not being optimized over are fixed to their most recent value.…”
Section: ADMM for the Domain-Space Decomposition
confidence: 99%
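For context, the two-block pattern the quote describes (minimize over the X-block, then the Y-block, then update the multipliers Z, with the other block held at its most recent value) can be sketched on a toy problem. The objective, penalty parameter ρ, and closed-form updates are illustrative assumptions, not the domain-space decomposition of the citing paper.

```python
import numpy as np

# Illustrative two-block ADMM for
#     minimize  0.5||x - a||^2 + 0.5||y - b||^2   subject to  x = y,
# whose solution is x = y = (a + b) / 2.  Each iteration minimizes the
# augmented Lagrangian over the x-block, then the y-block (with the other
# block held at its most recent value), then updates the multiplier z.
a = np.array([4.0, 0.0, -2.0])
b = np.array([0.0, 2.0, 6.0])
rho = 1.0                                   # penalty parameter
x = np.zeros_like(a)
y = np.zeros_like(a)
z = np.zeros_like(a)
for k in range(100):
    x = (a - z + rho * y) / (1.0 + rho)     # argmin_x of the aug. Lagrangian
    y = (b + z + rho * x) / (1.0 + rho)     # argmin_y of the aug. Lagrangian
    z = z + rho * (x - y)                   # dual (multiplier) update
print(x, y)                                 # both -> (a + b) / 2
```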