Metric selection in fast dual forward–backward splitting
2015, DOI: 10.1016/j.automatica.2015.09.010

Cited by 42 publications (46 citation statements)
References 30 publications
“…As computation times become smaller and smaller, overheads due to the runtime environment get more and more relevant in the total CPU time. A tailored, low-level implementation of the same algorithm could significantly decrease the CPU times shown in Table I: this is also reported in [50], where a speedup of more than a factor 20 is observed using C code generation.…”
Section: Aircraft Control (supporting; confidence: 56%)
“…The dual problem has a condition number of 10^8. To improve the convergence of the algorithms we therefore considered scaling the dual variables according to the Jacobi scaling, which consists of a diagonal change of variable (in the dual space) enforcing the (dual) Hessian to have diagonal elements equal to one (see also [50], [52] on the problem of preconditioning fast dual proximal gradient methods). Note that a diagonal change of variable in the dual space simply corresponds to a scaling of the equality constraints, when the problem is equivalently formulated as (P).…”
Section: Aircraft Control (mentioning; confidence: 99%)
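The Jacobi scaling described in this excerpt can be sketched numerically. In this minimal illustration, `H` is an arbitrary ill-conditioned symmetric positive definite matrix standing in for a dual Hessian (it is not data from the cited paper); the diagonal change of variable D = diag(H)^{-1/2} gives D H D unit diagonal and typically much better conditioning:

```python
import numpy as np

# Hedged sketch of Jacobi scaling: a diagonal change of variable that gives
# the (dual) Hessian unit diagonal. H below is an illustrative ill-conditioned
# SPD matrix, not a problem instance from the cited work.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = A @ A.T + np.diag([1e4, 1.0, 1e-2, 1.0])  # stand-in for a dual Hessian

d = 1.0 / np.sqrt(np.diag(H))                 # Jacobi scaling D = diag(d)
H_scaled = d[:, None] * H * d[None, :]        # D H D has unit diagonal

print(np.allclose(np.diag(H_scaled), 1.0))    # True by construction
print(np.linalg.cond(H), np.linalg.cond(H_scaled))  # conditioning improves
```

As the excerpt notes, applying D in the dual space amounts to rescaling the equality constraints of the primal formulation.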
“…This preconditioning is obtained by replacing the original minimization of f(x) + g(x) by minimization of f(Dq) + g(Dq), which gives rise to iterations involving the conjugate proximal maps D^{-1} F_D(Dq), where F_D is the proximal map for f as in (6) using the norm ‖·‖_{(DD^T)^{-1}} in place of the usual Euclidean metric. [15] includes some results about the rate of convergence as a function of D. In some cases, a larger value of ρ leads to faster convergence relative to ρ = 0.5. There are also results on convergence in the case that fixed ρ is replaced by a sequence of ρ_k such that…”
Section: Anisotropic Preconditioned Mann Iteration for Nonexpansive Maps (mentioning; confidence: 99%)
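The role of the relaxation parameter ρ mentioned above can be illustrated with a toy relaxed (Mann) iteration x⁺ = (1 − ρ)x + ρT(x). The operator T here is an affine contraction chosen purely for illustration, not the operator from [15]; with this choice the larger relaxation happens to converge faster than ρ = 0.5, matching the remark in the excerpt:

```python
import numpy as np

# Hedged sketch of a relaxed (Mann) iteration x+ = (1 - rho) x + rho T(x).
# T is a toy affine contraction picked for illustration only.
M = np.diag([0.9, -0.5])
b = np.array([1.0, 2.0])
T = lambda x: M @ x + b
x_star = np.linalg.solve(np.eye(2) - M, b)    # the unique fixed point of T

def mann_error(rho, iters=200):
    """Distance to the fixed point after running the relaxed iteration."""
    x = np.zeros(2)
    for _ in range(iters):
        x = (1.0 - rho) * x + rho * T(x)      # relaxed fixed-point step
    return np.linalg.norm(x - x_star)

# With these eigenvalues, rho = 1.0 contracts faster than rho = 0.5.
print(mann_error(1.0), mann_error(0.5))
```

The design point: averaging with ρ < 1 shifts the iteration map's eigenvalues toward 1, which slows the dominant mode here; in general the best ρ depends on the spectrum of T.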
“…First-order methods are known to be sensitive to scaling, and preconditioning can remarkably improve their convergence rate. Various preconditioning methods such as [42], [43] have been proposed in the literature. Here, we employ a simple diagonal preconditioning, which consists of computing a diagonal matrix H̃_D with positive diagonal entries which approximates the dual Hessian H_D, and using H̃_D^{-1/2} to scale the dual vector [44, 2.3.1].…”
Section: Preconditioning and Choice of λ (mentioning; confidence: 99%)
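The effect of the H̃_D^{-1/2} scaling in this last excerpt can be sketched by counting gradient-descent iterations on a quadratic with and without the scaling. Here H̃_D is taken as the exact diagonal of a stand-in dual Hessian H; the matrix, right-hand side, and tolerance are illustrative assumptions, not values from the cited work:

```python
import numpy as np

# Hedged sketch: diagonal preconditioning with S = Htilde^{-1/2}, where
# Htilde approximates the dual Hessian H (here: its exact diagonal).
H = np.diag([100.0, 1.0, 0.01]) + 1e-3 * np.ones((3, 3))  # stand-in Hessian
g = np.ones(3)

def gradient_steps(H, g, tol=1e-6, max_it=500_000):
    """Iterations of gradient descent on 0.5 x'Hx + g'x until ||grad|| < tol."""
    x = np.zeros_like(g)
    step = 1.0 / np.linalg.eigvalsh(H)[-1]    # 1 / largest eigenvalue
    for k in range(max_it):
        r = H @ x + g
        if np.linalg.norm(r) < tol:
            return k
        x -= step * r
    return max_it

S = np.diag(1.0 / np.sqrt(np.diag(H)))        # Htilde^{-1/2}
# The scaled problem S H S has unit diagonal and needs far fewer iterations.
print(gradient_steps(H, g), gradient_steps(S @ H @ S, S @ g))
```

Minimizing over the scaled dual vector μ = S⁻¹λ replaces H by S H S, which here is nearly the identity, so the iteration count drops by several orders of magnitude.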