2013
DOI: 10.1016/j.automatica.2013.01.009

Accelerated gradient methods and dual decomposition in distributed model predictive control

Abstract: We propose a distributed optimization algorithm for mixed L1/L2-norm optimization based on accelerated gradient methods using dual decomposition. The algorithm achieves convergence rate O(1/k^2), where k is the iteration number, which significantly improves on the O(1/k) convergence rates of existing duality-based distributed optimization algorithms. The performance of the developed algorithm is evaluated on randomly generated optimization problems arising in distributed model predictive control (DMPC) …
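The approach described in the abstract can be illustrated on a toy separable quadratic program coupled by an equality constraint: each agent minimizes its local Lagrangian term independently, and an accelerated (Nesterov-type) ascent is run on the dual variable. The problem data, dimensions, and iteration count below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two subsystems with local costs 0.5 * x_i' Q_i x_i, coupled by
# A1 x1 + A2 x2 = b (illustrative problem, not from the paper).
n, m = 4, 2
Q = [np.diag(rng.uniform(1.0, 3.0, n)) for _ in range(2)]
A = [rng.standard_normal((m, n)) for _ in range(2)]
b = rng.standard_normal(m)

# Lipschitz constant of the dual gradient: ||sum_i A_i Q_i^{-1} A_i'||.
H = sum(Ai @ np.linalg.solve(Qi, Ai.T) for Ai, Qi in zip(A, Q))
Lip = np.linalg.norm(H, 2)

def local_solve(lam):
    # Each agent minimizes its own Lagrangian term independently:
    # x_i(lam) = -Q_i^{-1} A_i' lam.
    return [-np.linalg.solve(Qi, Ai.T @ lam) for Ai, Qi in zip(A, Q)]

lam = lam_prev = np.zeros(m)
for k in range(1, 10001):
    # Nesterov extrapolation on the dual variable.
    v = lam + (k - 1) / (k + 2) * (lam - lam_prev)
    x = local_solve(v)
    grad = sum(Ai @ xi for Ai, xi in zip(A, x)) - b  # dual ascent direction
    lam_prev, lam = lam, v + grad / Lip

x = local_solve(lam)
residual = np.linalg.norm(sum(Ai @ xi for Ai, xi in zip(A, x)) - b)
print(residual)  # coupling-constraint residual shrinks toward zero
```

Only the dual variable and local solutions are exchanged between agents, which is what makes the scheme distributed; the coordinator never needs the full problem data.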

Cited by 223 publications (189 citation statements)
References 15 publications
“…Observe that the positivity of the slack variables does not need to be enforced. Indeed, it can be easily verified that inequality constraints of the form s_i ≥ 0 included in (13) would never be strictly active, and can therefore be discarded.…”
Section: B. Singularity of the Dual Hessian
confidence: 99%
“…the authors of [7] propose a coordinate ascent approach to solve constrained matrix problems. In [8], [9], [10] a gradient method is used, whereas in [11], [12], [13] a fast gradient method is used to attain dual optimality. All these methods use only first-order derivatives to obtain a search direction, and thus their theoretical and practical convergence cannot be faster than sublinear.…”
Section: Introduction
confidence: 99%
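The gap between plain gradient methods (O(1/k) worst case) and fast gradient methods (O(1/k^2)) that this citation refers to can be seen on a small ill-conditioned quadratic; the spectrum, starting point, and iteration count below are arbitrary choices for illustration:

```python
import numpy as np

# Ill-conditioned convex quadratic f(x) = 0.5 * sum_i d_i * x_i^2,
# with eigenvalues spanning [1e-4, 1], so the gradient's Lipschitz
# constant is L = 1 (illustrative example).
d = np.logspace(0, -4, 50)
x0 = np.ones_like(d)
f = lambda x: 0.5 * np.sum(d * x * x)
iters = 2000

# Plain gradient descent with step 1/L: O(1/k) worst-case rate.
x = x0.copy()
for _ in range(iters):
    x = x - d * x            # gradient of f is d * x
f_gd = f(x)

# Accelerated (FISTA-style) gradient method: O(1/k^2) worst-case rate.
x_prev = x0.copy()
y = x0.copy()
t = 1.0
for _ in range(iters):
    x_new = y - d * y                     # gradient step at the extrapolated point
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_new + ((t - 1.0) / t_new) * (x_new - x_prev)
    x_prev, t = x_new, t_new
f_acc = f(x_prev)

print(f_gd, f_acc)  # the accelerated method reaches a much smaller value
```

Both methods use only first-order information; the acceleration comes purely from the extrapolation step, with no extra gradient evaluations per iteration.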
“…Primal decomposition refers to decomposing the original problem, while dual decomposition operates on the dual formulation. Dual decomposition has been used in distributed MPC algorithms [23], [24], [25], [26] when the cost function is quadratic and separable.…”
Section: Comparison to Decomposition Algorithms, A) Primal and Dual
confidence: 99%
“…The topic of this paper is to provide certificates for the execution time of the optimization algorithm such that, for every feasible initial condition, the algorithm provides a solution within the sampling time. We consider linear time-invariant systems with polytopic constraints and quadratic cost, and a dual accelerated gradient method [6] is used to solve the resulting optimization problem.…”
Section: Introduction
confidence: 99%
“…For accelerated gradient methods there are convergence rate results [13], [2], [17], [6] that depend explicitly on the norm of the difference between the optimal solution and the initial iterate. If this norm can be bounded, a bound on the number of iterations needed to achieve a prespecified accuracy of the function value can be computed.…”
Section: Introduction
confidence: 99%
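The iteration bound this citation alludes to can be made concrete. Assuming the standard accelerated-gradient guarantee f(x_k) - f* <= 2*L*R^2 / (k+1)^2, where L is the Lipschitz constant of the gradient and R bounds the distance ||x0 - x*|| from the initial iterate to the optimum (the exact constant differs between variants), the a priori iteration certificate is:

```python
import math

# A priori iteration certificate derived from the accelerated-gradient
# bound f(x_k) - f* <= 2*L*R**2 / (k+1)**2 (constant varies by variant).
def iteration_bound(L, R, eps):
    """Smallest k for which the bound above guarantees f(x_k) - f* <= eps."""
    return max(0, math.ceil(math.sqrt(2.0 * L / eps) * R - 1.0))

print(iteration_bound(L=10.0, R=5.0, eps=1e-3))  # 707
```

Since the certificate scales with R, any conservatism in the bound on ||x0 - x*|| translates directly into extra guaranteed iterations, which is why tight norm bounds matter for real-time execution-time certificates.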