2010
DOI: 10.1016/j.automatica.2010.08.011

Stability of primal–dual gradient dynamics and applications to network optimization

Cited by 318 publications (296 citation statements) | References 11 publications
“…One way of tackling this problem, see e.g., [8], is to interpret the dynamics as a state-dependent switched system, formulate the latter as a hybrid automaton as defined in [15], and then use the invariance principle for hybrid automata to characterize its asymptotic convergence properties. However, this route is not valid in general because one of the key assumptions required by the invariance principle for hybrid automata is not satisfied by the primal-dual dynamics.…”
Section: Problem Statement
confidence: 99%
“…The first contribution of this paper is an example that illustrates this point. Our second contribution is an alternative proof strategy that arrives at the same convergence results as [8]. We consider an inequality-constrained concave optimization problem described by continuously differentiable functions with locally Lipschitz gradients.…”
Section: Introduction
confidence: 99%
“…Most of the available algorithms, such as the widely used distributed algorithms based on the subgradient [14] and projected subgradient [15] methods, are developed in discrete time, mainly because digital computers naturally execute algorithms in discrete steps. Recently, more and more distributed convex optimization algorithms have been explored in continuous time, since the continuous-time setting admits additional analysis techniques for proving convergence (e.g., the elegant Lyapunov argument in [4]) and lends itself to a differential-geometric viewpoint that is particularly powerful when the optimization is constrained (see, for example, [21]). …”
Section: Introduction
confidence: 99%
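The primal–dual gradient dynamics discussed in these excerpts can be illustrated on a toy scalar problem. The following is a minimal sketch (our own illustration with made-up problem data, not code from the paper or any citing work) that forward-Euler-integrates the dynamics x' = ∂L/∂x, λ' = [g(x)]⁺_λ for the concave problem: maximize f(x) = −(x − 2)², subject to g(x) = x − 1 ≤ 0, whose saddle point is (x*, λ*) = (1, 2):

```python
# Toy primal-dual gradient dynamics (illustrative only; problem data made up):
#   maximize f(x) = -(x - 2)^2   subject to g(x) = x - 1 <= 0
# Lagrangian: L(x, lam) = f(x) - lam * g(x).
# Stationarity -2(x - 2) - lam = 0 at x = 1 gives lam* = 2.
def simulate(steps=20000, dt=1e-3):
    x, lam = 0.0, 0.0
    for _ in range(steps):
        # primal ascent on L: x' = dL/dx = -2(x - 2) - lam
        dx = -2.0 * (x - 2.0) - lam
        # dual dynamics with positive projection: lam' = [g(x)]^+_lam,
        # i.e. lam' = g(x) unless lam = 0 and g(x) < 0 (which would push lam negative)
        g = x - 1.0
        dlam = g if (lam > 0.0 or g > 0.0) else 0.0
        x += dt * dx
        lam = max(lam + dt * dlam, 0.0)  # clamp against Euler overshoot
    return x, lam

x, lam = simulate()
print(round(x, 2), round(lam, 2))  # converges to the saddle point (1.0, 2.0)
```

Here the multiplier trajectory stays in the interior (λ* = 2 > 0) near the equilibrium, so the projection is inactive at the limit; problems where the multiplier converges to the boundary λ = 0 are exactly where the discontinuous (switched) nature of the dynamics, discussed in the excerpts above, matters for the convergence analysis.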