2013
DOI: 10.1002/oca.2100
Linear quadratic regulator control of multi‐agent systems

Abstract: This paper considers a collection of agents performing a shared task making use of relative information communicated over an information network. The designed suboptimal controllers are state feedback and static output feedback, which are guaranteed to provide a certain level of performance in terms of a linear quadratic regulator (LQR) cost. Because of the convexity of the LQR performance region, the suboptimal LQR control problem with state feedback is reduced to the solution of two inequalities, with…
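The abstract's two inequalities involve the extreme eigenvalues of the graph Laplacian together with a per-agent LQR design. A minimal sketch of computing these ingredients, assuming an illustrative 4-agent path graph and a double-integrator agent model (neither is taken from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative 4-agent undirected path graph (an assumption, not from the paper).
A_adj = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]])
L = np.diag(A_adj.sum(axis=1)) - A_adj  # graph Laplacian: degree matrix minus adjacency

# Sorted Laplacian eigenvalues; for a connected graph the smallest is 0, and
# suboptimal LQR conditions of the kind the abstract describes involve the
# smallest nonzero and the largest eigenvalue.
eigs = np.sort(np.linalg.eigvalsh(L))
lam_min, lam_max = eigs[1], eigs[-1]

# Standard single-agent LQR gain -- here an assumed double-integrator model
# with identity weights.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.eye(1)
P = solve_continuous_are(A, B, Q, R)  # solves A'P + PA - P B R^{-1} B' P + Q = 0
K = np.linalg.inv(R) @ B.T @ P        # K = R^{-1} B' P

print(lam_min, lam_max)  # eigenvalue bounds entering the two inequalities
print(K)                 # per-agent state-feedback gain
```

For this toy graph the nonzero eigenvalue bounds are 2 − √2 and 2 + √2, and the double-integrator ARE has the closed-form gain K = [1, √3]; how the eigenvalue bounds scale the gain in the actual suboptimal design is specified by the paper's inequalities, not by this sketch.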

Cited by 12 publications (9 citation statements)
References 25 publications
“…In the work of Zhang et al, the classical linear quadratic regulator (LQR) design method was employed in the cooperative control of linear multi-agent systems on directed graphs, and an LQR-based control gain was proposed to make the agents reach synchronization. In the work of Zhang et al, the optimal LQR problem with state feedback is reduced to the solution of two inequalities, which are connected with the minimum and maximum eigenvalues of the Laplacian matrix. In the work of Dong, centralized optimal control of multi-agent systems on all-connected graphs was proposed, and distributed optimal control on general directed graphs was designed to approximate the centralized optimal control, so that the multi-agent system can reach near-optimal performance.…”
Section: Introduction
confidence: 99%
“…In [13][14][15], the objectives were to find the optimal estimated states for first-order multi-agent systems, but the control energy consumption was not considered. Differing from [13][14][15], [16][17][18][19] concentrated on optimal protocols subject to the minimisation of certain cost functions. The optimal consensus problems were investigated in [16], but the analysis approach applies only to a special case of first-order multi-agent systems.…”
Section: Introduction
confidence: 99%
“…The linear matrix inequality (LMI) criteria in [18] are dependent on the number of agents. Zhang et al [19] dealt with suboptimal problems for multi-agent systems, where only the stability problem was discussed. In summary, it is very difficult to achieve optimal consensus for multi-agent systems.…”
Section: Introduction
confidence: 99%
“…In [20], by using linear matrix inequality (LMI) tools, the idea of decomposing the state vector into two components was adopted for solving the optimal consensus problem, where the dimensions of all the variables of the LMI criteria are dependent on the number of agents. Zhang et al [21] dealt with suboptimal linear quadratic regulator control for multi-agent systems in terms of LMIs, where only the stability problem was discussed; this can be regarded as a special consensus problem in which the consensus function is equal to zero.…”
Section: Introduction
confidence: 99%