2021
DOI: 10.1002/asjc.2642
Optimal synchronization control for heterogeneous multi‐agent systems: Online adaptive learning solutions

Abstract: This paper presents an online adaptive learning solution to the optimal synchronization control problem of heterogeneous multi-agent systems via a novel distributed policy iteration approach. In the leader-follower setting considered, the followers have heterogeneous dynamics and the leader is subject to disturbance. To make each follower's output synchronize with the leader's output, a synchronization control protocol is proposed, together with stability conditions for selecting the feedback gains. Next, with a …
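The abstract is truncated, so the paper's distributed, online, model-free algorithm is not spelled out here. As a hedged illustration of the policy-iteration idea it builds on, the sketch below runs a standard Kleinman-style policy iteration for the continuous-time LQR problem: policy evaluation solves a Lyapunov equation, policy improvement updates the feedback gain. This is a minimal, model-based sketch, not the paper's method; the function name and all parameters are assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def policy_iteration_lqr(A, B, Q, R, K0, n_iter=25):
    """Kleinman-style policy iteration for the continuous-time LQR problem.

    Alternates policy evaluation (a Lyapunov equation) with policy
    improvement. K0 must stabilize A - B @ K0; the iterates converge to
    the optimal gain K* = R^{-1} B^T P* of the algebraic Riccati equation.
    Control convention: u = -K x."""
    K = np.asarray(K0, dtype=float)
    for _ in range(n_iter):
        Ak = A - B @ K
        # Policy evaluation: solve  Ak^T P + P Ak + Q + K^T R K = 0
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        # Policy improvement
        K = np.linalg.solve(R, B.T @ P)
    return K, P

if __name__ == "__main__":
    # Hypothetical two-state example; A is already Hurwitz, so K0 = 0 is stabilizing.
    A = np.array([[0.0, 1.0], [-1.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.eye(1)
    K0 = np.zeros((1, 2))
    K, P = policy_iteration_lqr(A, B, Q, R, K0)
    print("converged gain:", K)
```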

Cited by 6 publications (1 citation statement)
References 29 publications
“…Reinforcement learning (RL) technique [26], inspired by natural learning mechanisms, acquires learning information by receiving rewards (or feedback) for behaviors from the environment and is capable of developing the optimal control law without requiring knowledge of system dynamics [27]. Some RL-based results have been put forward for output synchronization of heterogeneous MASs [28][29][30][31][32][33][34]. Based on the Q-learning technique, model-free optimal control protocols with distributed observer were developed to achieve output synchronization for heterogeneous continuous-time (CT) [31] and discrete-time (DT) [29] MASs.…”
Section: Introduction (mentioning)
confidence: 99%
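The citation statement above refers to Q-learning-based, model-free optimal control protocols for output synchronization of heterogeneous MASs [29, 31]. As a hedged, single-agent illustration of the underlying mechanism, the sketch below runs batch least-squares Q-learning (policy iteration on a quadratic Q-function) for a discrete-time LQR problem; the plant matrices are used only to simulate data, never by the learner. This is not the cited papers' distributed-observer protocol, and the function name and all parameters are hypothetical.

```python
import numpy as np

def qlearn_lqr(A, B, Q, R, K0, n_iter=20, n_samples=200, noise=0.1, seed=0):
    """Batch least-squares Q-learning for discrete-time LQR.

    Fits the quadratic Q-function kernel H from simulated transitions via the
    Bellman equation, then improves the policy u = -K x. K0 must be
    stabilizing; A, B appear only in the simulation step."""
    rng = np.random.default_rng(seed)
    n, m = B.shape
    K = np.asarray(K0, dtype=float)
    nz = n + m
    p = nz * (nz + 1) // 2                 # distinct entries of symmetric H

    def svec(z):
        # Regressor for z^T H z with one parameter per upper-triangular entry;
        # off-diagonal entries are counted twice.
        M = np.outer(z, z)
        i, j = np.triu_indices(nz)
        return np.where(i == j, 1.0, 2.0) * M[i, j]

    for _ in range(n_iter):
        Phi, y = np.zeros((n_samples, p)), np.zeros(n_samples)
        x = rng.standard_normal(n)
        for k in range(n_samples):
            u = -K @ x + noise * rng.standard_normal(m)   # exploration noise
            x_next = A @ x + B @ u
            u_next = -K @ x_next                           # on-policy next action
            z = np.concatenate([x, u])
            z_next = np.concatenate([x_next, u_next])
            # Bellman equation: z^T H z = stage cost + z_next^T H z_next
            Phi[k] = svec(z) - svec(z_next)
            y[k] = x @ Q @ x + u @ R @ u
            x = x_next if np.linalg.norm(x_next) < 1e3 else rng.standard_normal(n)
        h, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        H = np.zeros((nz, nz))
        H[np.triu_indices(nz)] = h
        H = H + H.T - np.diag(np.diag(H))                  # symmetric kernel
        Huu, Hux = H[n:, n:], H[n:, :n]
        K = np.linalg.solve(Huu, Hux)                      # policy improvement
    return K

if __name__ == "__main__":
    # Hypothetical double-integrator-like example with an assumed stabilizing K0.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    Q, R = np.eye(2), np.eye(1)
    K0 = np.array([[1.0, 1.0]])
    print("learned gain:", qlearn_lqr(A, B, Q, R, K0))
```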