2022 30th European Signal Processing Conference (EUSIPCO)
DOI: 10.23919/eusipco55093.2022.9909889
Flexible Formation Control Using Hausdorff Distance: A Multi-agent Reinforcement Learning Approach

Cited by 6 publications (2 citation statements)
References 26 publications
“…1) Formation reward: In order to compute the maximum individual movement distance required for agents to form an ideal topology, the HD [10] is leveraged and a formation reward $r_f^{(t)}$ can be obtained as…”
Section: B. Reward Design
confidence: 99%
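The excerpt elides the exact reward formula, but the stated idea is that the Hausdorff distance (HD) between the current agent positions and the ideal topology captures the worst-case individual movement needed to reach formation. Below is a minimal Python sketch of that idea; the function names, and the assumption that the reward is simply the negative HD, are illustrative rather than taken from the paper.

```python
import numpy as np

def hausdorff_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets A and B, each of shape (n, d)."""
    # Pairwise Euclidean distances between every point in A and every point in B.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    d_ab = D.min(axis=1).max()  # sup over A of the distance to the nearest point in B
    d_ba = D.min(axis=0).max()  # sup over B of the distance to the nearest point in A
    return max(d_ab, d_ba)

def formation_reward(positions: np.ndarray, ideal_topology: np.ndarray) -> float:
    """Formation reward r_f^(t): assumed here to be the negative HD between the
    agents' current positions and the ideal formation (the exact shaping in the
    cited paper is not given in the excerpt)."""
    return -hausdorff_distance(positions, ideal_topology)

# Example: three agents approaching an equilateral-triangle formation.
current = np.array([[0.0, 0.0], [1.2, 0.1], [0.4, 0.9]])
ideal = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
print(formation_reward(current, ideal))
```

Because the HD is defined on unordered point sets, this reward is invariant to which agent occupies which slot in the topology, which is what makes it compatible with flexible formation control without a centralized location assignment.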
“…In particular, instead of centralized location assignment, we adopt Hausdorff-Distance (HD) [10]-oriented multi-policy-distilled ConsMAC for adaptive formation, which is capable of adapting to changes in the number of agents. • We verified the effectiveness and superiority of our framework through extensive simulations in the multi-agent particle environment [11] and the quadcopter-physical-model-based UAV simulation environment [12].…”
Section: Introduction
confidence: 99%