2021
DOI: 10.48550/arxiv.2104.13868
Preprint
Communication Topology Co-Design in Graph Recurrent Neural Network Based Distributed Control

Abstract: When designing large-scale distributed controllers, the information sharing constraints between sub-controllers, as defined by a communication topology interconnecting them, are as important as the controller itself. Controllers implemented using dense topologies typically outperform those implemented using sparse topologies, but it is also desirable to minimize the cost of controller deployment. Motivated by the above, we introduce a compact but expressive graph recurrent neural network (GRNN) parameterization…
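To make the setup in the abstract concrete, below is a minimal sketch of a GRNN-style distributed controller layer in which a binary support matrix encodes which sub-controllers may exchange information. This is an illustration only: the class name GRNNController, the dimensions, and the single-hop neighbor aggregation are assumptions, not the paper's actual parameterization.

```python
# Minimal sketch (assumed, not from the paper): a GRNN controller layer whose
# communication topology is encoded by a binary support matrix S.
import torch
import torch.nn as nn

class GRNNController(nn.Module):
    def __init__(self, n_agents: int, state_dim: int, hidden_dim: int,
                 act_dim: int, support: torch.Tensor):
        super().__init__()
        # support[i, j] = 1 iff sub-controller i may receive information from j
        # (including i == j); zero entries enforce the sparsity of the topology.
        self.register_buffer("support", support.float())
        self.W_in = nn.Parameter(0.1 * torch.randn(state_dim, hidden_dim))
        self.W_rec = nn.Parameter(0.1 * torch.randn(hidden_dim, hidden_dim))
        self.W_out = nn.Parameter(0.1 * torch.randn(hidden_dim, act_dim))

    def forward(self, x: torch.Tensor, h: torch.Tensor):
        # x: (n_agents, state_dim) local measurements, h: (n_agents, hidden_dim)
        # One hop of neighbor aggregation: only neighbors allowed by `support`
        # contribute to the next hidden state, so the recursion is distributed.
        neighbor_h = self.support @ h
        h_next = torch.tanh(x @ self.W_in + neighbor_h @ self.W_rec)
        u = h_next @ self.W_out          # local control actions
        return u, h_next

# Example: 4 agents on a line graph (each agent talks to its neighbors and itself).
S = torch.tensor([[1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [0, 1, 1, 1],
                  [0, 0, 1, 1]])
ctrl = GRNNController(n_agents=4, state_dim=3, hidden_dim=8, act_dim=2, support=S)
u, h = ctrl(torch.randn(4, 3), torch.zeros(4, 8))
```

In this sketch, sparsifying `support` removes communication links (and hence deployment cost) at the price of giving each sub-controller less information per step, which is the performance/cost trade-off the topology co-design in the abstract targets.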

Cited by 3 publications (5 citation statements)
References 11 publications
“…In this subsection, we consider utilizing sparse weight matrices to enable distributed implementations of H-DNNs. Sparsity structures in neural networks can also be used to encode prior information on relations among elements when learning graph data [23] or to perform distributed control tasks [24], [25].…”
Section: Distributed Learning Through H-DNNs
Citation type: mentioning (confidence: 99%)
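As an illustration of the sparse-weight-matrix idea in the statement above, here is a minimal sketch in which an agent-level adjacency pattern is lifted to a feature-level mask on a linear layer, so that one agent's output cannot depend on the features of agents it is not connected to. The variable names and the adjacency pattern are hypothetical and not taken from the cited works.

```python
# Illustrative sketch (assumed): enforcing a block-sparsity pattern on a
# layer's weight matrix so that agent i's output only depends on the states
# of agents it is allowed to communicate with.
import torch

n_agents, d = 3, 2                      # 3 agents, 2 features each
# adjacency[i, j] = 1 iff agent i may read agent j's features
adjacency = torch.tensor([[1, 1, 0],
                          [1, 1, 1],
                          [0, 1, 1]])
# Lift the agent-level pattern to a feature-level mask via a Kronecker product.
mask = torch.kron(adjacency.float(), torch.ones(d, d))

W = torch.randn(n_agents * d, n_agents * d, requires_grad=True)
x = torch.randn(n_agents * d)

y = (W * mask) @ x                      # masked layer: respects the topology
# Agent 0's output (the first d entries of y) is unaffected by agent 2's
# features, because the corresponding weight block is zeroed out by the mask.
```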
“…It is therefore important to develop large-scale DNN models for which the training can be distributed between physically separated end devices while guaranteeing satisfactory system-wide predictions. Furthermore, distributed DNN architectures enhance data privacy and fault tolerance [21], facilitate the learning from graph inputs [23] and enable the execution of distributed control tasks [24], [25].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…Unfortunately, these assumptions do not hold for the vast majority of real-world large-scale systems. This fact motivates parametrizing the functions χ_i(·), π_i(·) as highly nonlinear deep neural networks, even when the dynamics (1) are linear (Gama and Sojoudi, 2021; Yang and Matni, 2021).…”
Section: Problem Statement
Citation type: mentioning (confidence: 99%)
“…These limitations motivate going beyond linear control and suggest parametrizing highly nonlinear distributed policies through Deep Neural Networks (DNNs). Specifically, the recent works Khan et al. (2020); Gama and Sojoudi (2021); Yang and Matni (2021) have focused on training Graph Neural Networks (GNNs) that parametrize static and dynamical distributed control policies. These methods achieve remarkable performance in applications such as vehicle flocking and formation flying.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…Related work: DNNs have shown promise in designing both static and dynamic distributed control policies for large-scale systems. Notably, Graph Neural Networks (GNNs) have achieved impressive performance in applications like vehicle flocking and formation flying [24]–[27] thanks to their inherent scalable structure. However, guaranteeing stability with general GNNs remains challenging, often requiring restrictive assumptions like linear, open-loop stable system dynamics or sufficiently small Lipschitz constants [27].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)