2021
DOI: 10.1109/tcyb.2019.2956291
Distributed Heuristic Adaptive Neural Networks With Variance Reduction in Switching Graphs

Abstract: This article proposes a distributed adaptive training method for neural networks over switching communication graphs, aimed at problems involving massive or privacy-sensitive data. First, the stochastic variance reduced gradient (SVRG) is used for the training of neural networks. Then, the authors propose a heuristic adaptive consensus algorithm for distributed training, which adaptively adjusts the weighted connectivity matrix based on the performance of each agent over the communication graph. F…
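The abstract names two ingredients, SVRG updates and performance-weighted consensus mixing, without detail. The minimal Python sketch below illustrates how those two pieces might fit together. Everything in it is an assumption for illustration: a least-squares toy problem stands in for the neural network, the 1/loss weighting is a hypothetical stand-in for the paper's heuristic, and switching topologies (a time-varying mask on the mixing matrix) are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 agents sharing one ground-truth parameter, each holding a
# private shard of noisy linear data (a stand-in for local training sets).
n_agents, dim, n_local = 4, 5, 50
w_true = rng.normal(size=dim)
A = [rng.normal(size=(n_local, dim)) for _ in range(n_agents)]
b = [a @ w_true + 0.1 * rng.normal(size=n_local) for a in A]

def grad(i, w, idx=None):
    """Gradient of agent i's local least-squares loss; full if idx is None."""
    if idx is None:
        return A[i].T @ (A[i] @ w - b[i]) / n_local
    a = A[i][idx]
    return a * (a @ w - b[i][idx])

def loss(i, w):
    return 0.5 * np.mean((A[i] @ w - b[i]) ** 2)

W = np.full((n_agents, n_agents), 1.0 / n_agents)  # initial mixing matrix
w = [np.zeros(dim) for _ in range(n_agents)]
eta, epochs = 0.05, 30

for _ in range(epochs):
    # Consensus step: each agent mixes neighbours' parameters via W.
    w = [sum(W[i, j] * w[j] for j in range(n_agents)) for i in range(n_agents)]

    # Local SVRG epoch: full gradient at a snapshot, then variance-reduced
    # stochastic updates w <- w - eta * (g_k(w) - g_k(snapshot) + mu).
    for i in range(n_agents):
        snapshot = w[i].copy()
        mu = grad(i, snapshot)
        for _ in range(n_local):
            k = rng.integers(n_local)
            g = grad(i, w[i], k) - grad(i, snapshot, k) + mu
            w[i] = w[i] - eta * g

    # Heuristic weight adaptation (hypothetical): trust better-performing
    # agents more, then row-normalise so mixing stays an average.
    perf = np.array([1.0 / (loss(i, w[i]) + 1e-8) for i in range(n_agents)])
    W = np.tile(perf, (n_agents, 1))
    W = W / W.sum(axis=1, keepdims=True)

print("final mean local loss:", np.mean([loss(i, w[i]) for i in range(n_agents)]))
```

The row-stochastic normalisation keeps each consensus step a weighted average, which is the standard way such mixing matrices are constrained; the specific 1/loss heuristic is only a plausible guess at the paper's performance-based adjustment.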

Cited by 10 publications (12 citation statements)
References 30 publications
“…Distributed training for multiple GANs over networks has not been well‐examined in the existing literature, and this motivates us to develop a framework based on the recent advances in distributed Nash equilibrium seeking by formulating a zero‐sum game between two adversarial networks of discriminators and generators. It is distinct from the extant results on distributed learning as in References [12–14,18,29,30], where only cooperation among agents exists, while learning for GANs inherently shows its nature of competition. In other words, the existing works [12–14,18] are intrinsically distributed optimisation problems (see References [31–35]) whereas training for GANs in this study is a distributed Nash equilibrium seeking problem with two coalitions (see References [21,23,28,36,37]).…”
Section: Introduction
confidence: 60%
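The cooperation-versus-competition distinction this statement draws can be written schematically. The notation below (local losses f_i, local game payoffs V_i, coalition parameters θ_g and θ_d) is ours and purely illustrative, not taken from either paper:

```latex
% Cooperative distributed optimisation (the setting of [12-14,18]):
% all N agents jointly minimise a single sum of local losses.
\min_{x}\ \sum_{i=1}^{N} f_i(x)

% Two-coalition zero-sum game for distributed GAN training: the generator
% coalition minimises the very objective the discriminator coalition
% maximises, so competition is built into the problem itself.
\min_{\theta_g}\ \max_{\theta_d}\ \sum_{i=1}^{N} V_i(\theta_g, \theta_d)
```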
“…Therefore, we will adopt the full distributed learning networks without any central node to avoid possible failures and communication latency. Detailed advantages and comparisons of the two types of networks have been well documented in recent studies [4,6].…”
Section: Problem Formulation and Algorithm Development
confidence: 99%
“…Optimisation and learning over distributed networks have been widely studied in recent years, owing to their significant potentials in many biological, engineering, and social applications [1–6]. Several critical limitations of the centralised methods can be addressed by the distributed algorithms: first, communicational requirement is relieved as information exchanges are confined to adjacent neighbours; second, local datasets can be kept private and do not need to be revealed to remote fusion centres; third, computational burdens are distributed into a set of agents, where each of them only needs to process its local datasets.…”
Section: Introduction
confidence: 99%