2022
DOI: 10.1109/tnnls.2021.3071058

Consensus-Based Cooperative Algorithms for Training Over Distributed Data Sets Using Stochastic Gradients

Abstract: In this paper, distributed algorithms are proposed for training a group of neural networks with private datasets. Stochastic gradients are utilised in order to eliminate the requirement for true gradients. To obtain a universal model of the distributed neural networks trained using local datasets only, consensus tools are introduced to drive the model towards the optimum. Most of the existing works employ diminishing learning rates, which are often slow and impracticable for online learning, while constant learning rates…
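As a rough sketch of the approach the abstract describes, the snippet below alternates a consensus (weighted neighbour-averaging) step with a local stochastic-gradient step that uses a constant learning rate. The quadratic per-agent losses, the ring mixing matrix, and the name consensus_sgd_step are illustrative assumptions for this example, not the paper's actual algorithm, network, or experiments.

```python
import numpy as np

def consensus_sgd_step(params, W, grads, lr=0.05):
    """One round of consensus-based distributed SGD (illustrative sketch).

    params : (n_agents, dim) array; each row is one agent's model parameters.
    W      : (n_agents, n_agents) doubly stochastic mixing matrix; W[i, j] > 0
             only if agents i and j communicate.
    grads  : (n_agents, dim) stochastic gradients from each agent's private data.
    lr     : constant learning rate (as opposed to a diminishing schedule).
    """
    mixed = W @ params           # consensus step: average with neighbours
    return mixed - lr * grads    # local stochastic-gradient correction

# Toy usage: 4 agents on a ring, private losses f_i(x) = 0.5 * ||x - t_i||^2,
# whose global optimum is the mean of the private targets t_i.
rng = np.random.default_rng(0)
n, dim = 4, 3
targets = rng.normal(size=(n, dim))            # stand-in for private datasets
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])        # doubly stochastic ring weights
x = rng.normal(size=(n, dim))                  # local models

for _ in range(500):
    noise = 0.1 * rng.normal(size=x.shape)     # models stochastic-gradient noise
    grads = (x - targets) + noise
    x = consensus_sgd_step(x, W, grads, lr=0.05)

print("disagreement across agents:", np.max(np.abs(x - x.mean(axis=0))))
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - targets.mean(axis=0)))
```

With a constant step size and noisy gradients, the local models settle into a small neighbourhood of the common optimum rather than converging to it exactly, which is the trade-off the abstract alludes to when contrasting constant and diminishing learning rates.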

Cited by 17 publications (7 citation statements)
References 46 publications
“…By summing (26) over i from 0 to n − 1, we obtain an upper bound of the forward deviation V_t by Lemma IV.1 that…”
Section: If the Function F
Citation type: mentioning; confidence: 99%
“…The potential bottleneck of the star-shaped network lies in the communication traffic jam at the central server, and the performance will be significantly degraded when the network bandwidth is low. To consider a more general distributed network topology without a central server, many distributed stochastic gradient algorithms have been studied [21]-[26] for convex finite-sum optimization. Considering the network communication, some works have studied synchronous and asynchronous distributed stochastic algorithms with diminishing step-sizes over multi-agent networks [22]-[24].…”
Section: Introduction
Citation type: mentioning; confidence: 99%
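The excerpt above contrasts a star-shaped (server-centred) network with a general topology that has no central server. Purely to illustrate how such a serverless topology can be encoded for consensus updates, the sketch below builds a symmetric, doubly stochastic mixing matrix with the Metropolis-Hastings rule; both the rule and the path-graph example are assumptions made for this sketch, not the specific weighting scheme used in the cited works.

```python
import numpy as np

def metropolis_weights(adj):
    """Build a symmetric, doubly stochastic mixing matrix from an adjacency
    matrix, so consensus needs only neighbour-to-neighbour links and no
    central server (Metropolis-Hastings rule).

    adj : (n, n) symmetric 0/1 adjacency matrix of the communication graph.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()   # self-weight absorbs the remainder
    return W

# Example: a 5-node path graph, i.e. a topology with no central node.
adj = np.zeros((5, 5), dtype=int)
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1

W = metropolis_weights(adj)
print(W.sum(axis=0))  # every column sums to 1, confirming double stochasticity
```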
“…Similarly, techniques have also been investigated to process data in parallel, with multiple local models committing updates simultaneously. 30 Consensus-based parallel learning algorithms, in which multiple training representations are learned in parallel and information about the locally trained parameters is then exchanged to build consensus, are examples of such applications. 31 As such, both data-driven domains and agent-based domains use parallel and incremental architectures with the shared aim of distributing the learning process to overcome the limitations associated with centralised learning.…”
Section: Abstraction Learning Architectures
Citation type: mentioning; confidence: 99%
“…The stochastic gradient algorithm is widely applied in the area of adaptive control, and also has deep connections with the stochastic gradient descent algorithm and its variants, which are widely used to deal with optimization problems in machine learning [23]. With the development of sensor networks, the distributed implementations of stochastic gradient algorithms have attracted much attention from researchers (cf. [24]-[29]). For example, George et al. in [28] proposed a distributed stochastic gradient descent algorithm for solving non-convex optimization problems with applications in distributed supervised learning.…”
Section: Introduction
Citation type: mentioning; confidence: 99%