2021
DOI: 10.1049/cth2.12127
Distributed gradient descent method with edge‐based event‐driven communication for non‐convex optimization

Abstract: This paper considers an event-driven distributed non-convex optimization algorithm for a multi-agent system, where each agent has a non-convex cost function. The goal of the multi-agent system is to minimize the global objective function, which is the sum of these local cost functions, in a distributed manner. To this end, each agent updates its own state by a consensus-based gradient descent algorithm. The local information exchange among neighboring agents is carried out with an event-triggered scheme to achiev…
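The abstract outlines a consensus-based gradient descent update paired with an edge-based event-triggered communication rule. As an illustrative sketch only (not the paper's exact algorithm), the code below assumes an undirected graph given as a `neighbors` dict, local gradient oracles `grad_f`, a step size `alpha`, a consensus gain `w`, and a trigger threshold `delta`; an agent retransmits its state on an edge only when it has drifted from the last value sent on that edge.

```python
import numpy as np

def event_triggered_dgd(grad_f, x0, neighbors, alpha=0.01, w=0.1,
                        delta=1e-3, iters=1000):
    """Sketch of distributed gradient descent with edge-based
    event-driven communication (illustrative, not the paper's scheme).

    grad_f    : list of local gradient functions, one per agent
    x0        : (n, d) array of initial agent states
    neighbors : dict mapping agent i -> list of neighbor indices
    """
    n = len(grad_f)
    x = x0.copy()
    # x_hat[i][j] is the last state agent i sent to neighbor j (per edge).
    x_hat = {i: {j: x[i].copy() for j in neighbors[i]} for i in range(n)}

    for _ in range(iters):
        # Event-triggered broadcast: send only when the local state has
        # drifted more than delta from the value last sent on that edge.
        for i in range(n):
            for j in neighbors[i]:
                if np.linalg.norm(x[i] - x_hat[i][j]) > delta:
                    x_hat[i][j] = x[i].copy()   # "transmit" over edge (i, j)

        # Consensus + gradient step using the latest received estimates.
        x_new = x.copy()
        for i in range(n):
            consensus = sum(x_hat[j][i] - x[i] for j in neighbors[i])
            x_new[i] = x[i] + w * consensus - alpha * grad_f[i](x[i])
        x = x_new
    return x
```

The point of the edge-based rule is that communication cost scales with how much each state actually changes, rather than with the iteration count.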

Cited by 16 publications (4 citation statements)
References 33 publications
“…By using this algorithm, it is relatively easy to obtain the optimal solution for the function that needs to be trained, thereby improving the accuracy of the model. The method can be divided into three categories, namely batch gradient descent, mini-batch gradient descent, and stochastic gradient descent [22].…”
Section: Introduction Of Related Algorithms and Model Establishment A...
confidence: 99%
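The three categories differ only in how much data enters each gradient evaluation. A minimal sketch of the distinction for a linear least-squares model (the function names and the `batch_size` convention are hypothetical): `batch_size=None` uses the full dataset (batch), a value such as 32 samples a mini-batch, and 1 reduces to stochastic gradient descent.

```python
import numpy as np

def gradient(w, X, y):
    """Gradient of mean squared error for a linear model (illustrative)."""
    return 2 * X.T @ (X @ w - y) / len(y)

def step(w, X, y, lr, batch_size=None):
    """One descent step; batch_size=None -> batch GD,
    1 -> stochastic GD, anything in between -> mini-batch GD."""
    if batch_size is None:
        Xb, yb = X, y                        # full dataset
    else:
        idx = np.random.choice(len(y), size=batch_size, replace=False)
        Xb, yb = X[idx], y[idx]              # random subset (or one sample)
    return w - lr * gradient(w, Xb, yb)
```

For example, `step(w, X, y, lr=0.01)` performs a batch step, `step(w, X, y, lr=0.01, batch_size=32)` a mini-batch step, and `batch_size=1` a stochastic step.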
“…On the other hand, the event-driven communication policy sends an estimated optimal solution to a neighboring provider only when the error of the estimated solution exceeds a certain threshold [28,29]. A distributed optimization algorithm based on event-driven communication has been investigated to reduce the number of communications between agents and effectively use network resources [30–36].…”
Section: Related Work
confidence: 99%
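The threshold rule paraphrased in this statement is a send-on-delta test: transmit the current estimate only if it deviates from the last transmitted value by more than a tolerance. Isolated as a small helper (the class and attribute names are hypothetical):

```python
import numpy as np

class EventTrigger:
    """Send-on-delta rule: transmit only when the current estimate
    deviates from the last transmitted one by more than `threshold`."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.last_sent = None

    def should_send(self, estimate):
        estimate = np.asarray(estimate, dtype=float)
        if (self.last_sent is None
                or np.linalg.norm(estimate - self.last_sent) > self.threshold):
            self.last_sent = estimate.copy()  # record the transmitted value
            return True
        return False  # neighbors keep using the previously sent estimate
```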
“…A quadratic optimization problem is considered in [23], and a continuous-time algorithm with event-triggered communication is proposed to solve it. In [24], nonconvex optimization problems are addressed by implementing event-triggered communication in the scheme of the distributed gradient descent method. The existing discrete-time algorithm from which we take inspiration is the so-called gradient tracking algorithm.…”
Section: Introduction
confidence: 99%
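The gradient tracking algorithm referred to here maintains, alongside the state, an auxiliary variable that tracks the network-wide average gradient via dynamic average consensus. A minimal sketch of its standard discrete-time form, assuming a doubly stochastic mixing matrix `W` (not taken verbatim from the cited works):

```python
import numpy as np

def gradient_tracking(grads, W, x0, alpha=0.01, iters=500):
    """Sketch of the discrete-time gradient tracking algorithm.

    grads : list of local gradient functions, one per agent
    W     : (n, n) doubly stochastic mixing matrix (assumed)
    x0    : (n, d) array of initial agent states
    """
    x = x0.copy()
    g = np.array([grads[i](x[i]) for i in range(len(grads))])
    y = g.copy()                      # y tracks the average gradient
    for _ in range(iters):
        x = W @ x - alpha * y         # consensus + descent along tracker
        g_new = np.array([grads[i](x[i]) for i in range(len(grads))])
        y = W @ y + g_new - g         # dynamic average consensus update
        g = g_new
    return x
```

The correction term `g_new - g` is what lets `y` converge to the average of the local gradients, which is why gradient tracking tolerates constant step sizes where plain distributed gradient descent does not.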