2022
DOI: 10.1002/rnc.6266

A gradient‐free distributed optimization method for convex sum of nonconvex cost functions

Abstract: This article considers a special class of distributed optimization problems in which the summation of the agents' local cost functions (i.e., the global cost function) is convex, but each individual local cost function may be nonconvex. Unlike most distributed optimization algorithms, which rely on gradient information, the considered problem is allowed to be nonsmooth, and the gradient information is unknown to the agents. To solve the problem, a Gaussian-smoothing technique is introduced and a gradient-free method is proposed. We prove th…
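The Gaussian-smoothing technique mentioned in the abstract is commonly realized as a two-point zeroth-order gradient estimator: the gradient of the smoothed surrogate f_mu(x) = E_u[f(x + mu*u)], u ~ N(0, I), can be estimated from function values alone. The following single-agent Python sketch is illustrative only; the step size, smoothing parameter mu, and toy objective are assumptions and not the paper's distributed algorithm:

```python
import numpy as np

def gaussian_smoothed_grad(f, x, mu=1e-3, rng=None):
    """Two-point zeroth-order estimate of the gradient of f at x.

    Uses only function evaluations: with u ~ N(0, I), the vector
    (f(x + mu*u) - f(x)) / mu * u is an unbiased estimate of the
    gradient of the Gaussian-smoothed surrogate f_mu.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

# Toy usage: minimize f(x) = ||x||^2 without ever calling its gradient.
f = lambda x: float(np.dot(x, x))
x = np.ones(5)
rng = np.random.default_rng(0)
for _ in range(2000):
    g = gaussian_smoothed_grad(f, x, mu=1e-3, rng=rng)
    x -= 0.05 * g  # plain (non-distributed) gradient-free descent step
print(f(x))
```

In the distributed setting of the paper, each agent would apply such an estimate to its own (possibly nonconvex) local cost while exchanging iterates with neighbors; the sketch above shows only the smoothing idea.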

Cited by 4 publications (1 citation statement)
References 47 publications
“…We also refer to the papers 9,10,16,36,41 regarding the delay issue in communication. We also refer to the distributed optimization problem on directed graphs 24,45 , the gradient-free distributed optimization algorithms 28,40 , and the fixed-time optimization problems 19 .…”
Section: Introduction
confidence: 99%