2017 11th Asian Control Conference (ASCC)
DOI: 10.1109/ascc.2017.8287157
A distributed optimization method with unknown cost function in a multi-agent system via randomized gradient-free method

Cited by 19 publications (24 citation statements); references 20 publications.
“…In this paper, we consider the online optimization problem where the gradient information is not available, but the value of the objective functions can be measured, and is only revealed after the decision is made at each time-step. Motivated by the work in [32] and our previous work in [31], we propose an online randomized gradient-free distributed projected gradient descent (oRGF-DPGD) algorithm, in which a randomized gradient-free oracle is built locally as a replacement of the local function derivative, followed by the update of the state variables at each time-step. With some standard assumptions on the graph connectivity and the local objective functions, we are able to prove that the dynamic regret is bounded by a small error term plus a product of a term depending on the variation of the optimal solution sequence and a sublinear function of the time duration.…”
Section: Introduction
confidence: 99%
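The oracle-plus-projection update described in this excerpt can be sketched as follows. This is a minimal toy illustration, not the paper's algorithm: it assumes a complete communication graph with uniform mixing weights, quadratic local costs, a box constraint, and Gaussian smoothing directions; all function and parameter names are ours.

```python
import numpy as np

def rgf_oracle(f, x, mu=1e-4, rng=None):
    """Randomized gradient-free oracle: a one-sided two-point estimate
    of grad f(x) along a random Gaussian direction."""
    rng = rng if rng is not None else np.random.default_rng(0)
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def orgf_dpgd_step(xs, fs, W, eta, proj, rng):
    """One synchronous round: consensus mixing with weight matrix W,
    then a projected descent step driven by the gradient-free oracle."""
    n = len(xs)
    mixed = [sum(W[i][j] * xs[j] for j in range(n)) for i in range(n)]
    return [proj(mixed[i] - eta * rgf_oracle(fs[i], mixed[i], rng=rng))
            for i in range(n)]

# Toy run: three agents with f_i(x) = (x - a_i)^2; the minimizer of the
# sum f_1 + f_2 + f_3 is the mean of the a_i, i.e. 1.0.
a = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
fs = [lambda x, a_i=a_i: float(np.sum((x - a_i) ** 2)) for a_i in a]
W = [[1.0 / 3.0] * 3 for _ in range(3)]   # complete graph, uniform weights
proj = lambda x: np.clip(x, -5.0, 5.0)    # projection onto the box [-5, 5]
rng = np.random.default_rng(0)
xs = [np.array([3.0]), np.array([-3.0]), np.array([0.0])]
tail = []                                  # late iterates, for time-averaging
for t in range(600):
    xs = orgf_dpgd_step(xs, fs, W, eta=0.05, proj=proj, rng=rng)
    if t >= 400:
        tail.append(float(np.mean([x[0] for x in xs])))
x_avg = sum(tail) / len(tail)              # hovers near the minimizer 1.0
```

The time-averaging at the end is only to smooth out the noise of the randomized oracle; with a diminishing step size the iterates themselves would settle near the minimizer.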
“…(b) We design a class of distributed optimization algorithms by combining conventional KW ideas with consensus-based algorithms, a technique different from other existing gradient/subgradient-free algorithms, cf. [23]-[27]. We prove the consensus of estimates and the achievement of the global minimization with probability one.…”
confidence: 99%
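For context, the "conventional KW ideas" referenced here are Kiefer-Wolfowitz finite-difference approximations: the gradient is estimated coordinate-wise from pairs of function evaluations at perturbed points. A minimal sketch of such an estimator (our illustration, not the cited paper's algorithm):

```python
import numpy as np

def kw_gradient(f, x, c=1e-4):
    """Kiefer-Wolfowitz-style estimate: central finite differences,
    one pair of function evaluations per coordinate."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = c
        g[i] = (f(x + e) - f(x - e)) / (2.0 * c)
    return g

# On a quadratic f(x) = ||x||^2 the central difference is exact: grad = 2x.
x = np.array([1.0, -2.0, 0.5])
g = kw_gradient(lambda v: float(v @ v), x)
```

Note the cost: 2d function evaluations per estimate in dimension d, which is what motivates the random-direction two-point schemes discussed in the other excerpts.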
“…medicine [51], and earth sciences [52]. Recent research on gradient-free schemes has been reported in [53]-[59] based on extremum seeking, and in [60]-[70]. This type of method is, in general, a continuous-time control-based approach, and usually presumes some smoothness in the cost function. On the other hand, the smoothing-based methods estimate the gradient of the cost function based on two point values.…”
Section: Literature Review
confidence: 99%
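The two-point smoothing idea mentioned at the end of this excerpt can be illustrated directly: for a Gaussian direction u, the quantity ((f(x + mu*u) - f(x - mu*u)) / (2*mu)) * u has expectation close to grad f(x), so averaging many such estimates recovers the gradient. A sketch under these assumptions (function and parameter names are ours):

```python
import numpy as np

def two_point_estimate(f, x, mu, u):
    """Smoothing-based gradient estimate built from only two function
    values, taken along the random direction u."""
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

f = lambda v: float(v @ v)            # f(x) = ||x||^2, so grad f(x) = 2x
x = np.array([1.0, -2.0, 0.5])
rng = np.random.default_rng(1)
est = np.mean([two_point_estimate(f, x, 1e-4, rng.standard_normal(3))
               for _ in range(20000)], axis=0)
# est approximates grad f(x) = 2x up to Monte Carlo error
```

Unlike the coordinate-wise KW scheme, each estimate here costs two function evaluations regardless of dimension, at the price of higher variance per sample.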
“…As reviewed in Chapter 1, recent studies on this topic have been reported in [60]-[67].…”
Section: Introduction
confidence: 99%