2020 American Control Conference (ACC)
DOI: 10.23919/acc45564.2020.9147395
Distributed Non-convex Optimization of Multi-agent Systems Using Boosting Functions to Escape Local Optima

Abstract: We address the problem of multiple local optima arising in cooperative multi-agent optimization problems with non-convex objective functions. We propose a systematic approach to escape these local optima using the concept of boosting functions. The essence of the boosting function approach is to temporarily transform a gradient at a local optimum into a "boosted" non-zero gradient. Extending a prior centralized optimization approach, we develop a distributed framework for the use of boosted gradients (called a di…
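The core idea in the abstract can be sketched in a few lines: run gradient ascent and, whenever the gradient (nearly) vanishes at a local optimum, temporarily replace it with a non-zero "boosted" direction so the iterate can keep moving. The objective, the random `boosting_function` helper, and the step size below are illustrative assumptions only; they are not the coverage objective, the boosting functions, or the distributed scheme developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible

def objective(x):
    # Hypothetical non-convex objective with several local maxima
    # (illustration only, not the objective used in the paper).
    return np.sin(3.0 * x[0]) * np.cos(3.0 * x[1]) - 0.1 * np.dot(x, x)

def gradient(x, h=1e-6):
    # Central-difference gradient; an analytic gradient would normally be used.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (objective(x + e) - objective(x - e)) / (2.0 * h)
    return g

def boosting_function(x):
    # Placeholder "boosting function": returns a non-zero direction to use in
    # place of a vanishing gradient. The paper derives problem-specific
    # boosting functions; this random unit vector is only a stand-in.
    d = rng.standard_normal(x.shape)
    return d / np.linalg.norm(d)

def boosted_gradient_ascent(x0, step=0.05, tol=1e-3, max_iters=500):
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), objective(x)
    for _ in range(max_iters):
        g = gradient(x)
        if np.linalg.norm(g) < tol:
            # Near a local optimum: temporarily replace the vanishing gradient
            # with a boosted non-zero gradient to escape.
            g = boosting_function(x)
        x = x + step * g
        f = objective(x)
        if f > best_f:                      # remember the best point visited
            best_x, best_f = x.copy(), f
    return best_x, best_f

print(boosted_gradient_ascent([0.5, -0.5]))
```

In this sketch the boosted direction is random; the paper's contribution is choosing boosting functions systematically and applying them in a distributed, per-agent fashion.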

Cited by 2 publications (5 citation statements), published 2020–2021.
References 36 publications.
“…In (25), sgn(·) represents the signum function and the subscript x is used to represent the x-component of a two-dimensional vector. The second term in (25) is due to the linear-shaped boundary segments of the sensing region V(s_i) formed by the obstacle vertices v_j^i ∈ V(s_i).…”
Section: A Gradient-Based Algorithm for Heterogeneous Multi-Agent Cov…
confidence: 99%
“…Algorithm 2 is a Projected Gradient Ascent (PGA) algorithm for solving (9), which utilizes the gradients derived in (25) and (26). As seen in Algorithm 2, a gradient ascent update is first implemented in (29), where η…”
Section: A Gradient-Based Algorithm for Heterogeneous Multi-Agent Cov…
confidence: 99%
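For context, a generic projected gradient ascent step of the kind described in this citation statement (ascend along the gradient, then project back onto the feasible set) can be sketched as follows. The box feasible set, the toy objective, and the fixed step size eta are assumptions for illustration; the citing paper's actual constraint set, gradients (25)–(26), and update (29) are not reproduced here.

```python
import numpy as np

def project_box(x, lower, upper):
    # Euclidean projection onto a box [lower, upper]; the citing paper's
    # feasible set may differ, this choice is only for illustration.
    return np.clip(x, lower, upper)

def projected_gradient_ascent(grad, x0, lower, upper, eta=0.1, iters=200):
    # Generic PGA: take a gradient ascent step, then project the iterate
    # back onto the feasible set.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project_box(x + eta * grad(x), lower, upper)
    return x

# Toy example: maximize -||x - c||^2 over the box [0, 1]^2.
c = np.array([1.5, 0.25])
grad = lambda x: -2.0 * (x - c)
print(projected_gradient_ascent(grad, x0=[0.0, 0.0], lower=0.0, upper=1.0))
```

In the toy run the iterate converges to the box projection of c, here approximately [1.0, 0.25].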