2022
DOI: 10.1109/tac.2021.3075669
Distributed Randomized Gradient-Free Mirror Descent Algorithm for Constrained Optimization

Cited by 38 publications (19 citation statements)
References 25 publications
“…Finally, the algorithm can asymptotically converge to an optimal solution without quantization error. In this paper, we significantly improve our previous works [12,14,13] on distributed mirror descent methods in several aspects. To the best of our knowledge, this is the first work to propose an adaptive quantization method to address limited communication channels in mirror descent algorithms while simultaneously taking delayed subgradient information into account in the study of distributed mirror descent methods.…”
Section: Introduction
Confidence: 97%
“…Recently, distributed optimization algorithms for network systems have been studied widely (see [1]–[14]). In the distributed optimization problem, there is no central coordination between different agents.…”
Section: Introduction
Confidence: 99%