Continuous-Time Discounted Mirror Descent Dynamics in Monotone Concave Games
2021 | DOI: 10.1109/tac.2020.3045094

Cited by 27 publications (47 citation statements) | References 30 publications
“…Moreover, g_i in the coupled constraints need not be affine, which is more general than the constraints in previous works [10], [28]. Also, the problem setting does not require strong or strict convexity of either the cost functions f_i or the constraint functions g_i [10], [27], and the condition imposed on the selection of the generating function φ_i has also been widely used [14], [19], [22].…”
Section: Formulation and Algorithm (mentioning)
confidence: 99%
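For orientation, the setting this statement refers to can be sketched in the quote's own notation (costs f_i, coupled constraints built from the g_i, generating functions φ_i for the mirror maps). The aggregate form of the coupled constraint below is a hedged reconstruction of the typical setup, not the cited paper's verbatim formulation:

```latex
% Hedged sketch: each player i minimizes its own cost in x_i, subject
% to a shared (coupled) constraint built from the g_i, which the quote
% stresses need not be affine.
\[
\begin{aligned}
  &\min_{x_i \in X_i} \; f_i(x_i, x_{-i}) && \text{for each player } i,\\
  &\;\text{s.t.}\; g(x) \le 0, \quad g(x) := \textstyle\sum_i g_i(x_i) && \text{(not necessarily affine)} .
\end{aligned}
\]
```

The generating functions φ_i enter separately, as the regularizers inducing each player's mirror map.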
“…In recent years, continuous-time MD-based algorithms have also attracted much attention. For example, [14] proposed an accelerated continuous-time MD algorithm; afterward, [21] presented continuous-time stochastic MD for strongly convex functions, while [22] proposed a discounted continuous-time MD dynamic that approximates the exact solution. In the distributed setting, although [19] presented a distributed MD dynamic with integral feedback, the result merely achieved optimal consensus, and some of the variables turn out to be unbounded.…”
Section: Introduction (mentioning)
confidence: 99%
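To make the "discounted" qualifier concrete, here is a minimal Euler-discretized sketch of a discounted continuous-time MD dynamic of the kind attributed to [22]. The softmax mirror map, the rate gamma, and the example payoff are illustrative assumptions, not the cited paper's exact construction:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax; the mirror map induced by the
    entropy regularizer on the probability simplex."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def discounted_md(payoff_grad, z0, gamma=0.05, dt=1e-2, steps=20_000):
    """Euler discretization of the discounted MD dynamic
        z' = payoff_grad(x) - gamma * z,   x = softmax(z).
    Without the -gamma*z term this is the classical continuous-time MD
    dynamic; the discount damps the dual state z, and the rest point is
    a perturbed equilibrium that approaches the exact solution as
    gamma -> 0, matching the quoted description.
    """
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        x = softmax(z)
        z = z + dt * (payoff_grad(x) - gamma * z)
    return softmax(z)

# Example: a single player maximizing -0.5*||x - p||^2 over the simplex,
# whose payoff gradient is p - x.
p = np.array([0.2, 0.5, 0.3])
x_star = discounted_md(lambda x: p - x, z0=np.zeros(3))
# x_star lies near p, with a gamma-dependent bias toward the simplex center.
```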
“…The term softmax itself was first introduced by Bridle in the context of neural networks, where it is usually employed as an activation function to normalise data [52]. In computer science, applications of softmax are varied: classification methods (again, softmax regression) for supervised and unsupervised learning [53]–[55], computer vision [56]–[58], reinforcement learning [59]–[61], and hardware design [62], just to name some current areas of application. Additionally, a considerable number of conference papers attests to the popularity of softmax and its proposed variants [63]–[67].…”
Section: PLOS ONE (mentioning)
confidence: 99%
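For reference, the map this passage keeps invoking is, in its standard form:

```latex
\[
\operatorname{softmax}(z)_i \;=\; \frac{e^{z_i}}{\sum_{j=1}^{n} e^{z_j}},
\qquad i = 1, \dots, n .
\]
```

It sends any real vector to a probability vector, which is why it doubles as a normalising activation in classifiers (Bridle's usage) and as the entropy-induced mirror map on the simplex in the MD dynamics discussed above.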
“…This work is also related to a large body of literature on the mirror descent (MD) algorithm [32], [33], and its variant dual averaging (DA), also known as lazy mirror descent [34], [35]. Both MD and DA have been extensively used in online convex optimization [34], [36], online learning for MDPs with changing rewards [17], regret minimization [7], and learning Nash equilibria (NE) in continuous games [37]–[40]. Although MD and DA algorithms share similarities in their analysis, DA algorithms are believed to be more robust in the presence of noise, while MD algorithms often provide better convergence rates [35].…”
(mentioning)
confidence: 99%
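The eager/lazy distinction this statement draws between MD and DA can be made concrete. Below is a minimal sketch of one step of each on the simplex with the entropy mirror map; the step size, the gradient stream, and the simplex domain are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: the entropy-induced mirror map."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def md_step(x, g, eta):
    """Mirror descent (exponentiated gradient) on the simplex: passes
    through the mirror map at every step ('eager')."""
    return softmax(np.log(x) - eta * g)

def da_step(z, g, eta):
    """Dual averaging ('lazy' mirror descent): accumulate gradients in
    the dual state z; map to the simplex only when a point is needed."""
    return z - eta * g

# One pass over a short gradient stream, assuming minimization.
x = np.full(3, 1/3)          # MD primal iterate
z = np.zeros(3)              # DA dual accumulator
for g in [np.array([1.0, 0.0, -1.0]), np.array([0.0, 1.0, 0.0])]:
    x = md_step(x, g, eta=0.1)
    z = da_step(z, g, eta=0.1)
x_da = softmax(z)            # DA's primal point, read off the dual sum
```

The only structural difference is where the mirror map is applied: MD re-enters the primal space each step, while DA keeps a running dual sum, which is the property the quote ties to DA's robustness to noise.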