2019
DOI: 10.1109/tsp.2019.2894803

Asynchronous Saddle Point Algorithm for Stochastic Optimization in Heterogeneous Networks

Abstract: We consider expected risk minimization in multi-agent systems comprised of distinct subsets of agents operating without a common time-scale. Each individual in the network is charged with minimizing the global objective function, which is the network-wide average of each agent's statistical average loss function. Since agents are not assumed to observe data from identical distributions, the hypothesis that all agents seek a common action is violated, and thus the hypothesis upon which consensus co…
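The abstract (and the citation statements below, which mention proximity constraints handled through Lagrangians and saddle-point algorithms) suggests a formulation that can be sketched as follows. Every symbol here is an assumption introduced for illustration (N agents, decision x_i, statistical average loss f_i, proximity functions h_ij with tolerances γ_ij); the paper's exact notation and constraint form are not shown on this page:

\min_{\{x_i\}} \ \frac{1}{N}\sum_{i=1}^{N} f_i(x_i), \qquad f_i(x_i) := \mathbb{E}_{\theta_i}\big[\ell_i(x_i,\theta_i)\big], \qquad \text{s.t. } h_{ij}(x_i,x_j) \le \gamma_{ij} \ \ \forall (i,j)\in\mathcal{E},

\mathcal{L}(x,\lambda) \;=\; \frac{1}{N}\sum_{i=1}^{N} f_i(x_i) \;+\; \sum_{(i,j)\in\mathcal{E}} \lambda_{ij}\big(h_{ij}(x_i,x_j) - \gamma_{ij}\big), \qquad \lambda_{ij}\ge 0,

on which a stochastic saddle-point method alternates primal (gradient) descent in x and dual (gradient) ascent in λ.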

Cited by 25 publications (11 citation statements); references 53 publications.

Citation statements:
“…Using the property of the double sum similar to the updates from (12) to (13) in the last term of the above equation,…”
Section: Lemma 3 (Consider the Update Steps in Algorithm 2 with Learni…)
Mentioning confidence: 99%
“…Such a scenario arises in, for example, distributed multitask adaptive signal processing, where the weight vectors at neighboring nodes are not the same [10,11]. One of the first papers to analyze such a departure from consensus optimization is [12], where the formulation included proximity constraints between neighboring nodes, handled through the construction of Lagrangians and the use of saddle-point algorithms, and extended to the asynchronous setting in [13].…”
Section: Introduction
Mentioning confidence: 99%
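To make the "Lagrangians and saddle-point algorithms" reference concrete, here is a minimal, synchronous Python sketch of a stochastic primal-descent / dual-ascent step for proximity-constrained multi-agent learning. It illustrates the general technique only; the paper's algorithm is asynchronous, and its constraint form, step sizes, and variable names are not given on this page, so everything below (grad_loss, gamma, eta, zeta, the squared-distance constraint) is an assumption made for the example.

import numpy as np

# Minimal, synchronous sketch of a stochastic saddle-point (primal-dual) step for
# proximity-constrained multi-agent learning. Illustrative only: not the paper's
# exact asynchronous algorithm; constraint ||x_i - x_j||^2 <= gamma is assumed.

def saddle_point_step(x, lam, neighbors, grad_loss, gamma, eta=0.01, zeta=0.01):
    """One primal-descent / dual-ascent step.

    x:         dict node -> primal variable (np.ndarray)
    lam:       dict (i, j) -> dual variable for the edge proximity constraint
    neighbors: dict node -> list of neighboring nodes
    grad_loss: dict node -> callable returning a stochastic gradient of that node's loss
    """
    x_new, lam_new = {}, {}
    for i, xi in x.items():
        # Gradient of the Lagrangian w.r.t. x_i: local stochastic gradient plus constraint terms.
        g = grad_loss[i](xi)
        for j in neighbors[i]:
            lam_ij = lam.get((i, j), 0.0) + lam.get((j, i), 0.0)
            g = g + 2.0 * lam_ij * (xi - x[j])
        x_new[i] = xi - eta * g                       # primal descent
    for (i, j), l in lam.items():
        slack = np.sum((x[i] - x[j]) ** 2) - gamma    # constraint violation
        lam_new[(i, j)] = max(0.0, l + zeta * slack)  # projected dual ascent
    return x_new, lam_new

The dual variables are kept per edge and projected onto the nonnegative orthant after each ascent step, the standard way of handling inequality-constraint multipliers in saddle-point methods.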
“…There are a variety of distributed optimization algorithms proposed in the literature, such as primal methods [9], [10], [11], [12] and primal-dual methods [13], [14], [15]. The performance of distributed optimization algorithms is commonly characterized by their computation time and communication cost.…”
Section: Related Work and Contributions
Mentioning confidence: 99%
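For contrast with the primal-dual sketch above, a purely primal, consensus-style method such as decentralized gradient descent mixes neighbor iterates and then takes a local gradient step. The mixing matrix W and step size alpha below are illustrative assumptions, not taken from the cited works:

import numpy as np

# Sketch of a primal, consensus-based method (decentralized gradient descent).
# W is assumed to be a doubly stochastic mixing matrix matching the network topology.

def dgd_step(X, W, grads, alpha=0.01):
    """X: (n_agents, dim) stacked local iterates; grads: list of per-agent gradient callables."""
    mixed = W @ X                                      # one round of neighbor averaging (communication)
    G = np.stack([grads[i](X[i]) for i in range(X.shape[0])])
    return mixed - alpha * G                           # local gradient step (computation)

The split into a communication step (mixing) and a computation step (local gradient) is what the computation-time versus communication-cost comparison in these citations refers to.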
“…Many distributed optimisation methods have been proposed to overcome the challenges in large-scale distributed machine learning, such as primal methods [13][14][15] and primal-dual methods [16][17][18]; in federated learning settings, however, the communication cost often becomes dominant compared to the computation cost.…”
Section: Related Work
Mentioning confidence: 99%