2017
DOI: 10.1080/10556788.2016.1278445

Distributed optimization with arbitrary local solvers

Abstract: With the growth of data and necessity for distributed optimization methods, solvers that work well on a single machine must be re-designed to leverage distributed computation. Recent work in this area has been limited by focusing heavily on developing highly specific methods for the distributed environment. These special-purpose methods are often unable to fully leverage the competitive performance of their well-tuned and customized single machine counterparts. Further, they are unable to easily integrate impr…
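The abstract describes a framework in which each machine can keep using an arbitrary, well-tuned single-machine solver on a local subproblem, with only the resulting updates being communicated and combined. A minimal sketch of that pattern, assuming a simple synchronous setup and using hypothetical names (distributed_solve, local_solver, gamma), might look like this:

```python
import numpy as np

def distributed_solve(local_data, local_solver, num_rounds=50, gamma=None):
    """Hypothetical sketch of a distributed framework that reuses an
    arbitrary single-machine solver on each data partition.

    local_data   : list of (A_k, b_k) partitions, one per machine
    local_solver : any routine that approximately solves the local
                   subproblem given the current global iterate
    gamma        : aggregation weight; 1/K averaging is the safe default
    """
    K = len(local_data)
    d = local_data[0][0].shape[1]          # feature dimension
    w = np.zeros(d)                        # shared global iterate
    gamma = 1.0 / K if gamma is None else gamma

    for _ in range(num_rounds):
        # Each machine works only on its own partition (in parallel in practice).
        deltas = [local_solver(A_k, b_k, w) for A_k, b_k in local_data]
        # Combine the local updates into a single global step.
        w = w + gamma * np.sum(deltas, axis=0)
    return w

# Example local solver (hypothetical): one gradient step on the local
# least-squares term, starting from the shared iterate w.
def gd_local_solver(A_k, b_k, w, lr=1e-2):
    return -lr * A_k.T @ (A_k @ w - b_k)
```

Averaging the local updates (gamma = 1/K) is the conservative choice; more aggressive aggregation is only safe when the local subproblems are set up to account for it.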

Cited by 172 publications (171 citation statements). References 58 publications.
“…Combining these two cases, we have proven (36). In the following, we define c ≔ c_ν and b ≔ b_ν for simplicity.…”
Section: Proof of Proposition (mentioning)
confidence: 74%
“…Based on the solution, the UE updates the local reference a^t[k] per (18), and, if selected by the AP, it sends out a global update Δv_k^t via the allocated subchannel. • At the AP side, it selects a subgroup of UEs for update collection, decodes the received packet, and performs a global aggregation according to (20). The new global parameter is redistributed to all the associated UEs over an error-free channel.…”
Section: Federated Learning in Wireless Network (mentioning)
confidence: 99%
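The excerpt above outlines one communication round: each user equipment (UE) computes a local update, the access point (AP) collects updates from a selected subgroup, aggregates them, and redistributes the new global parameter. A hedged sketch of that selection-and-aggregation step, with hypothetical names (federated_round, client_updates, num_selected), could be:

```python
import numpy as np

def federated_round(global_w, client_updates, num_selected, rng=None):
    """Hypothetical sketch of one round of the scheme described above:
    the AP selects a subgroup of UEs, collects their local updates, and
    aggregates them into a new global parameter that is then sent back
    to all UEs.

    global_w       : current global parameter vector
    client_updates : list of local updates Δv_k^t, one per UE
    num_selected   : size of the subgroup the AP collects from
    """
    rng = np.random.default_rng() if rng is None else rng
    K = len(client_updates)
    selected = rng.choice(K, size=min(num_selected, K), replace=False)

    # Global aggregation over the selected subgroup only (uniform weights
    # here; the exact weighting in the cited scheme's Eq. (20) may differ).
    aggregate = np.mean([client_updates[k] for k in selected], axis=0)
    new_w = global_w + aggregate

    # In the cited setup the new parameter is redistributed to all UEs
    # over an error-free downlink; here we simply return it.
    return new_w, selected
```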
“…Therefore, ξ_i(X_i^T w(α)) also has a bounded domain, indicating that by compactness we can find L ≥ 0 such that this continuous function is Lipschitz continuous within this domain. Moreover, the formulation (32) satisfies the form (29), so (21) holds by Lemma 1. Therefore, Assumption 3 is satisfied.…”
Section: Multi-class Classification (mentioning)
confidence: 99%
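The step from a bounded domain to Lipschitz continuity in the excerpt above presumably relies on ξ_i being continuously differentiable (continuity alone would not suffice); under that assumption the constant can be read off the derivative, as the following sketch in LaTeX notation indicates:

```latex
% Sketch under the assumption (not stated in the excerpt) that each
% \xi_i is continuously differentiable on a compact interval I that
% contains the bounded domain D = \{ X_i^\top w(\alpha) \}.
\[
  L \;:=\; \max_{u \in I} \bigl|\xi_i'(u)\bigr| \;<\; \infty
  \qquad\Longrightarrow\qquad
  \bigl|\xi_i(u) - \xi_i(v)\bigr| \;\le\; L\,|u - v|
  \quad \text{for all } u, v \in I \supseteq D,
\]
% by the mean value theorem, so \xi_i is L-Lipschitz on D.
```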
“…The difference is more significant in the SVM problems, showing that exact line search has its advantage over backtracking, especially because its cost is low, while backtracking is still better than the fixed step size scheme. The reason behind this is that although the approach of DisDCA provides a safe upper-bound model of the objective difference, such that the local updates can be directly applied to ensure objective decrease, this upper bound might be too conservative, as suggested in [29], but more aggressive upper-bound models might be computationally impractical to obtain. On the other hand, our approach provides an efficient way to dynamically estimate how aggressive the updates can be, depending on the current iterate.…”
Section: Binary Linear Classification (mentioning)
confidence: 99%
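The comparison above weighs three ways of choosing how far to move along the aggregated direction: an exact line search, backtracking, and a fixed step size. A minimal sketch of the backtracking (Armijo) variant, with hypothetical names (backtracking_step, f, direction) and assuming the global objective and its gradient are available, is:

```python
def backtracking_step(f, w, direction, grad, alpha0=1.0, beta=0.5, c=1e-4):
    """Hypothetical sketch of the backtracking alternative discussed above:
    shrink the step size until a sufficient-decrease (Armijo) condition
    holds on the global objective f, instead of using a fixed step size
    or an exact line search.

    f         : global objective, callable on the iterate
    w         : current iterate (numpy array)
    direction : aggregated update direction from the local solvers
    grad      : gradient of f at w (used in the Armijo test)
    """
    alpha = alpha0
    f_w = f(w)
    slope = c * float(grad @ direction)   # expected decrease per unit step
    # Halve the step until the objective decreases enough; each trial costs
    # one evaluation of f, which is why the excerpt notes backtracking is
    # cheaper than building tighter upper-bound models but can lose to an
    # exact line search when that search has a closed form.
    while f(w + alpha * direction) > f_w + alpha * slope:
        alpha *= beta
        if alpha < 1e-12:                 # safeguard against a bad direction
            break
    return w + alpha * direction
```

An exact line search replaces the while loop with a closed-form minimizer along the direction, which is what keeps its cost low for quadratic-like objectives; a fixed step size skips the loop entirely at the price of overly conservative steps.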