2021
DOI: 10.48550/arxiv.2105.00872
Preprint

Convergence Analysis and System Design for Federated Learning over Wireless Networks

Abstract: Federated learning (FL) has recently emerged as an important and promising learning scheme in IoT, enabling devices to jointly learn a model without sharing their raw data sets. However, as the training data in FL is not collected and stored centrally, FL training requires frequent model exchange, which is largely affected by the wireless communication network. Therein, limited bandwidth and random packet loss restrict interactions in training. Meanwhile, the insufficient message synchronization among distrib…
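To make the model-exchange loop described in the abstract concrete, the sketch below shows a minimal FedAvg-style round in which devices train locally and a server averages whatever updates survive a lossy uplink. It is an illustrative assumption, not the paper's algorithm: the update rule, the packet-loss model, and all names (local_update, federated_round, drop_prob) are hypothetical.

```python
# Minimal FedAvg-style sketch of frequent model exchange over an unreliable link.
# Illustrative only; not the scheme analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, data, lr=0.1):
    """One local SGD step on a device's private least-squares data (toy model)."""
    X, y = data
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(w_global, device_data, drop_prob=0.2):
    """Devices train locally; the server averages only the updates it receives.
    Random packet loss (drop_prob) mimics the wireless uplink."""
    received = []
    for data in device_data:
        w_local = local_update(w_global.copy(), data)
        if rng.random() > drop_prob:      # update survives the wireless link
            received.append(w_local)
    if not received:                      # nothing arrived this round
        return w_global
    return np.mean(received, axis=0)

# Toy setup: 5 devices, each holding its own private linear-regression data.
d, n = 3, 20
w_true = np.ones(d)
device_data = []
for _ in range(5):
    X = rng.normal(size=(n, d))
    device_data.append((X, X @ w_true + 0.01 * rng.normal(size=n)))

w = np.zeros(d)
for t in range(50):
    w = federated_round(w, device_data)
print("estimated weights:", np.round(w, 3))
```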

Cited by 1 publication (2 citation statements)
References 10 publications
“…3) Minimization of total training time: The work in [22] minimized the total training time by optimizing communication and computation resource allocation; however, no compression was considered in [22]. The authors in [7] minimized the training time for a fixed number of communication rounds by solving a joint learning, wireless resource allocation, and device selection problem.…”
Section: A. Related Work
confidence: 99%
“…Besides, the edge devices in the edge network are usually heterogeneous, so some of the edge devices with lower computation power become laggards in synchronous model/gradient aggregation due to their longer computation time, which increases the per-round latency. It is necessary to optimize wireless resources across the edge devices to reduce the communication time of the lagging devices and compensate for their longer computation time [20]-[22]. With the goal of minimizing the total training time, we study the following question: how to balance the trade-off between the number of communication rounds and the per-round latency via joint optimization of the quantization level and bandwidth allocation in the presence of device heterogeneity.…”
Section: Introduction
confidence: 99%
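The passage above weighs the number of communication rounds against the per-round latency under joint quantization and bandwidth allocation. The back-of-the-envelope sketch below illustrates that trade-off under assumed models only: the latency formula, the rounds_needed proxy, and the bandwidth shares are hypothetical and not taken from the cited works.

```python
# Toy illustration of the rounds-vs-latency trade-off with device heterogeneity.
# A coarser quantization level q shrinks the uplink payload (shorter rounds) but
# is assumed here to require more rounds to converge. All models are illustrative.
MODEL_SIZE = 1_000_000          # number of model parameters
TOTAL_BW_HZ = 10e6              # total uplink bandwidth shared by the devices
SPECTRAL_EFF = 2.0              # bits/s/Hz, assumed identical links for simplicity

# Heterogeneous devices: per-round local computation time in seconds.
comp_time = [0.5, 1.0, 2.0, 4.0]

def per_round_latency(q_bits, bw_share):
    """Synchronous round: wait for the slowest device (computation + upload)."""
    payload_bits = MODEL_SIZE * q_bits
    return max(t + payload_bits / (share * TOTAL_BW_HZ * SPECTRAL_EFF)
               for t, share in zip(comp_time, bw_share))

def rounds_needed(q_bits, base_rounds=100):
    """Toy proxy: coarser quantization (fewer bits) needs more rounds."""
    return base_rounds * (1 + 8.0 / q_bits)

equal_share = [1 / len(comp_time)] * len(comp_time)
# Give slower devices more bandwidth so their uploads finish sooner (less straggling).
weighted_share = [t / sum(comp_time) for t in comp_time]

for q in (2, 4, 8, 16):
    for name, share in (("equal bw", equal_share), ("weighted bw", weighted_share)):
        total = rounds_needed(q) * per_round_latency(q, share)
        print(f"q={q:2d} bits, {name:11s}: total time = {total:8.1f} s")
```

Running the sketch shows the qualitative point made in the quote: giving the slowest devices a larger bandwidth share shortens each synchronous round, and the best quantization level depends on how strongly coarser quantization inflates the round count.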