2019
DOI: 10.48550/arxiv.1911.02134
Preprint

Asynchronous Online Federated Learning for Edge Devices with Non-IID Data

Abstract: Federated learning (FL) is a machine learning paradigm where a shared central model is learned across multiple distributed client devices while the training data remains on edge devices or local clients. Most prior work on federated learning uses Federated Averaging (FedAvg) as an optimization method for training in a synchronized fashion. This involves independent training at multiple edge devices with synchronous aggregation steps. However, the assumptions made by FedAvg are not realistic given the heterogen…
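For orientation, here is a minimal sketch of the synchronous FedAvg round that the abstract argues against: the server must wait for every client to finish its local pass before averaging, which is exactly where stragglers hurt. The quadratic stand-in for local training and all names below are toy assumptions, not the paper's implementation.

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=1):
    """Hypothetical local step: nudge the model toward the mean of the
    client's own data (a stand-in for SGD on a real local loss)."""
    w = weights.copy()
    for _ in range(epochs):
        w -= lr * (w - data.mean(axis=0))
    return w

def fedavg_round(global_w, client_datasets):
    """One synchronous FedAvg round: every client trains on its local
    data, then the server averages the returned models, weighted by
    local dataset size. The loop blocks until *all* clients report."""
    updates, sizes = [], []
    for data in client_datasets:
        updates.append(local_update(global_w, data))
        sizes.append(len(data))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Toy usage: 3 clients with different data distributions, 2-D model.
rng = np.random.default_rng(0)
clients = [rng.normal(loc=i, size=(20, 2)) for i in range(3)]
w = np.zeros(2)
for _ in range(10):
    w = fedavg_round(w, clients)
print(w)
```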

Cited by 21 publications (52 citation statements) | References 29 publications
“…For federated learning on heterogeneous devices, previous work includes an adaptive control algorithm that determines the best trade-off between local updates and global aggregation under a given resource constraint [38], model training in a network of heterogeneous edge devices that accounts for communication costs [39], and a method for straggler acceleration that dynamically masks neurons [40]. ASO-Fed [41] presents an online learning algorithm that updates the central model asynchronously, tackling the challenges posed by both varying computational loads at heterogeneous edge devices and stragglers. There has been sparse but rapidly growing work on federated learning at edge devices, driven by the increasing numbers of such devices [42], [43], [38].…”
Section: Related Work
confidence: 99%
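The asynchronous aggregation described in the statement above can be sketched as a server that folds in each client update the moment it arrives, rather than waiting for a full cohort. This is a generic sketch under assumptions of my own (the staleness-decayed mixing weight, the class and method names); ASO-Fed's actual update rule is given in [41].

```python
import numpy as np

class AsyncFLServer:
    """Sketch of asynchronous aggregation: mix in each client update on
    arrival instead of waiting for a round. The staleness-decayed mixing
    weight below is an assumption, not ASO-Fed's exact rule."""

    def __init__(self, dim, base_mix=0.5):
        self.w = np.zeros(dim)   # current global model
        self.version = 0         # bumped on every applied update
        self.base_mix = base_mix

    def pull(self):
        """A client fetches the current model plus its version stamp."""
        return self.w.copy(), self.version

    def push(self, client_w, client_version):
        """Apply one client's model immediately; the more server versions
        it lags behind (its staleness), the smaller its weight."""
        staleness = self.version - client_version
        mix = self.base_mix / (1 + staleness)
        self.w = (1 - mix) * self.w + mix * np.asarray(client_w)
        self.version += 1
        return self.w

server = AsyncFLServer(dim=2)
w0, v0 = server.pull()
server.push(w0 + 1.0, v0)   # prompt client: staleness 0, full weight
server.push(w0 - 1.0, v0)   # straggler pushing the same old version:
                            # staleness 1, so its update counts for less
print(server.w, server.version)
```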
“…The authors in [5] demonstrate that an asynchronous scheme is an attractive approach to mitigating stragglers in heterogeneous environments. Since FL is likely to be deployed in mobile wireless networks, which are typically heterogeneous environments, several studies have investigated the design of asynchronous FL systems [30,31]. In asynchronous FL, a node can download the global model from the central server at any time, train a local model while idle, and upload its update.…”
Section: Asynchronous Federated Learning
confidence: 99%
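To make that download/train/upload cycle concrete, below is a small event-driven simulation under toy assumptions (the per-client training times, the constant mixing weight, and the mean-seeking stand-in for local SGD are all invented for illustration). Fast clients keep advancing the global model while the slow client is still training; nothing blocks on the straggler.

```python
import heapq
import numpy as np

# Toy per-client training times (seconds) and local data means.
train_time = {"fast": 1.0, "medium": 2.5, "slow": 8.0}
target = {"fast": np.array([1.0, 0.0]),
          "medium": np.array([0.0, 1.0]),
          "slow": np.array([1.0, 1.0])}

global_w, n_updates = np.zeros(2), 0
MIX = 0.5  # constant mixing weight; a staleness-aware weight, as in the
           # server sketch above, could be dropped in here instead

# Event queue of (finish_time, client, model_snapshot_the_client_pulled).
events = [(train_time[c], c, global_w.copy()) for c in train_time]
heapq.heapify(events)

while events:
    t, client, w_local = heapq.heappop(events)
    if t > 20.0:            # simulate 20 seconds of wall-clock time
        break
    # Local training stand-in: move the pulled model toward local data.
    w_local = w_local + 0.5 * (target[client] - w_local)
    # The server applies the update the moment it arrives.
    global_w = (1 - MIX) * global_w + MIX * w_local
    n_updates += 1
    # The client immediately pulls the new model and trains again.
    heapq.heappush(events, (t + train_time[client], client, global_w.copy()))

print(global_w, n_updates)  # the fast client contributed most updates
```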
“…Another common challenge for federated learning and other decentralized learning approaches is the difference in data distributions across clients [9,24,69]. For such non-independent and identically distributed (non-IID) data, model updates can counteract each other and hinder training progress [69]. In this paper, we demonstrate for the first time how the seemingly diverse goals of distributed model training, model personalization, and robustness against poisoning attacks can be addressed by a single mechanism inspired by distributed ledgers and federated averaging.…”
Section: Introduction
confidence: 99%
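One common way to emulate the non-IID setting this quote describes is a Dirichlet split of a labeled dataset, where a small concentration parameter gives each client a skewed label mix. The sketch below is a standard benchmarking recipe under toy assumptions, not a procedure taken from the cited papers.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.3, seed=0):
    """Split sample indices across clients with a per-class Dirichlet
    prior: small alpha -> skewed label mixes (non-IID), large alpha ->
    near-uniform (IID-like)."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # Proportion of class c assigned to each client.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_idx[client].extend(part.tolist())
    return client_idx

# Toy usage: 1000 samples, 10 classes, 5 clients.
labels = np.random.default_rng(0).integers(0, 10, size=1000)
parts = dirichlet_partition(labels, n_clients=5, alpha=0.3)
for i, p in enumerate(parts):
    print(i, np.bincount(labels[p], minlength=10))  # skewed label counts
```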