2021
DOI: 10.1109/access.2021.3056919
Client Selection for Federated Learning With Non-IID Data in Mobile Edge Computing

Abstract: Federated Learning (FL) has recently attracted considerable attention in the Internet of Things, due to its capability of enabling mobile clients to collaboratively learn a global prediction model without sharing their privacy-sensitive data with the server. Despite its great potential, a main challenge of FL is that the training data are usually non-Independent, Identically Distributed (non-IID) across clients, which may introduce bias into model training and cause accuracy degradation. To address this…
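For orientation, here is a minimal sketch of the FedAvg-style aggregation that such collaborative training builds on (the paper's method, CSFedAvg, extends FedAvg); the function name and the list-of-arrays parameter representation are illustrative assumptions:

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg aggregation (sketch): average each layer's parameters
    across clients, weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum((n / total) * w[layer] for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]
```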

Cited by 155 publications (51 citation statements)
References 24 publications
“…Furthermore, it can carry out high-efficiency machine learning among multiple participants or multiple computing nodes. Zhang et al. [19] made a comprehensive review of recent research and achievements in federated learning and presented future development trends. First, data islands and privacy protection are described to introduce the background of federated learning, and its connotation and mechanism are outlined.…”
Section: Related Work
Mentioning confidence: 99%
“…Specifically, among client selection algorithms, the power-of-choice framework improves the convergence speed of the model, with a clearly observable effect. Zhang et al. [6] discussed the influence of the degree to which client data are non-independent and identically distributed (non-IID) on FL model training. The highly heterogeneous data distribution caused by non-IID data introduces bias into model training and may reduce the accuracy of the FL model.…”
Section: FL
Mentioning confidence: 99%
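The power-of-choice framework mentioned above biases selection toward high-loss clients. A minimal sketch, assuming uniform candidate sampling and a `local_loss` oracle (the function name and these simplifications are illustrative; the original framework samples candidates proportionally to local data size):

```python
import random

def power_of_choice_select(clients, local_loss, k, d, rng=random):
    """Power-of-choice selection (sketch): sample d candidates,
    then keep the k with the highest current local loss."""
    candidates = rng.sample(clients, d)  # candidate set, k <= d <= len(clients)
    return sorted(candidates, key=lambda c: local_loss[c], reverse=True)[:k]
```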
“…The highly heterogeneous data distribution caused by non-IID data introduces bias into model training and may reduce the accuracy of the FL model. Therefore, Zhang et al. [6] proposed a new FL method, CSFedAvg. CSFedAvg uses weight divergence to identify the non-IID degree of each client and, according to that divergence, selects the clients with lower non-IID degrees to train the global model.…”
Section: FL
Mentioning confidence: 99%
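A minimal sketch of the selection criterion described in this excerpt, using weight divergence between a client's uploaded model and the global model as a proxy for its non-IID degree (the normalization, the numerical guard, and both function names are assumptions; CSFedAvg's exact estimator may differ):

```python
import numpy as np

def weight_divergence(local_weights, global_weights):
    """Normalized distance between a client's local model and the
    global model; larger values suggest a higher non-IID degree."""
    local = np.concatenate([w.ravel() for w in local_weights])
    glob = np.concatenate([w.ravel() for w in global_weights])
    return np.linalg.norm(local - glob) / (np.linalg.norm(glob) + 1e-12)

def select_low_divergence_clients(updates, global_weights, k):
    """Pick the k clients whose models diverge least from the global
    model, i.e. whose data distributions look closest to IID."""
    scores = {cid: weight_divergence(w, global_weights)
              for cid, w in updates.items()}
    return sorted(scores, key=scores.get)[:k]
```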
“…To position existing research attempts in optimizing the time-to-accuracy performance in FL, we propose a layered approach that categorizes them by the training phases at which they take effect: selection, configuration, or reporting (§ III). For the selection phase, where the server chooses clients for participation, there are mainly two lines of optimization efforts: (1) one focuses on prioritizing clients with either high statistical utility or high system utility [17], [18], and (2) the other explicitly considers both utilities and works out more informed solutions in response to client dynamics in reality [19], [20]. As for the configuration phase, where the server sends the global model to selected clients with auxiliary configuration information and clients perform local training, we sort out four lines of work: (1) the first two lines advocate mitigating the communication cost by reducing model size [21]-[34] and decreasing synchronization frequency [35]-[39]; while (2) the last two lines minimize computational overhead by shortening training latency in a round [40]-[44] as well as bringing down the number of training rounds [45]-[50].…”
Section: Introduction
Mentioning confidence: 99%
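To make the "both utilities" idea from the selection phase concrete, here is a hedged sketch of a combined scoring rule (the multiplicative form, the `alpha` exponent, and all names are illustrative assumptions, loosely in the spirit of the utility-based selectors cited above):

```python
def combined_utility(stat_utility, completion_time, deadline, alpha=2.0):
    """Combine statistical utility with a system-speed penalty (sketch):
    clients slower than the round deadline are discounted, so selection
    trades off data usefulness against stragglers."""
    penalty = (deadline / completion_time) ** alpha if completion_time > deadline else 1.0
    return stat_utility * penalty
```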