2021
DOI: 10.1109/tpds.2020.3040887

An Efficiency-Boosting Client Selection Scheme for Federated Learning With Fairness Guarantee

Abstract: The issue of potential privacy leakage during centralized AI's model training has drawn intensive concern from the public. A Parallel and Distributed Computing (or PDC) scheme, termed Federated Learning (FL), has emerged as a new paradigm to cope with the privacy issue by allowing clients to perform model training locally, without the necessity to upload their personal sensitive data. In FL, the number of clients could be sufficiently large, but the bandwidth available for model distribution and re-upload is q…



Cited by 150 publications (68 citation statements)
References 34 publications
“…The loss function considered, though, is simple linear regression and does not readily apply to neural network models. Stochastic optimization is also considered for FL in [27] and [28], but not to design an optimal device selection policy that guarantees convergence of non-convex loss functions like we do here.…”
Section: R W (mentioning)
confidence: 99%
“…We follow a more practical approach based on the observation that membership-related information is only sensitive in the last DNN layer, making it vulnerable to MIAs as indicated in previous research [47,49,52,59]. [21,53].…”
Section: Classifier (mentioning)
confidence: 99%
“…[28] designs a client scheduling problem and provides a MAB-based framework for FL training without knowing the wireless channel state information and the dynamic usage of local computing resources. In order to minimize the latency, [29] models fair-guaranteed client selection as a Lyapunov optimization problem and presents a policy based on CC-MAB to estimate the model transmission time. A multi-agent MAB algorithm is developed to minimize the FL training latency over wireless channels, constrained by training performance as well as each client's differential privacy requirement in [30].…”
Section: Related Work (mentioning)
confidence: 99%
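The statement above frames client selection as a multi-armed bandit (MAB) problem, where each client is an arm and observed quantities such as update latency feed the reward signal. As a minimal illustrative sketch only — not the CC-MAB or Lyapunov-based policy of the cited works — a plain UCB rule for picking clients each round might look like this; the `reward_fn` callback (e.g. an inverse latency measurement) is an assumed placeholder:

```python
import math

def ucb_select(num_clients, rounds, clients_per_round, reward_fn, c=2.0):
    """Illustrative UCB-based client selection for FL (a generic sketch,
    not the exact algorithm of [28]-[30]).

    reward_fn(i) returns an observed reward for client i, e.g. the inverse
    of its measured model-upload latency, so faster clients accrue
    higher reward estimates over time.
    """
    counts = [0] * num_clients   # times each client was selected
    means = [0.0] * num_clients  # running mean reward per client
    for t in range(1, rounds + 1):
        def score(i):
            if counts[i] == 0:
                return float("inf")  # force each client to be tried once
            # Exploit high-mean clients, explore rarely-picked ones
            return means[i] + math.sqrt(c * math.log(t) / counts[i])
        chosen = sorted(range(num_clients), key=score, reverse=True)[:clients_per_round]
        for i in chosen:
            r = reward_fn(i)
            counts[i] += 1
            means[i] += (r - means[i]) / counts[i]  # incremental mean update
    return counts, means
```

Note this plain UCB rule has no fairness guarantee; the fairness-constrained policies discussed above additionally bound how long any client can go unselected.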