2020
DOI: 10.48550/arxiv.2010.01243
Preprint

Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies

Abstract: Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing. Several works have analyzed the convergence of federated learning by accounting for data heterogeneity, communication and computation limitations, and partial client participation. However, they assume unbiased client participation, where clients are selected at random or in proportion to their data sizes. In this paper, we present the first …
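For context, the paper's titular Power-of-Choice strategy can be summarized as: draw a random candidate set of d clients, then keep the k with the highest current local loss. A minimal sketch under that reading (the function name, signature, and uniform candidate draw are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def power_of_choice(client_losses, d, k, rng=None):
    """Sketch of Power-of-Choice client selection.

    Draw a random candidate set of d clients, then keep the k whose
    current local loss is largest. Illustrative only: the paper also
    studies variants (e.g., data-size-weighted candidate draws).
    """
    rng = rng or np.random.default_rng()
    losses = np.asarray(client_losses, dtype=float)
    # Unbiased step: sample d candidates uniformly without replacement.
    candidates = rng.choice(len(losses), size=d, replace=False)
    # Biased step: among the candidates, pick the k with the highest loss.
    return candidates[np.argsort(losses[candidates])[::-1][:k]]

# Example: 100 clients, probe 10, select the 3 highest-loss candidates.
selected = power_of_choice(np.random.rand(100), d=10, k=3)
```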

Cited by 63 publications (120 citation statements)
References 17 publications
“…Among these is [17], but only the convergence of simple linear regression loss is considered. In [18], the authors analyze the convergence of strongly convex loss functions, but unfortunately their bound introduces a nonvanishing term and thus their strategy is not guaranteed to converge to a stationary point of the loss function. Both [19] and [20] consider convergence, but only for strongly convex loss functions.…”
Section: Related Work (mentioning)
Confidence: 99%
“…The user-defined parameter V traditionally controls the trade-off between the average queue backlog and the gap from optimality, but since we do not have physical queues in our problem, the trade-off does not exist in the same way. Instead, V controls the speed of convergence in addition to the optimality gap in (18).…”
Section: Algorithm 2: Stochastic Client Sampling (mentioning)
Confidence: 99%
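For readers unfamiliar with Lyapunov drift-plus-penalty methods, the classical trade-off this quote alludes to has the form (a standard result of the framework, not the cited paper's equation (18)):

$$\text{optimality gap} = \mathcal{O}\!\left(\tfrac{1}{V}\right), \qquad \text{average queue backlog} = \mathcal{O}(V),$$

so increasing V tightens the gap at the cost of a larger backlog; absent physical queues, the backlog term manifests instead as slower convergence.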
“…Specifically, clients with "important" data would have a higher probability of being sampled in each round. For example, existing works use clients' local gradient information (e.g., [25]-[27]) or local losses (e.g., [28]) to measure the importance of clients' data. However, these schemes did not consider the speed of error convergence with respect to wall-clock time, especially the straggling effect due to heterogeneous transmission delays.…”
Section: Related Work (mentioning)
Confidence: 99%
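A minimal sketch of the loss-based importance sampling idea described in this quote, assuming each client reports a scalar local loss; the proportional rule and names are illustrative, not the exact scheme of any single work among [25]-[28]:

```python
import numpy as np

def loss_proportional_sampling(local_losses, m, rng=None):
    """Sample m clients with probability proportional to local loss.

    Illustrative sketch: "important" (high-loss) clients are more
    likely to be chosen in each round.
    """
    rng = rng or np.random.default_rng()
    losses = np.asarray(local_losses, dtype=float)
    probs = losses / losses.sum()  # higher loss -> higher sampling probability
    return rng.choice(len(losses), size=m, replace=False, p=probs)
```

Note that, as the quote points out, such a rule is blind to wall-clock time: a high-loss client behind a slow link can straggle and delay the round.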
“…Assumptions 1-3 are common in many existing studies of convex FL problems, such as ℓ2-norm regularized linear regression and logistic regression (e.g., [7], [18], [19], [25], [28], [42]). Nevertheless, the experimental results to be presented in Section VI show that our approach also works well for non-convex loss functions.…”
Section: A. Machine Learning Model Assumptions (mentioning)
Confidence: 99%
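As a concrete instance of a loss meeting such assumptions, the ℓ2-regularized logistic regression objective at client $i$ can be written as (notation mine, not taken from the cited works)

$$F_i(w) \;=\; \frac{1}{n_i}\sum_{j=1}^{n_i}\log\!\left(1 + e^{-y_{ij}\, w^{\top} x_{ij}}\right) \;+\; \frac{\lambda}{2}\,\lVert w\rVert_2^2,$$

which is $\lambda$-strongly convex and, for bounded features, smooth.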