2021
DOI: 10.48550/arxiv.2107.06917
Preprint

A Field Guide to Federated Optimization

Abstract: Federated learning and analytics are a distributed approach for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection. The distributed learning process can be formulated as solving federated optimization problems, which emphasize communication efficiency, data heterogeneity, compatibility with privacy and system requirements, and other constraints that are not primary considerations in other problem settings. This paper provides recommendations…

Cited by 68 publications (91 citation statements)
References 165 publications (242 reference statements)

“…Federated Learning. Federated learning (FL) distributes machine learning models to the resource-constrained edge devices from which data originate, and has emerged as a promising alternative machine learning paradigm [23,25,33,34]. FL enables a multitude of participants to construct a joint model without sharing their private training data [4,22,23,25].…”
Section: Related Work
Mentioning confidence: 99%
“…Although there are various FL frameworks nowadays, the most general FL paradigm consists of the following steps: (1) the server sends the global model to selected clients in each communication round, (2) each selected client trains the local model on its private data, (3) the clients send their trained local models back to the server, and (4) the server aggregates the local models to update the global model, repeating from the first step until the global model converges. However, the FL paradigm is still a general definition and faces many challenges in practice [9,34]. One of the most urgent challenges of FL is heterogeneity, which includes both data heterogeneity and system heterogeneity.…”
Section: Introduction
Mentioning confidence: 99%
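
To make the quoted four-step loop concrete, here is a minimal sketch of one FedAvg-style round in Python. It is illustrative only: the linear least-squares loss, the synthetic data, and the names local_train and fedavg_round are assumptions of this sketch, not code from the paper or the citing work.

```python
from typing import List, Tuple
import numpy as np

def local_train(global_model: np.ndarray, X: np.ndarray, y: np.ndarray,
                lr: float = 0.1, steps: int = 5) -> np.ndarray:
    """Step (2): a client refines the received global model on its private
    data; here, a few gradient steps on a linear least-squares loss."""
    w = global_model.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the mean-squared loss
        w -= lr * grad
    return w

def fedavg_round(global_model: np.ndarray,
                 clients: List[Tuple[np.ndarray, np.ndarray]]) -> np.ndarray:
    """One communication round covering steps (1)-(4) of the quoted paradigm."""
    # (1) the server sends the model; (2)-(3) clients train it and send it back.
    local_models = [local_train(global_model, X, y) for X, y in clients]
    # (4) aggregate: FedAvg weights each client by its local dataset size.
    sizes = [len(y) for _, y in clients]
    return np.average(np.stack(local_models), axis=0, weights=sizes)

# Synthetic demo: 4 clients, each holding 20 private (X, y) examples.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):  # in practice, repeat until the global model converges
    w = fedavg_round(w, clients)
```

Weighting by dataset size recovers standard FedAvg; the next citation statement mentions variants that relax exactly these choices.
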
“…Other variants of FedAvg include letting clients run different numbers of local steps per round, or averaging the client states nonuniformly. We refer readers to Wang et al. (2021) for a more comprehensive survey of these extensions.…”
Section: Related Work
Mentioning confidence: 99%
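
Nonuniform averaging of client states, as mentioned above, amounts to replacing FedAvg's dataset-size weights with an arbitrary weight vector. A self-contained sketch follows; the three models and the weights are arbitrary values chosen for demonstration.

```python
import numpy as np

def nonuniform_average(local_models, weights):
    """Aggregate client models with arbitrary nonnegative weights
    (np.average normalizes them), generalizing FedAvg's
    dataset-size-proportional weighting."""
    return np.average(np.stack(local_models), axis=0,
                      weights=np.asarray(weights, dtype=float))

# Three client models; the server may weight them by dataset size,
# uniformly, or by any other scheme.
models = [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])]
print(nonuniform_average(models, weights=[0.2, 0.3, 0.5]))  # -> [0.35 0.65]
```
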
“…The central server communicates with the clients to train a machine learning model using the local data stored on the clients. Federated learning is often modeled as a distributed optimization problem (Konečný et al., 2016a,b; McMahan et al., 2017; Kairouz et al., 2019; Wang et al., 2021). Let D be the entire dataset distributed across all N clients/devices/workers/machines, where each client i has a local dataset D_i.…”
Section: Introduction
Mentioning confidence: 99%
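
The excerpt stops before writing the objective. In the references it cites, the distributed optimization problem is usually stated as a weighted sum of per-client empirical losses; the following is that standard form, not a quotation from the citing paper:

```latex
\min_{w}\; F(w) = \sum_{i=1}^{N} \frac{|D_i|}{|D|}\, F_i(w),
\qquad
F_i(w) = \frac{1}{|D_i|} \sum_{\xi \in D_i} \ell(w; \xi),
```

where \ell(w; \xi) is the loss of model w on example \xi. The aggregation weights |D_i|/|D| are exactly the dataset-size weights used by FedAvg in the sketches above.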