2022
DOI: 10.48550/arxiv.2202.12275
Preprint

Partitioned Variational Inference: A Framework for Probabilistic Federated Learning

Abstract: The proliferation of computing devices has brought about an opportunity to deploy machine learning models on new problem domains using previously inaccessible data. Traditional algorithms for training such models often require data to be stored on a single machine with compute performed by a single node, making them unsuitable for decentralised training on multiple devices. This deficiency has motivated the development of federated learning algorithms, which allow multiple data owners to train collaboratively …
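As a rough illustration of the partitioned variational inference idea the abstract refers to, the sketch below keeps a global approximation q(theta) proportional to p(theta) times the product of per-client factors t_k(theta), where each client refines its own factor against its local data. The toy conjugate-Gaussian model, the variable names, and the synchronous update schedule are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

# Minimal, self-contained sketch of a PVI-style server/client loop (illustrative only).
# Toy model: theta ~ N(0, I); client k observes y ~ N(theta, sigma2 * I).
# Everything is kept in diagonal-Gaussian natural parameters
# (eta1 = mu / var, eta2 = -0.5 / var), one factor t_k per client.

rng = np.random.default_rng(0)
D, K, sigma2 = 3, 4, 0.5
theta_true = rng.normal(size=D)
data = [theta_true + np.sqrt(sigma2) * rng.normal(size=(20, D)) for _ in range(K)]

prior = np.stack([np.zeros(D), -0.5 * np.ones(D)])   # N(0, I) in natural parameters
factors = np.zeros((K, 2, D))                        # t_k initialised to uniform factors

def local_fit(cavity, y):
    """Exact tilted posterior for the conjugate Gaussian likelihood."""
    lik = np.stack([y.sum(axis=0) / sigma2, -0.5 * np.full(D, len(y) / sigma2)])
    return cavity + lik

for _ in range(3):                                   # a few synchronous PVI rounds
    q = prior + factors.sum(axis=0)                  # current global approximation
    for k in range(K):
        cavity = q - factors[k]                      # remove client k's old factor
        factors[k] = local_fit(cavity, data[k]) - cavity

q = prior + factors.sum(axis=0)
print("posterior mean:", q[0] / (-2.0 * q[1]))       # convert natural params back to a mean
```

In a non-conjugate model the exact local_fit step would be replaced by a client-side variational optimisation; the server only ever sees updated factors, never the raw client data.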

Cited by 2 publications (3 citation statements)
References 9 publications
“…son et al, 2016) to personalized FL. Finally, some prior works also consider applying EP to federated learning (Corinzia et al., 2019; Kassab & Simeone, 2022; Ashman et al., 2022), but mostly on relatively small-scale tasks. In this work, we instead discuss and empirically study various algorithmic considerations to scale up expectation propagation to contemporary benchmarks.…”
Section: Related Work (mentioning)
confidence: 99%
“…However, exact posterior inference is in general intractable for even modestly-sized models and datasets and requires approximations. However, scaling up classic expectation propagation to the modern federated setting is challenging due to the high dimensionality of model parameters and the large number of clients. Indeed, while there are some existing works on expectation propagation-based federated learning (Corinzia et al., 2019; Kassab & Simeone, 2022; Ashman et al., 2022), they typically focus on small models (fewer than 100K parameters) and few clients (at most 100 clients). In this paper we conduct an extensive empirical study across various algorithmic considerations to scale up expectation propagation to contemporary benchmarks (e.g., models with many millions of parameters and datasets with hundreds of thousands of clients).…”
Section: Introduction (mentioning)
confidence: 99%
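For reference, the per-client expectation-propagation update these citing works build on can be summarised in standard EP notation (generic notation, not quoted from any of the cited papers):

q^{\setminus k}(\theta) \propto \frac{q(\theta)}{t_k(\theta)}, \qquad
\tilde{p}_k(\theta) \propto q^{\setminus k}(\theta)\, p(\mathcal{D}_k \mid \theta), \qquad
t_k^{\mathrm{new}}(\theta) \propto \frac{\mathrm{proj}\!\left[\tilde{p}_k(\theta)\right]}{q^{\setminus k}(\theta)},

where proj[·] denotes moment matching onto the chosen exponential family. The scaling difficulty discussed above arises because the cavity q^{\setminus k} and the projection must be computed for every client over a high-dimensional theta.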
“…In Federated Learning (FL), deterministic strategies like FedAvg enhance convergence over heterogeneous datasets. As outlined in Chapter 2, Section 2.5.4, we employ Laplace approximation [51] for posterior estimation of local parameters before aggregation, a methodology paralleled by approaches utilizing Variational Inference [76, 77] and other Bayesian frameworks like Variational Federated Learning [78, 79, 80], MCMC [81], and Gaussian Process [48].…”
Section: Bayesian Federated Learning (mentioning)
confidence: 99%
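As a hedged sketch of the Laplace-approximation step this passage describes, the code below fits a client's MAP estimate, takes the Hessian of the negative log-posterior at the MAP as a Gaussian precision, and has the server combine client Gaussians by a precision-weighted average. The toy logistic-regression model with an N(0, I) prior, the Newton-based MAP fit, and the aggregation rule are illustrative assumptions, not the cited implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def laplace_client(X, y, n_newton=25):
    """Return (mean, precision) of a client's local Laplace approximation."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_newton):                      # Newton ascent to the local MAP
        p = sigmoid(X @ theta)
        grad = X.T @ (y - p) - theta               # log-likelihood grad + N(0, I) prior grad
        H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(X.shape[1])  # -Hessian of log-posterior
        theta = theta + np.linalg.solve(H, grad)
    p = sigmoid(X @ theta)
    H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(X.shape[1])
    return theta, H                                # local posterior ~ N(theta, H^{-1})

# Toy federated round: each client sends (mean, precision); the server combines
# them with a precision-weighted average (one simple aggregation choice).
theta_true = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (rng.uniform(size=200) < sigmoid(X @ theta_true)).astype(float)
    clients.append(laplace_client(X, y))

P = sum(prec for _, prec in clients)
mu = np.linalg.solve(P, sum(prec @ m for m, prec in clients))
print("aggregated mean:", mu)
```

A more careful aggregation would correct for the prior precision that each client's Laplace fit double-counts; the sketch keeps the simplest combination for brevity.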