2021
DOI: 10.48550/arxiv.2106.06042
Preprint

FedBABU: Towards Enhanced Representation for Federated Image Classification

Abstract: Federated learning has evolved to improve a single global model under data heterogeneity (as a curse) or to develop multiple personalized models using data heterogeneity (as a blessing). However, there has been little research considering both directions simultaneously. In this paper, we first investigate the relationship between them by analyzing Federated Averaging [31] at the client level and determine that a better federated global model performance does not constantly improve personalization. To elucidate…

Cited by 13 publications (28 citation statements); References 30 publications.
“…We focus on the multi-task linear representation learning setting [35], which has become popular in recent years as it is an expressive but tractable nonconvex setting for studying the sample-complexity benefits of learning representations and the representation learning abilities of popular algorithms in data heterogeneous settings [11,13,56,26,54,12,52]. Remarkably, our study of FedAvg reveals that it can learn an effective representation even though it was not designed for this goal, unlike a variety of personalized FL methods specifically tailored for representation learning [11,32,2,41].…”
Section: Related Work
confidence: 99%
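As context for the quoted setting, here is a minimal sketch of the multi-task linear representation model, assuming the standard formulation used in this line of work (the symbols $B^*$, $w_i^*$, $d$, $k$ below are our notation, not quoted from the paper):

$$ y_i = \langle B^{*} w_i^{*},\, x_i \rangle + \zeta_i, \qquad B^{*} \in \mathbb{R}^{d \times k},\quad w_i^{*} \in \mathbb{R}^{k},\quad k \ll d, $$

where every client $i$ shares the ground-truth representation $B^*$ and differs only in its head $w_i^*$, so data heterogeneity enters purely through the heads.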
“…and $A_{3,t,i}(\tau)$, (45) follows from the fact that $\|(I_d - B_{t,i,s-1} B_{t,i,s-1}^{\top}) B^{*}\| = \operatorname{dist}(B_{t,i,s}, B^{*}) \le 1.1\, \operatorname{dist}_t$ by $A_{4,t,i}(\tau)$. For the second term in (41), note that $|\omega$…”
confidence: 99%
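The distance in the quoted fragment is presumably the principal-angle subspace distance standard in this literature; assuming $B_1$ and $B_2$ have orthonormal columns, it would be

$$ \operatorname{dist}(B_1, B_2) := \big\| \big( I_d - B_1 B_1^{\top} \big) B_2 \big\|_2, $$

which would make the quoted equality the usual identity relating the projection residual to the subspace distance.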
“…To evaluate personalized accuracy, each client has the same label distribution in its local training and test data. Following Oh et al. [49], we control the FL environment with the following hyperparameters: client fraction ratio f, local epochs τ, shards per user s, and Dirichlet concentration parameter β. f is the fraction of clients participating out of the total number of clients in each round, and a small f is natural in FL settings because the total number of clients is large. We use a linear warm-up learning-rate schedule for the first 20 rounds; after warm-up, the learning rate follows a cosine schedule initialized at 0.1.…”
Section: Experimental Settings
confidence: 99%
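The quoted setup (shards per user s and Dirichlet concentration β) refers to the two standard non-IID partitioning recipes. Below is a minimal Python sketch of the Dirichlet variant, assuming the common recipe; the function name, defaults, and stand-in labels are ours, not details from Oh et al. [49]:

    import numpy as np

    def dirichlet_partition(labels, num_clients, beta, seed=0):
        """Split sample indices across clients using a Dirichlet(beta) label prior.

        Smaller beta -> more skewed (more heterogeneous) client label distributions.
        """
        rng = np.random.default_rng(seed)
        num_classes = int(labels.max()) + 1
        client_indices = [[] for _ in range(num_clients)]
        for c in range(num_classes):
            idx = np.flatnonzero(labels == c)
            rng.shuffle(idx)
            # Fraction of class-c samples assigned to each client.
            props = rng.dirichlet(np.full(num_clients, beta))
            cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
            for client, part in enumerate(np.split(idx, cuts)):
                client_indices[client].extend(part.tolist())
        return client_indices

    # Example: 100 clients, beta = 0.5 (moderately non-IID), 10-class label set.
    labels = np.random.randint(0, 10, size=50_000)  # stand-in for real dataset labels
    parts = dirichlet_partition(labels, num_clients=100, beta=0.5)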
“…We use a linear warm-up learning-rate schedule for the first 20 rounds; after warm-up, the learning rate follows a cosine schedule initialized at 0.1. Other settings not mentioned follow Oh et al. [49]. For FedSup training, we set the number M of randomly sampled child models to 3 and apply in-place distillation.…”
Section: Experimental Settings
confidence: 99%
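A minimal sketch of the learning-rate schedule as quoted (linear warm-up over 20 rounds, then cosine, initialized at 0.1); the decay-to-zero endpoint, the per-round granularity, the round count in the example, and the function name are assumptions on our part:

    import math

    def lr_at_round(t, total_rounds, warmup_rounds=20, peak_lr=0.1):
        """Linear warm-up to peak_lr, then cosine decay over the remaining rounds."""
        if t < warmup_rounds:
            return peak_lr * (t + 1) / warmup_rounds
        progress = (t - warmup_rounds) / max(1, total_rounds - warmup_rounds)
        return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

    # Example: round 0 -> 0.005, round 19 -> 0.1 (peak), final round -> ~0.0.
    schedule = [lr_at_round(t, total_rounds=320) for t in range(320)]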