2022
DOI: 10.48550/arxiv.2205.10162
Preprint

AutoFedNLP: An efficient FedNLP framework

Abstract: Transformer-based pre-trained models have revolutionized NLP with superior performance and generality. Fine-tuning pre-trained models for downstream tasks often requires private data, for which federated learning is the de-facto approach (i.e., FedNLP). However, our measurements show that FedNLP is prohibitively slow due to the large model sizes and the resultant high network/computation cost. Towards practical FedNLP, we identify adapters, small bottleneck modules inserted at a variety of model layers, as the key building blocks.
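
The adapters the abstract names are small bottleneck modules in the style of Houlsby et al. The sketch below is a minimal, hypothetical PyTorch rendering of one such module, not code from the paper; the class name and dimensions are illustrative assumptions. It shows why training and exchanging only adapters, rather than the full model, shrinks the network/computation cost the abstract measures.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    residual. Default sizes are illustrative for a BERT-base-sized
    backbone, not values taken from the paper."""

    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: frozen backbone output plus a small
        # learned correction, so only the adapter needs training.
        return x + self.up(self.act(self.down(x)))

# In federated fine-tuning, only adapter parameters would be trained
# and exchanged per round, so traffic scales with the adapter size.
adapter = Adapter()
print(sum(p.numel() for p in adapter.parameters()))  # ~99k vs ~110M for BERT-base
```

Under these assumptions, each federated round ships on the order of 10^5 parameters per inserted adapter instead of the full pre-trained model, which is the efficiency lever the framework builds on.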

Cited by 0 publications
References 46 publications