2022 IEEE International Conference on Pervasive Computing and Communications Workshops and Other Affiliated Events (PerCom Workshops), 2022
DOI: 10.1109/percomworkshops53856.2022.9767250
TinyFedTL: Federated Transfer Learning on Ubiquitous Tiny IoT Devices

Cited by 34 publications (21 citation statements: 0 supporting, 21 mentioning, 0 contrasting)
References 3 publications
“…1) robust to laggards or client disconnections [190]; 2) achieves similar accuracy across all devices [190]; 3) includes device-specific model pruning to improve communication and training cost [192], [194]; 4) uses transfer learning or fine-tuning for local model updates to save memory and build personalized models [195], [197]; 5) uses knowledge distillation to aggregate class probabilities instead of weights [197].…”
Section: B Federated Learning (mentioning)
confidence: 99%
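To make the last point in the quoted list concrete, the following C++ sketch contrasts plain weight averaging with a distillation-style aggregation that averages the class probabilities clients report on a shared sample. It is an illustrative sketch only; the function names and the equal-weighting of clients are assumptions, not code from any of the cited frameworks.

```cpp
// Sketch contrasting FedAvg-style weight averaging with a
// distillation-style aggregation of per-class probabilities.
// Function names and the equal-weight assumption are illustrative.
#include <cstddef>
#include <vector>

// FedAvg-style: element-wise average of client weight vectors.
std::vector<float> average_weights(const std::vector<std::vector<float>>& client_weights) {
    std::vector<float> avg(client_weights[0].size(), 0.0f);
    for (const auto& w : client_weights)
        for (size_t i = 0; i < w.size(); ++i)
            avg[i] += w[i] / static_cast<float>(client_weights.size());
    return avg;
}

// Distillation-style: average each client's predicted class
// probabilities on a shared public sample; the averaged soft label
// is what the server aggregates instead of model weights.
std::vector<float> average_class_probabilities(const std::vector<std::vector<float>>& client_probs) {
    std::vector<float> soft_label(client_probs[0].size(), 0.0f);
    for (const auto& p : client_probs)
        for (size_t c = 0; c < p.size(); ++c)
            soft_label[c] += p[c] / static_cast<float>(client_probs.size());
    return soft_label;
}
```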
“…5) Client Hardware and Supported Languages: Finally, the FL frameworks must support a wide variety of clients with different processors and operating languages. Among the different frameworks, Flower [190], PruneFL [194], and TinyFedTL [195] were tested on microcontrollers, supporting Python, Java, and C++.…”
Section: B Federated Learning (mentioning)
confidence: 99%
“…While TinyML can allow local execution of the models, FL can provide means for exploiting cross-device knowledge to improve the prediction model itself. To the best of our knowledge, the only attempt made to combine these two is in [26] and this is still an open area for future research.…”
Section: B TinyML for Federated Learning (mentioning)
confidence: 99%
“…This procedure reduces the computational resources required to train a new model. In [89], the authors presented a method named federated transfer learning on tiny devices (TinyFedTL), whereby they implemented their own fully connected layer inference and backpropagation update between Arduino Nano 33 BLE Sense microcontrollers and a local server. As a result, they managed to train an ML model without sending raw data to the server; only the weights and bias data had to be sent between the client nodes and the server.…”
Section: Emerging Techniques of TinyML (mentioning)
confidence: 99%
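As a rough illustration of the on-device update described in the quote above, the sketch below implements a single fully connected layer with forward inference, one cross-entropy/SGD backpropagation step, and a routine that packs only the weights and biases for transmission to a server. It is a minimal sketch under assumed layer sizes, learning rate, and payload layout, not the TinyFedTL authors' implementation.

```cpp
// Minimal sketch of an on-device fully connected layer with forward
// inference and a single SGD backprop step, as in a TinyFedTL-style
// client. Sizes, learning rate, and payload layout are illustrative
// assumptions, not the authors' code.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct DenseLayer {
    size_t in_dim, out_dim;
    std::vector<float> w;   // out_dim x in_dim, row-major
    std::vector<float> b;   // out_dim

    DenseLayer(size_t in, size_t out)
        : in_dim(in), out_dim(out), w(in * out, 0.01f), b(out, 0.0f) {}

    // Forward pass: y = softmax(W x + b), a classification head.
    std::vector<float> forward(const std::vector<float>& x) const {
        std::vector<float> y(out_dim, 0.0f);
        for (size_t o = 0; o < out_dim; ++o) {
            float z = b[o];
            for (size_t i = 0; i < in_dim; ++i) z += w[o * in_dim + i] * x[i];
            y[o] = z;
        }
        float m = y[0];
        for (float v : y) m = std::max(m, v);
        float sum = 0.0f;
        for (float& v : y) { v = std::exp(v - m); sum += v; }
        for (float& v : y) v /= sum;
        return y;
    }

    // One SGD step for cross-entropy loss with a one-hot label:
    // dL/dz = y - target when y = softmax(z).
    void backward(const std::vector<float>& x, const std::vector<float>& y,
                  size_t label, float lr) {
        for (size_t o = 0; o < out_dim; ++o) {
            float grad_z = y[o] - (o == label ? 1.0f : 0.0f);
            for (size_t i = 0; i < in_dim; ++i)
                w[o * in_dim + i] -= lr * grad_z * x[i];
            b[o] -= lr * grad_z;
        }
    }

    // Pack only weights and biases for transmission to the server;
    // raw training data never leaves the device.
    std::vector<float> serialize() const {
        std::vector<float> payload(w);
        payload.insert(payload.end(), b.begin(), b.end());
        return payload;
    }
};
```

A client would run forward() on locally extracted features, call backward() on each labeled example, and periodically send the output of serialize() to the server, matching the quoted description in which only weights and biases, never raw data, cross the network.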