IEEE INFOCOM 2020 - IEEE Conference on Computer Communications
DOI: 10.1109/infocom41043.2020.9155414

Enabling Execution Assurance of Federated Learning at Untrusted Participants

Cited by 68 publications (22 citation statements)
References 20 publications

“…DarkneTZ's evaluation showed no more than 10% overhead in CPU, memory, and energy on edge-like devices, demonstrating its suitability for client-side model updates in FL. In an orthogonal direction, several works leveraged clients' TEEs for verifying the integrity of local model training [10,75], but did not consider privacy. Considering a broader range of attacks (e.g., DRAs and PIAs), it is essential to protect all layers instead of the last layers only, something that PPFL does.…”
Section: Privacy-Preserving ML Using TEEs (mentioning)
confidence: 99%
“…We will briefly review recent related work in two categories: (i) studies on either federated learning (FedL) (e.g., [5,19,20,23,43,46]) or few-shot learning (FSL) (e.g., [9,16,22,24,34-36,38]), or on both [3]; and (ii) studies proposing similar ideas of minimizing model divergence to better learn individual models or an ensemble model.…”
Section: Related Work (mentioning)
confidence: 99%
“…They proposed a control algorithm that determines the best frequency of global aggregation with which computation and communication resources at the edge can be used efficiently in federated learning. Zhang et al. [27] proposed building trustworthy federated learning systems using trusted execution environments (TEEs). Their main focus was to assure that the local training on the client side is done correctly.…”
Section: Related Work (mentioning)
confidence: 99%
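
To make the idea in the last excerpt concrete, the sketch below shows a toy federated-averaging round in which each client's model update carries an integrity tag that stands in for a TEE-produced attestation, and the server aggregates only updates whose tags verify. This is a minimal illustration under the assumption of a symmetric key pre-provisioned to each client's trusted environment; the names attest, client_round, and server_aggregate are hypothetical and it is not the protocol of Zhang et al. [27] or of the cited paper.

import hmac, hashlib
import numpy as np

SHARED_KEY = b"demo-key"  # stand-in for a key provisioned to the client's TEE (assumption)

def attest(update: np.ndarray) -> bytes:
    # Toy "attestation": a MAC over the serialized update, as a TEE might emit.
    return hmac.new(SHARED_KEY, update.tobytes(), hashlib.sha256).digest()

def client_round(global_model: np.ndarray, local_grad: np.ndarray, lr: float = 0.1):
    # One toy local SGD step, then produce the update together with its tag.
    update = global_model - lr * local_grad
    return update, attest(update)

def server_aggregate(global_model: np.ndarray, client_results):
    # Keep only updates whose tags verify, then average them (FedAvg-style).
    verified = [u for u, tag in client_results
                if hmac.compare_digest(tag, attest(u))]
    return np.mean(verified, axis=0) if verified else global_model

model = np.zeros(4)
results = [client_round(model, np.random.randn(4)) for _ in range(3)]
model = server_aggregate(model, results)

A real system would replace the shared-key MAC with remote attestation of the enclave that ran the training loop, which is the part these works delegate to the TEE.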