2022
DOI: 10.48550/arxiv.2206.12100
Preprint

zPROBE: Zero Peek Robustness Checks for Federated Learning

Abstract: Privacy-preserving federated learning allows multiple users to jointly train a model under the coordination of a central server. The server only learns the final aggregation result, thereby preventing leakage of the users' (private) training data from the individual model updates. However, keeping the individual updates private allows malicious users to perform Byzantine attacks and degrade the model accuracy without being detected. The best existing defenses against Byzantine workers rely on robust rank-based statisti…
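The setting in the abstract can be made concrete with a small sketch. The Python example below is purely illustrative (it is not the zPROBE protocol): it shows additive pairwise masking for secure aggregation, where the server recovers only the sum of the updates because the masks cancel, and the last lines hint at why rank-based robust statistics such as the coordinate-wise median are at odds with keeping individual updates hidden. All names and the masking scheme are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_masks(num_users, dim, seed=0):
    """Illustrative pairwise masks: user i adds m[(i, j)] for j > i and
    subtracts m[(j, i)] for j < i, so every mask cancels in the sum."""
    mask_rng = np.random.default_rng(seed)
    return {(i, j): mask_rng.normal(size=dim)
            for i in range(num_users) for j in range(i + 1, num_users)}

def masked_update(i, update, masks, num_users):
    """What user i sends: the true update plus/minus shared pairwise masks."""
    out = update.copy()
    for j in range(num_users):
        if j == i:
            continue
        key = (i, j) if i < j else (j, i)
        out += masks[key] if i < j else -masks[key]
    return out

num_users, dim = 4, 3
updates = [rng.normal(size=dim) for _ in range(num_users)]
masks = pairwise_masks(num_users, dim)
server_view = [masked_update(i, u, masks, num_users) for i, u in enumerate(updates)]

# The server only learns the aggregate: the masks cancel pairwise in the sum,
# while each individual masked update looks like noise.
assert np.allclose(sum(server_view), sum(updates))

# A rank-based Byzantine defense (e.g. the coordinate-wise median) needs the
# individual updates in the clear -- exactly what the masking hides. This is
# the tension the abstract describes.
robust_agg = np.median(np.stack(updates), axis=0)
```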

Cited by 1 publication (1 citation statement)
References 23 publications

“…In federated learning scenarios, ZKP can validate the legitimacy of data circulation and manipulation. For example, Ghodsi Z. et al. proposed generating proofs for local machine learning processes [16]. They encode machine learning protocols into a circuit program.…”
Section: Zero-Knowledge Proofs and Zero-Knowledge Virtual Machine
Confidence: 99%
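The citation statement describes proving that a local training step was carried out correctly by encoding the computation as a circuit. A real system would use a ZKP toolchain for this; the sketch below is only a simplified stand-in that pairs a hash commitment with naive re-execution to illustrate the "commit, then verify the claimed computation" flow. It provides no zero-knowledge or succinctness guarantees, and all names are hypothetical.

```python
import hashlib
import json

def commit(update, nonce):
    """Toy hash commitment to a local model update (a stand-in for the
    commitments used in ZKP systems; not zero-knowledge once opened)."""
    payload = json.dumps({"update": update, "nonce": nonce}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def local_step(weights, gradient, lr=0.1):
    """The computation being attested: one SGD step on a toy linear model."""
    return [w - lr * g for w, g in zip(weights, gradient)]

# Client side: run the local step and commit to the result.
weights, gradient, nonce = [1.0, -2.0], [0.5, 0.25], "r4nd0m"
update = local_step(weights, gradient)
c = commit(update, nonce)

# Verifier side: in a real ZKP system a circuit proof would replace this
# naive re-execution; here we simply recompute and check the commitment.
assert commit(local_step(weights, gradient), nonce) == c
```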