IEEE INFOCOM 2019 - IEEE Conference on Computer Communications
DOI: 10.1109/infocom.2019.8737416

Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning

Abstract: Federated learning, i.e., a mobile edge computing framework for deep learning, is a recent advance in privacy-preserving machine learning, where the model is trained in a decentralized manner by the clients, i.e., data curators, preventing the server from directly accessing those private data from the clients. This learning mechanism significantly challenges attacks from the server side. Although the state-of-the-art attacking techniques that incorporate the advances of generative adversarial networks (GANs) …
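As a concrete illustration of the training mechanism the abstract describes, the sketch below implements a minimal federated-averaging (FedAvg) loop in plain NumPy. This is an illustrative outline under simplified assumptions (a linear model, synthetic client shards, and a hypothetical `local_sgd` helper), not the paper's implementation; its only purpose is to show that the server aggregates parameters and never touches raw client data.

```python
import numpy as np

# Minimal FedAvg sketch (illustrative; not the paper's implementation).
# Each client trains locally on private data; the server only ever sees
# model parameters, never the raw data itself.

def local_sgd(weights, X, y, lr=0.1, epochs=5):
    """One client's local update: plain linear-regression gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, client_data):
    """Server aggregates client models, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_sgd(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients, each holding a private shard
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)  # approaches true_w without the server seeing any raw data
```

The attack surface the paper studies is exactly the channel this sketch exposes: the server observes each client's returned parameters, which is what makes the quoted update-based inference attacks below possible.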

Cited by 690 publications (350 citation statements: 1 supporting, 348 mentioning, 1 contrasting)
References 18 publications
“…We observe that Lemma 2 is a direct consequence of the solution structure derived in (28). Hence, we conclude the proof.…”
Section: B. Stackelberg Equilibrium: Algorithm and Solution Approach (supporting)
Confidence: 70%
“…The idea in this work was to adapt the communication period such that it minimizes the optimization error at each wall-clock time. Interestingly, some of the latest works, such as [28], have studied and demonstrated the privacy risks of collaborative learning mechanisms such as FL.…”
Section: Related Work (mentioning)
Confidence: 99%
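To make the communication-period idea in the quote above concrete, here is a minimal local-SGD sketch in which workers synchronize every `tau` steps and `tau` is adapted as the loss falls. The square-root schedule and all names (`loss_and_grad`, `shards`) are illustrative assumptions, not the cited work's exact rule.

```python
import numpy as np

# Local SGD with an adaptive communication period (illustrative sketch).
# Workers take `tau` local steps between synchronizations; `tau` shrinks
# as the loss falls, trading communication cost against error floor.

def loss_and_grad(w, X, y):
    r = X @ w - y
    return (r @ r) / len(y), 2 * X.T @ r / len(y)

rng = np.random.default_rng(1)
true_w = np.array([1.5, -0.5])
shards = []
for _ in range(4):  # four workers, each with a private shard
    X = rng.normal(size=(64, 2))
    shards.append((X, X @ true_w + 0.05 * rng.normal(size=64)))

w_global, tau, lr = np.zeros(2), 16, 0.05
loss0 = np.mean([loss_and_grad(w_global, X, y)[0] for X, y in shards])
for _ in range(30):
    local_models = []
    for X, y in shards:
        w = w_global.copy()
        for _ in range(tau):                 # tau local steps, no communication
            w -= lr * loss_and_grad(w, X, y)[1]
        local_models.append(w)
    w_global = np.mean(local_models, axis=0)  # synchronize by averaging
    loss = np.mean([loss_and_grad(w_global, X, y)[0] for X, y in shards])
    tau = max(1, int(round(16 * np.sqrt(loss / loss0))))  # adapt the period
print(w_global, tau)
```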
“…Information leakage: By definition, FL systems avoid sharing healthcare data among participating institutions. However, the shared information may still indirectly expose private data used for local training, e.g., via model inversion 60 of the model updates, via the gradients themselves 61 , or via adversarial attacks 62,63 . FL differs from traditional training insofar as the training process is exposed to multiple parties, thereby increasing the risk of leakage via reverse-engineering if adversaries can observe model changes over time, observe specific model updates (i.e., a single institution's update), or manipulate the model (e.g., induce additional memorisation by others through gradient-ascent-style attacks).…”
Section: Technical Considerations (mentioning)
Confidence: 99%
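The gradient-channel leakage this statement mentions can be demonstrated exactly for a fully connected layer: since dL/dW = (dL/dz) xᵀ and dL/db = dL/dz, a shared gradient reveals the private input x. The sketch below is a minimal, self-contained illustration of that identity, not any particular system's attack.

```python
import numpy as np

# Input leakage from shared gradients (illustrative demonstration).
# For a linear layer z = W x + b, the chain rule gives
#   dL/dW = (dL/dz) x^T   and   dL/db = dL/dz,
# so any row i with (dL/db)_i != 0 reveals x exactly:
#   x = (dL/dW)[i] / (dL/db)[i].

rng = np.random.default_rng(0)
x = rng.normal(size=5)             # a client's private input
W = rng.normal(size=(3, 5))
b = rng.normal(size=3)

z = W @ x + b
dL_dz = z - rng.normal(size=3)     # upstream gradient from some loss (stand-in)

dL_dW = np.outer(dL_dz, x)         # what the client would share in an update
dL_db = dL_dz

i = np.argmax(np.abs(dL_db))       # pick a row with a nonzero bias gradient
x_recovered = dL_dW[i] / dL_db[i]
print(np.allclose(x, x_recovered))  # True: the shared gradient leaks the input
```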
“…Future research should thus seek to explore the application of DLT-based federated learning to more complex AI models and investigate ways to reduce the induced performance overhead in real-world application scenarios. Furthermore, despite increased confidentiality, research has also shown that federated learning is potentially vulnerable to inference attacks, whereby an adversary can aim to extract information about private training data by querying the AI model multiple times (Melis et al. 2019; Wang et al. 2019). In addition to employing DLT for preserving training data provenance and AI model integrity, future research should therefore also explore how DLT could help prevent inference attacks on federated learning networks.…”
Section: DLT-Based Federated Learning (mentioning)
Confidence: 99%
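One simple instance of the inference attacks mentioned above is loss-threshold membership inference: because models fit their training members more tightly, an unusually low per-sample loss suggests membership. The sketch below is an illustrative toy (ridge regression on synthetic data, with a hypothetical `fit` helper), not the attacks of Melis et al. or Wang et al.

```python
import numpy as np

# Loss-threshold membership inference (illustrative sketch).
# Intuition: models fit their training data more tightly, so a low
# per-sample loss hints that the sample was a training member.

rng = np.random.default_rng(0)
true_w = rng.normal(size=10)

def fit(X, y, lam=1e-3):
    """Ridge regression stand-in for the attacked model."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

X_train = rng.normal(size=(30, 10))   # few samples -> overfit regime
y_train = X_train @ true_w + 0.5 * rng.normal(size=30)
X_out = rng.normal(size=(30, 10))     # non-members from the same distribution
y_out = X_out @ true_w + 0.5 * rng.normal(size=30)

w = fit(X_train, y_train)

def losses(X, y):
    return (X @ w - y) ** 2

thresh = np.median(np.concatenate([losses(X_train, y_train),
                                   losses(X_out, y_out)]))
guess_member = lambda X, y: losses(X, y) < thresh
acc = 0.5 * (guess_member(X_train, y_train).mean()
             + (~guess_member(X_out, y_out)).mean())
print(f"attack accuracy: {acc:.2f}")  # typically above 0.5: membership leaks
```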