2020
DOI: 10.48550/arxiv.2012.02670
Preprint

Unleashing the Tiger: Inference Attacks on Split Learning

Abstract: We investigate the security of split learning, a novel collaborative machine learning framework that enables peak performance while requiring minimal resource consumption. In this paper, we make explicit the vulnerabilities of the protocol and demonstrate its inherent insecurity by introducing general attack strategies targeting the reconstruction of clients' private training sets. More prominently, we demonstrate that a malicious server can actively hijack the learning process of the distributed model and bring it…
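To ground the attacks discussed below, here is a minimal sketch of one split learning training step, written in PyTorch with illustrative layer sizes and names (client_net, server_net, and training_step are not from the paper). It assumes the common label-sharing variant: the client computes activations up to the cut layer, the server finishes the forward pass, and only the cut-layer gradient crosses back over the network.

import torch
import torch.nn as nn

# Client holds the first layers and the raw data; server holds the rest.
client_net = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
server_net = nn.Sequential(nn.Linear(256, 10))
client_opt = torch.optim.SGD(client_net.parameters(), lr=0.1)
server_opt = torch.optim.SGD(server_net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def training_step(x, y):
    # Client: forward up to the cut layer; the "smashed data" is sent out.
    smashed = client_net(x)
    sent = smashed.detach().requires_grad_(True)  # crosses the network boundary

    # Server: finish the forward pass, compute the loss, update its layers,
    # and return only the gradient at the cut layer.
    loss = loss_fn(server_net(sent), y)
    server_opt.zero_grad()
    loss.backward()
    server_opt.step()

    # Client: resume backpropagation from the gradient the server returned.
    client_opt.zero_grad()
    smashed.backward(sent.grad)
    client_opt.step()
    return loss.item()

x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
training_step(x, y)

The property the paper exploits is visible here: the client applies whatever gradient the server returns, with no way to verify that it was derived from the agreed-upon loss.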

Cited by 5 publications (9 citation statements) | References 16 publications
“…In this paper, we presented SplitGuard, a method for SplitNN clients to detect if they are being targeted by a training-hijacking attack [11] or not. We described the theoretical foundations underlying SplitGuard, experimentally evaluated its effectiveness, and discussed in depth many issues related to its use.…”
Section: Discussion (mentioning)
confidence: 99%
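The excerpt does not spell out SplitGuard's mechanism, but its core intuition (per the SplitGuard paper) can be hedged into a short sketch: the client occasionally trains on batches with randomized ("fake") labels and checks whether the gradients the server sends back actually depend on those labels. The score and threshold below are illustrative stand-ins, not SplitGuard's actual formula.

import torch
import torch.nn.functional as F

def suspicion_score(fake_grads, regular_grads):
    # Cosine similarity between the average cut-layer gradient of
    # fake-label batches and that of regular batches. Values near 1
    # mean the returned gradients barely depend on the labels, which
    # is consistent with a training-hijacking server.
    f = torch.stack([g.flatten() for g in fake_grads]).mean(dim=0)
    r = torch.stack([g.flatten() for g in regular_grads]).mean(dim=0)
    return F.cosine_similarity(f, r, dim=0).item()

# Stand-in values: in practice these are the gradients the server
# returned for regular batches and for batches with randomized labels.
regular = [torch.randn(32, 256) for _ in range(10)]
fake = [torch.randn(32, 256) for _ in range(10)]
if suspicion_score(fake, regular) > 0.9:  # illustrative threshold
    print("gradients ignore the labels: possible training hijacking")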
“…when success in the original task implies failure in the new task). We demonstrate using three commonly used benchmark datasets (MNIST [12], Fashion-MNIST [13], and CIFAR10 [14]) that SplitGuard effectively detects and mitigates the only training-hijacking attack proposed so far [11]. We further argue that it is generalizable to any such training-hijacking attack.…”
Section: Introduction (mentioning)
confidence: 87%
“…In the first in-depth security analysis of SplitNN, Pasquini et al [18] showed that it is possible for an honest-but-curious server to obtain the clients' private training data during the training phase. Their attack relies on the server's ability to manipulate the client during the training process by propagating back loss values that are unrelated to the original task, but aid the server in its pursuit.…”
Section: Related Work (mentioning)
confidence: 99%
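The manipulation described here can be illustrated with a deliberately simplified sketch: the malicious server ignores the labels and the task loss, and instead backpropagates an objective of its own choosing through the cut layer. The discriminator and adversarial loss below are hypothetical stand-ins; the paper's full feature-space hijacking pipeline is considerably more involved.

import torch
import torch.nn as nn

discriminator = nn.Sequential(nn.Linear(256, 1))  # assumed adversarial component

def malicious_server_step(sent):
    # `sent` is the smashed data received from the client (requires_grad=True).
    # The labels and the agreed-upon task are ignored entirely; the server
    # backpropagates a hypothetical attacker objective instead.
    adv_loss = -discriminator(sent).mean()
    adv_loss.backward()
    return sent.grad  # gradient handed back to the unsuspecting client

sent = torch.randn(32, 256, requires_grad=True)  # smashed data from a client
grad_for_client = malicious_server_step(sent)

From the client's side these gradients are indistinguishable by inspection from honest ones, which is exactly the blind spot SplitGuard probes for.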