Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
DOI: 10.1145/3319535.3354261
Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment

Cited by 154 publications (141 citation statements: 1 supporting, 140 mentioning, 0 contrasting). References 29 publications.
“…The goal of our protocol is to protect the privacy of each party's gradient while guaranteeing the integrity of aggregation. However, we do not consider an adversary that makes queries to the trained model to launch black-box statistical attacks [22], [23], [24], [25], [26], since it is known to be hard to prevent leakage from the output of the functionality implemented by cryptographic protocols. Moreover, such attacks may not precisely infer the sensitive information of honest parties, especially for deep neural networks that generalize well [6].…”
Section: Adversarial Model (mentioning)
confidence: 99%
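To make the excluded threat concrete: a black-box statistical attack needs nothing beyond query access to the deployed model. A minimal sketch, assuming a hypothetical `model_api` that returns one confidence vector per input; the names and the threshold statistic below are illustrative, not details from the cited protocol paper.

```python
# Minimal sketch of the black-box access such statistical attacks assume:
# the adversary only observes the prediction vectors the trained model
# returns. `model_api` and `probe_inputs` are hypothetical placeholders.
import numpy as np

def collect_confidences(model_api, probe_inputs):
    # One query per probe input; these confidence vectors are the only
    # signal available to the attacker.
    return np.stack([model_api(x) for x in probe_inputs])

def max_confidence_scores(confidences):
    # A simple statistic used by threshold-style inference attacks. On
    # models that generalize well, member and non-member scores look
    # alike, which is why such attacks may infer little.
    return confidences.max(axis=1)
```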
“…
• Exploration attacks (Sethi and Kantardzic, 2018);
• Model extraction attacks (Correia-Silva et al., 2018; Kesarwani et al., 2018; Joshi and Tammana, 2019; Reith et al., 2019);
• Model inversion attacks (Yang et al., 2019);
• Model-reuse attacks (Ji et al., 2018);
• Trojan attacks (Liu et al., 2018).…”
Section: Attacks on Cloud-Hosted Machine Learning Models: Thematic Analysis (mentioning)
confidence: 99%
“…Second, we use a budget of $n$ queries to the model $F_c$, that is, $|D_{\text{query}}| = n$. We assume that we can get all the predicted values (if not, it is feasible to use the cropping method mentioned in Reference [9]) to form the data set $\{(F_c(x), \hat{F}(x)) \mid x \in D_{\text{query}}\}$. Analogously, we fix the sampling layer on $G_\theta$ and train the weight $w$ between the first and second layers.…”
Section: Experimentation (mentioning)
confidence: 99%
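The step above queries $F_c$ within the budget, pairs each output with the local prediction $\hat{F}(x)$, and then updates a single weight matrix of $G_\theta$. A minimal PyTorch sketch follows; the helper names (`f_c`, `f_hat`, `g_theta`), the MSE objective pairing the two predictions, and the optimizer choice are assumptions for illustration, not details from the citing paper.

```python
# Sketch of the budgeted-query step under stated assumptions: f_c is the
# black-box target model (n queries allowed), f_hat the local model, and
# g_theta the network whose first-layer weight w is trained while the
# fixed sampling layer and all other weights stay frozen.
import torch
import torch.nn as nn

def collect_query_pairs(f_c, f_hat, d_query):
    # |D_query| = n: each element costs exactly one query to F_c.
    with torch.no_grad():
        return [(f_c(x), f_hat(x)) for x in d_query]

def train_first_layer_weight(g_theta, pairs, epochs=10, lr=1e-3):
    # Freeze every parameter, then unfreeze only the first linear
    # layer's weight w (between the first and second layers).
    for p in g_theta.parameters():
        p.requires_grad = False
    first_linear = next(m for m in g_theta.modules()
                        if isinstance(m, nn.Linear))
    first_linear.weight.requires_grad = True

    opt = torch.optim.Adam([first_linear.weight], lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for y_c, y_hat in pairs:
            opt.zero_grad()
            # Assumed objective: fed the local prediction F_hat(x),
            # g_theta should reproduce the target's prediction F_c(x).
            loss = loss_fn(g_theta(y_hat), y_c)
            loss.backward()
            opt.step()
```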
“…The auxiliary set $D_{\text{aux}}$ we use, with $|D_{\text{aux}}| = q$, is drawn from the test data distribution. For the previous MIA [9] in the third row, we use the same settings. MIA, model inversion attack.…”
Section: Experimentation (mentioning)
confidence: 99%