2022
DOI: 10.1049/cmu2.12507
Privacy‐preserving generative framework for images against membership inference attacks

Abstract: Machine learning has become an integral part of modern intelligent systems in all aspects of life. Membership inference attacks (MIAs), a significant class of attacks on machine learning models, jeopardize the privacy of these intelligent systems. Previous work on defending against MIAs concentrates on perturbing model outputs or tampering with the training process. However, data and model reuse are common in intelligent systems, which limits the scalability of these earlier defences. This paper proposes a new privacy-pr…

Cited by 7 publications (2 citation statements)
References 35 publications
“…However, the method can only share noised gradients but not the whole noised graph data, which could perform less well in SN data utility preservation. Another paper 37 proposes a PPNE framework that can generate a privacy-preserving network embedding against private link inference attacks with an innovative quantification of privacy gain and utility loss. The research 38 develops a private link protection model SLPGE with adversarially regularized variational graph autoencoder (ARVGA) to defend against link inference attacks, intending to reduce the private information encoded in the graph embedding.…”
Section: Privacy-Preserving Graph Publishing (PPGP)
confidence: 99%
“…PPNE 37 : The model generates a privacy-preserving network embedding of original SN data to defend against link inference attacks with an innovative quantification of privacy and utility loss.…”
Section: Baseline Models
confidence: 99%
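The citation statements above describe PPNE as quantifying privacy gain against utility loss when producing a privacy-preserving network embedding. The following is a minimal illustrative sketch of that kind of trade-off measurement, not the actual PPNE algorithm: all function names, the similarity-based link-inference score, and the Gaussian perturbation are hypothetical choices made for illustration.

```python
import math
import random

# Illustrative sketch (NOT the PPNE algorithm): measure how perturbing
# node embeddings trades utility loss against privacy gain for a
# similarity-based link-inference attacker. All names are hypothetical.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def link_score(emb, edge):
    # Attacker's inference score: higher similarity -> edge more inferable.
    u, v = edge
    return dot(emb[u], emb[v])

def utility_loss(original, perturbed):
    # Average L2 distance between original and perturbed node embeddings.
    total = 0.0
    for node in original:
        total += math.sqrt(sum((a - b) ** 2
                               for a, b in zip(original[node], perturbed[node])))
    return total / len(original)

def privacy_gain(original, perturbed, private_edges):
    # Drop in the attacker's average score on the private (sensitive) links.
    before = sum(link_score(original, e) for e in private_edges) / len(private_edges)
    after = sum(link_score(perturbed, e) for e in private_edges) / len(private_edges)
    return before - after

# Toy example: 3 nodes with 4-dimensional embeddings, one private edge (0, 1).
random.seed(0)
original = {n: [random.gauss(0, 1) for _ in range(4)] for n in range(3)}
perturbed = {n: [x + random.gauss(0, 0.3) for x in v] for n, v in original.items()}

print(utility_loss(original, perturbed))
print(privacy_gain(original, perturbed, [(0, 1)]))
```

A real defence of this kind would search for a perturbation that maximizes privacy gain subject to a bound on utility loss; this sketch only shows how the two quantities can be scored.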