2019
DOI: 10.1145/3369816
Privacy Adversarial Network

Abstract: The remarkable success of machine learning has fostered a growing number of cloud-based intelligent services for mobile users. Such a service requires a user to send data, e.g., images, voice, and video, to the provider, which presents a serious challenge to user privacy. To address this, prior works either obfuscate the data, e.g., add noise and remove identity information, or send representations extracted from the data, e.g., anonymized features. They struggle to balance between the service utility and data privacy…
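The abstract describes learning a data representation that keeps the service task accurate while resisting attempts to recover private information. As a rough illustration of that adversarial-training pattern, here is a minimal PyTorch sketch; it is not the paper's actual model, and every dimension, layer shape, and the trade-off weight LAM below are assumptions:

    import torch
    import torch.nn as nn

    # All sizes are illustrative assumptions, not the paper's architecture.
    IN_DIM, FEAT_DIM, UTIL_CLASSES, PRIV_CLASSES = 128, 64, 10, 2

    encoder = nn.Sequential(nn.Linear(IN_DIM, FEAT_DIM), nn.ReLU())  # representation sent to the cloud
    utility = nn.Linear(FEAT_DIM, UTIL_CLASSES)    # service-task head (kept accurate)
    adversary = nn.Linear(FEAT_DIM, PRIV_CLASSES)  # simulated attacker recovering private attributes

    opt_main = torch.optim.Adam(
        list(encoder.parameters()) + list(utility.parameters()), lr=1e-3)
    opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()
    LAM = 0.5  # utility/privacy trade-off weight (assumed)

    def train_step(x, y_util, y_priv):
        # 1) Adversary step: learn to predict the private label
        #    from a frozen copy of the representation.
        z = encoder(x).detach()
        opt_adv.zero_grad()
        ce(adversary(z), y_priv).backward()
        opt_adv.step()

        # 2) Encoder + utility step: stay accurate on the service task
        #    while making the adversary's job harder (its loss is
        #    subtracted, i.e. maximized).
        z = encoder(x)
        loss = ce(utility(z), y_util) - LAM * ce(adversary(z), y_priv)
        opt_main.zero_grad()
        loss.backward()
        opt_main.step()

The sign flip on the adversary's loss is the essential design choice: the encoder is rewarded for representations the simulated attacker cannot exploit, which is how such methods trade utility against privacy.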


Cited by 24 publications (8 citation statements); references 31 publications.
“…(2015) IEEE Transactions on Computers, Computer Science; Libaque-Sáenz et al. (2020) Information & Management, Information Systems & Management; Liu et al. (2019) Proc.…”
Section: Methodology Development (mentioning)
Confidence: 99%
“…We also validate our method's effectiveness in privacy preservation. To the best of our knowledge, there are no techniques that can provide personalized and compositional privacy protection in federated learning, therefore, we only select 2 types of data privacy-preserving baselines [29] and compare our framework's overall protection performance on all attributes with them for a fair comparison. A detailed introduction of these baselines is listed below: Noise perturbation: We train the base FedRec and apply Gaussian noise N(0, σ²) to trained private user embeddings in the server.…”
Section: Model Effectiveness (mentioning)
Confidence: 99%
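The noise-perturbation baseline quoted above reduces to a single tensor operation. A minimal sketch, assuming PyTorch embeddings; the function name and the default sigma are illustrative, and the cited work's actual σ is not given here:

    import torch

    def perturb_embeddings(emb: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
        # Add element-wise Gaussian noise N(0, sigma^2) to the trained
        # user embeddings server-side; sigma = 0.1 is an assumed value.
        return emb + sigma * torch.randn_like(emb)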
“…Privacy-preserving FL against inference attacks. Secure multi-party computation (Danner and Jelasity 2015; Mohassel and Zhang 2017; Bonawitz et al. 2017; Melis et al. 2019), adversarial training (Oh, Fritz, and Schiele 2017; Wu et al. 2018; Madras et al. 2018; Pittaluga, Koppal, and Chakrabarti 2019; Liu et al. 2019; Kim et al. 2019), model compression (Zhu, Liu, and Han 2019), and differential privacy (DP) (Pathak, Rane, and Raj 2010; Shokri and Shmatikov 2015; Hamm, Cao, and Belkin 2016; McMahan et al. 2018; Geyer, Klein, and Nabi 2017; Wei et al. 2020) are the four typical privacy-preserving FL methods. For example, Bonawitz et al. (2017) design a secure multi-party aggregation for FL, where devices are required to encrypt their local models before uploading them to the server.…”
Section: Related Work (mentioning)
Confidence: 99%
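The secure-aggregation idea cited here (Bonawitz et al. 2017) rests on pairwise masks that cancel when the server sums the uploads, so the server learns only the aggregate. A toy NumPy sketch of that cancellation property, omitting the key agreement, encryption, and dropout handling of the real protocol:

    import numpy as np

    def masked_updates(updates, seed=0):
        # For each client pair (i, j) with i < j, draw one random mask
        # that client i adds and client j subtracts, so every mask
        # cancels in the server's sum. Real protocols derive these masks
        # via key agreement and tolerate client dropouts.
        rng = np.random.default_rng(seed)
        masked = [np.asarray(u, dtype=float).copy() for u in updates]
        n = len(masked)
        for i in range(n):
            for j in range(i + 1, n):
                mask = rng.normal(size=masked[0].shape)
                masked[i] += mask
                masked[j] -= mask
        return masked

    # The server never sees a raw update, yet the sum is exact.
    updates = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
    assert np.allclose(sum(masked_updates(updates)), sum(updates))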
“…To mitigate the issue, various existing privacy-preserving FL methods can be adopted/adapted, including multi-party computation (MPC) (Danner and Jelasity 2015; Mohassel and Zhang 2017; Bonawitz et al. 2017; Melis et al. 2019), adversarial training (AT) (Madras et al. 2018; Liu et al. 2019; Li et al. 2020a; Oh, Fritz, and Schiele 2017; Kim et al. 2019), model compression (MC) (Zhu, Liu, and Han 2019), and differential privacy (DP) (Pathak, Rane, and Raj 2010; Shokri and Shmatikov 2015; Hamm, Cao, and Belkin 2016; McMahan et al. 2018; Geyer, Klein, and Nabi 2017). However, these existing methods have key limitations, thus narrowing their applicability (see Table 1).…”
Section: Introduction (mentioning)
Confidence: 99%
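For the differential-privacy branch of this list, the usual recipe is clip-then-noise on each client update (the Gaussian mechanism, in the spirit of McMahan et al. 2018). A hedged sketch; the function name, clip bound, and noise multiplier are all illustrative, not values from the cited works:

    import torch

    def dp_sanitize(update: torch.Tensor, clip: float = 1.0,
                    noise_mult: float = 1.0) -> torch.Tensor:
        # Clip the update's L2 norm to `clip`, then add Gaussian noise
        # with std = noise_mult * clip. Both parameters are assumed.
        scale = torch.clamp(clip / (update.norm() + 1e-12), max=1.0)
        return update * scale + noise_mult * clip * torch.randn_like(update)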