2022
DOI: 10.32604/cmc.2022.019709

Towards Securing Machine Learning Models Against Membership Inference Attacks

Abstract: From fraud detection to speech recognition and price prediction, Machine Learning (ML) applications are manifold and can significantly improve many different areas. Nevertheless, machine learning models are vulnerable and exposed to various security and privacy attacks. Hence, these issues should be addressed when using ML models in order to preserve the security and privacy of the data involved. There is a need to secure ML models, especially during the training phase, to preserve the privacy of the training datasets …

Cited by 9 publications (3 citation statements)
References 26 publications

“…The regularization technique aims at preventing model overfitting, which is a key factor in the success of membership inference attacks (MIAs). Several solutions following this approach have been proposed, such as L2-norm regularization, data augmentation, and dropout [4], as well as adversarial regularization [5]. The regularization technique interferes not only with the output of target models but also with their internal parameters.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
“…Differential privacy (DP) is the standard for privacy-preserving statistical summaries [1]. Companies such as Microsoft [2], Google [3], and Apple [4], as well as government organizations such as the US Census Bureau [5], have successfully applied DP in machine learning [6,7] and data-sharing scenarios. The popularity of DP is due to its strong mathematical guarantees.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
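
As an illustration of the kind of privacy-preserving statistical summary the excerpt refers to, the following is a minimal sketch of the Laplace mechanism for releasing a count query under epsilon-differential privacy. The dataset, predicate, and epsilon value are made up for the example.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=np.random.default_rng(0)):
    """Release a noisy count; a count query has L1 sensitivity 1,
    so Laplace noise with scale 1/epsilon gives epsilon-DP."""
    true_count = sum(1 for record in data if predicate(record))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 52, 61, 38]
noisy = laplace_count(ages, lambda a: a >= 40, epsilon=0.5)
print(f"noisy count of records with age >= 40: {noisy:.2f}")
```
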
“…A model is trained with the best optimized hyperparameters under the protection of the Pairing-Based Cryptography (PBC) library, which uses multiplicative cyclic groups of prime order. Because of the membership inference attack discussed in [4], malicious users could use the plaintext gradients to train a shadow model and compromise the data security of others. Hence, we introduce homomorphic encryption, which allows calculations to be performed on encrypted data without decryption.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
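
The excerpt's point about computing on encrypted gradients can be illustrated with additively homomorphic encryption. The sketch below uses the python-paillier (phe) package rather than the PBC library named in the excerpt, and the gradient values are invented; it only shows that a server can sum ciphertexts it never decrypts, so individual updates stay hidden.

```python
from phe import paillier

# Key holder generates the keypair; only the private key can decrypt.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Two clients encrypt their local gradient values before sending them.
client_gradients = [0.12, -0.07]
encrypted = [public_key.encrypt(g) for g in client_gradients]

# The server adds ciphertexts directly, without seeing any plaintext update.
encrypted_sum = encrypted[0] + encrypted[1]

# Only the key holder recovers the aggregate.
print(private_key.decrypt(encrypted_sum))  # ~0.05
```
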