2021
DOI: 10.1007/s10994-021-05951-6

Protect privacy of deep classification networks by exploiting their generative power

Abstract: Research has shown that deep learning models are vulnerable to membership inference attacks, which aim to determine whether an example was in the model's training set. We propose a new framework to defend against this sort of attack. Our key insight is that if we retrain the original classifier on a new dataset that is independent of the original training set, while its elements are sampled from the same distribution, the retrained classifier will leak no information that cannot be inferred from the distribution…
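The abstract describes the defense only at a high level: release a model retrained on data that is independent of the original training set but drawn from the same distribution, which the original classifier's generative power is used to approximate. The sketch below illustrates that retraining loop under stated assumptions; `sample_from_learned_dist`, `make_fresh_clf`, and `train_step` are hypothetical placeholders, and the actual generative sampling procedure is the paper's contribution and is not reproduced here.

```python
def defend_by_retraining(original_clf, make_fresh_clf, sample_from_learned_dist,
                         train_step, n_samples: int, epochs: int = 10):
    """Illustrative sketch of the defense described in the abstract: replace the
    released model with one retrained on data that is independent of the original
    training examples but drawn from (an approximation of) the same distribution.

    All three callables are hypothetical stand-ins, not the paper's API.
    """
    # Draw a surrogate dataset from the distribution the original model learned,
    # so the retrained model never sees the true training examples.
    synthetic_x, synthetic_y = sample_from_learned_dist(original_clf, n_samples)

    retrained = make_fresh_clf()
    for _ in range(epochs):
        # Ordinary supervised training on the surrogate data only.
        train_step(retrained, synthetic_x, synthetic_y)

    # Release `retrained`; membership of the original examples should not be
    # inferable beyond what the underlying distribution itself reveals.
    return retrained
```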

Cited by 10 publications (4 citation statements)
References 21 publications (40 reference statements)
“…To balance the contributions of three losses, we intentionally tone down the weights of the direction and distribution losses. In the future, we will try some automatic methods [52], [53], [54] to further optimize their weights. Fig.…”
Section: B Loss Functions
confidence: 99%
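The statement above describes a fixed, manually chosen weighting in which the direction and distribution losses are intentionally toned down relative to the main loss. A minimal sketch of such a weighted sum follows; the specific weight values and loss names are hypothetical, since the cited work does not report them here.

```python
import torch

# Hypothetical weights: the main loss dominates while the direction and
# distribution losses are intentionally toned down.
W_MAIN, W_DIRECTION, W_DISTRIBUTION = 1.0, 0.1, 0.1

def total_loss(loss_main: torch.Tensor,
               loss_direction: torch.Tensor,
               loss_distribution: torch.Tensor) -> torch.Tensor:
    """Fixed weighted combination of the three losses."""
    return (W_MAIN * loss_main
            + W_DIRECTION * loss_direction
            + W_DISTRIBUTION * loss_distribution)
```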
“…However, this approach may not be effective when there is significant variation in the difficulty of learning across tasks [222]. Thus, various weighting strategies have been proposed and developed, such as uncertainty weights [230], gradient normalization [231], the dynamic weight average [227], the projecting conflicting gradient [232], impartial multitask learning [233], and random loss weighting [229].…”
Section: Multitask Learning
confidence: 99%
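Of the weighting strategies listed above, uncertainty weighting [230] is perhaps the simplest to illustrate: each task loss is scaled by a learned precision term, with a regularizer that keeps the learned variances from growing without bound. The module below is an illustrative sketch in that style, not code from the cited works.

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Homoscedastic-uncertainty loss weighting in the style of [230] (sketch).

    Each task keeps a learnable log-variance s_i; the combined loss is
    sum_i exp(-s_i) * L_i + s_i, so noisier tasks are down-weighted
    automatically while the +s_i term penalizes inflating the variance.
    """

    def __init__(self, num_tasks: int):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = torch.zeros((), device=self.log_vars.device)
        for loss_i, s_i in zip(task_losses, self.log_vars):
            total = total + torch.exp(-s_i) * loss_i + s_i
        return total
```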
“…Adaptive weights are used to dynamically adjust the weights assigned to each loss term in the total loss function during training. This is implemented using the GradNorm method [56] in order to address the issue of gradient pathology, where loss terms with higher derivatives tend to dominate the total gradient vector and negatively affect the accuracy of the solution.…”
Section: Training, Validation and Testing Data
confidence: 99%
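GradNorm [56] adapts the loss weights so that each weighted loss contributes a gradient of comparable magnitude at a shared layer, with tasks that have improved least (relative to their initial loss) receiving larger targets. The following is a simplified single-step sketch, assuming the task weights are a learnable 1-D tensor and `shared_params` is the parameter tensor of the last shared layer; it is not the cited implementation.

```python
import torch

def gradnorm_update(task_losses, weights, shared_params, initial_losses,
                    alpha: float = 1.5, lr_w: float = 0.025):
    """One GradNorm-style weight update (simplified sketch, not the paper's code).

    task_losses    : list of scalar task losses from the current batch
    weights        : 1-D leaf tensor of task weights with requires_grad=True
    shared_params  : parameter tensor of the last shared layer
    initial_losses : task losses recorded at the first training step
    """
    # Gradient norm of each *weighted* loss at the shared layer.
    grad_norms = torch.stack([
        torch.autograd.grad(w_i * loss_i, shared_params,
                            retain_graph=True, create_graph=True)[0].norm()
        for w_i, loss_i in zip(weights, task_losses)
    ])

    # Relative inverse training rates: slow-improving tasks get r_i > 1.
    loss_ratios = torch.stack([loss_i.detach() / l0
                               for loss_i, l0 in zip(task_losses, initial_losses)])
    r = loss_ratios / loss_ratios.mean()

    # Common target: average gradient norm, rebalanced by r_i ** alpha.
    target = (grad_norms.mean() * r ** alpha).detach()
    gradnorm_loss = (grad_norms - target).abs().sum()

    # Update only the weights here; the model itself is still trained on the
    # weighted task losses in the usual way, outside this function.
    w_grad = torch.autograd.grad(gradnorm_loss, weights, retain_graph=True)[0]
    with torch.no_grad():
        weights -= lr_w * w_grad
        weights.clamp_(min=1e-4)
        weights *= len(task_losses) / weights.sum()  # renormalize to sum to #tasks
    return weights
```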