Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
DOI: 10.1145/3319535.3354211

Privacy Risks of Securing Machine Learning Models against Adversarial Examples

Abstract: The arms race between attacks and defenses for machine learning models has come to a forefront in recent years, in both the security community and the privacy community. However, one big limitation of previous research is that the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other domain. In this paper, we take a step towards resolving this limitation by combining the two domains.
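The paper's central observation is that adversarially robust training can widen the confidence gap between training members and non-members, which membership inference attacks exploit. As a rough illustration of confidence-based membership inference (a sketch in that spirit, not the authors' exact attack), the snippet below thresholds the model's confidence in the true label; `model`, the PyTorch setting, and the `threshold` value are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def membership_score(model: torch.nn.Module,
                     x: torch.Tensor,
                     y: torch.Tensor) -> torch.Tensor:
    """Confidence the model assigns to the true label of each example.

    Models are typically more confident on data they were trained on,
    and the paper reports that robust training widens this gap, which
    is exactly what confidence-based membership inference exploits.
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    return probs.gather(1, y.unsqueeze(1)).squeeze(1)

def infer_membership(model, x, y, threshold: float = 0.9) -> torch.Tensor:
    # Predict "training member" when confidence exceeds the (assumed)
    # threshold; in practice the threshold is tuned, e.g. on shadow data.
    return membership_score(model, x, y) >= threshold
```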

Cited by 191 publications (159 citation statements)
References 40 publications
Citation statement types: 2 supporting, 157 mentioning, 0 contrasting

“…In a concurrent work, Song et al. [58] evaluate several different attacks that seek to extract membership information from robust models, showing that robustness can make a model more vulnerable to membership inference. Although the attacks that we present within our formal framework are similar to theirs, our experimental setup has a few major differences.…”
Section: Robustness · Citation type: mentioning
Confidence: 99%
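The citing statement above notes that robustness itself can amplify membership leakage: a robust model is trained to keep the loss low over an entire perturbation ball around each training point, so its worst-case (adversarial) loss can separate members from non-members even more sharply than clean confidence does. Below is a hedged sketch of that idea, again assuming a PyTorch classifier with inputs in [0, 1]; the PGD hyperparameters (`eps`, `alpha`, `steps`) are illustrative, and this is not the authors' exact attack.

```python
import torch
import torch.nn.functional as F

def worst_case_loss(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Approximate the worst-case cross-entropy loss of `model` within
    an L-infinity ball of radius `eps` around `x`, using PGD.

    Intuition: robust training drives this quantity down on training
    members, so an unusually low value suggests (x, y) was a member.
    """
    model.eval()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Gradient-ascent step, then project back into the L-inf ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
    with torch.no_grad():
        return F.cross_entropy(model(x_adv), y, reduction="none")
```

A lower returned loss for (x, y) is then taken as evidence of membership, e.g. by thresholding it the same way as the confidence score above.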
“…There have been some recent works analyzing the connection between membership inference and adversarial machine learning. Song et al. [48] and Mejia et al.…”
Section: Related Work · Citation type: mentioning
Confidence: 96%
“…Finally, an approach by Nasr et al. [47] consisted of using an adversarial algorithm to perform inference attacks against trained models. Song et al. [48] proposed two new methods that exploit the structural properties of adversarially trained models, thus showing the investigated inference defences to be ineffective.…”
Section: Related Work · Citation type: mentioning
Confidence: 99%