2023
DOI: 10.56553/popets-2023-0005

On the Privacy Risks of Deploying Recurrent Neural Networks in Machine Learning Models

Abstract: We study the privacy implications of training recurrent neural networks (RNNs) with sensitive training datasets. Considering membership inference attacks (MIAs)—which aim to infer whether or not specific data records have been used in training a given machine learning model—we provide empirical evidence that a neural network's architecture impacts its vulnerability to MIAs. In particular, we demonstrate that RNNs are subject to a higher attack accuracy than their feed-forward neural network (FFNN) counterparts. Addi…
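The threat model in the abstract, a membership inference attack, can be made concrete with a small sketch. Below is a minimal loss-threshold MIA in the spirit of Yeom et al.: the attacker predicts "member" whenever the target model's per-example loss falls below a calibrated threshold, exploiting the tendency of models to fit training records more tightly than unseen ones. The loss distributions, sample sizes, and midpoint calibration are illustrative assumptions, not this paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in per-example losses from a hypothetical target model: members
# (training records) tend to incur lower loss than non-members because
# the model has (over)fit them. These distributions are assumptions.
member_losses = rng.gamma(shape=2.0, scale=0.10, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.25, size=1000)

def infer_membership(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Predict 'member' wherever the loss falls below the threshold."""
    return losses < threshold

# Calibrate the threshold on attacker-side reference data; the midpoint
# of the two mean losses is a deliberately crude choice for illustration.
threshold = (member_losses.mean() + nonmember_losses.mean()) / 2.0

losses = np.concatenate([member_losses, nonmember_losses])
truth = np.concatenate([np.ones(1000, dtype=bool), np.zeros(1000, dtype=bool)])
preds = infer_membership(losses, threshold)

# Attack accuracy above 50% (random guessing) indicates membership leakage.
accuracy = float((preds == truth).mean())
print(f"attack accuracy: {accuracy:.2%}")
```

Read against the abstract, the paper's claim amounts to the member and non-member loss distributions separating more for an RNN than for a comparable FFNN, so an attack of this shape achieves higher accuracy against the RNN.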

Cited by 0 publications
References 33 publications
