Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security 2023
DOI: 10.1145/3605764.3623905
Information Leakage from Data Updates in Machine Learning Models

Tian Hui, Farhad Farokhi, Olga Ohrimenko

Abstract: In this paper we consider the setting where machine learning models are retrained on updated datasets in order to incorporate the most up-to-date information or reflect distribution shifts. We investigate whether one can infer information about these updates in the training data (e.g., changes to attribute values of records). Here, the adversary has access to snapshots of the machine learning model before and after the change in the dataset occurs. Contrary to the existing literature, we assume that an attribut…
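The setting the abstract describes, an adversary who sees model snapshots before and after a data update and compares them, can be illustrated with a minimal sketch. This is not the paper's attack: it assumes, purely for illustration, a synthetic dataset, an ordinary least-squares model standing in for "the model", and a single record whose attribute value changes before retraining; the adversary only observes the two fitted weight vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data (illustrative assumption: the adversary knows the
# model class, here ordinary least squares, but not the dataset update).
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=50)

def fit(X, y):
    # A least-squares fit stands in for one "snapshot" of the model.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

w_before = fit(X, y)

# Data update: one record's attribute value changes, then the model retrains.
X_after = X.copy()
X_after[7, 0] += 5.0
w_after = fit(X_after, y)

# The adversary compares the two snapshots; a nonzero shift in the weights
# leaks that (and roughly where) the training data changed.
delta = w_after - w_before
print("weight shift per attribute:", np.round(delta, 4))
```

Even this toy comparison shows a measurable weight shift from a single-record change, which is the kind of signal the paper's adversary exploits.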

Cited by 0 publications. References 11 publications.