2022
DOI: 10.1109/tpami.2022.3229593
Survey: Leakage and Privacy at Inference Time

Abstract: Leakage of data from publicly available Machine Learning (ML) models is an area of growing significance since commercial and government applications of ML can draw on multiple sources of data, potentially including users' and clients' sensitive data. We provide a comprehensive survey of contemporary advances on several fronts, covering involuntary data leakage which is natural to ML models, potential malicious leakage which is caused by privacy attacks, and currently available defence mechanisms. We focus on i…
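To make the kind of leakage the survey covers concrete, below is a minimal, hypothetical sketch of a confidence-based membership inference attack. The toy "model", the data, and all names here are assumptions for illustration only, not the survey's method: the model's confidence decays with distance to the nearest memorized training point, mimicking the overfitting that such attacks exploit.

```python
# Illustrative sketch of a confidence-thresholding membership inference
# attack. Everything here is a toy assumption, not a real model or API.
import numpy as np

rng = np.random.default_rng(0)

# Toy model that memorizes its training set: confidence decays with the
# distance to the nearest training point, mimicking overfitting.
train_x = rng.normal(0.0, 1.0, size=(100, 5))

def model_confidence(x):
    d = np.min(np.linalg.norm(train_x - x, axis=1))
    return 1.0 / (1.0 + d)

def is_member(x, threshold=0.5):
    # The attacker guesses "member" when the model is unusually confident;
    # real attacks calibrate this threshold, e.g. with shadow models.
    return model_confidence(x) > threshold

print(is_member(train_x[0]))               # True: distance 0, confidence 1.0
print(is_member(rng.normal(0.0, 1.0, 5)))  # likely False for a fresh point
```

The point of the sketch is the mechanism, not the numbers: any gap between a model's behaviour on training versus unseen data is a signal an attacker can threshold on, which is why the defences the survey discusses aim to shrink that gap.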

Cited by 27 publications (11 citation statements)
References 171 publications
“…This is an active area of ML research not confined to TREs, but at present these risk measures focus on theoretical risk; that is, they describe the risk and how it may come about [ 20 ], but there is little evidence of how meaningful it is in operational research environments. Development is needed to identify (a) practical risk and (b) the necessary conditions for the risk to occur.…”
Section: Discussion
confidence: 99%
“…Since new attack methods with new attack goals have been actively proposed and studied, this paper does not go into detail and collectively refers to these types of attacks as information leakage attacks of training data. For details, see previous papers on surveys and taxonomies [136,145,146].…”
Section: Information Leakage Attacks Of Training Data
confidence: 99%
“…Survey literature. For technical details of attacks and defenses, see previous papers, e.g., [136,145,146,162–165]. Finally, this paper focuses on centralized (supervised) learning and does not deal with distributed learning. As for the information leakage attacks in federated learning, see, e.g., [166–168].…”
Section: A63: ML Component
confidence: 99%
“…Naveed et al., 2015; Jegorova et al., Forthcoming), which focuses on how certain knowledge about data could be inferred from various sources such as query answers over data, statistics or machine learning models built over data, etc.…”
Section: Implications On Future Privacy Research
confidence: 99%