2019
DOI: 10.48550/arxiv.1905.12774
Preprint

Quantifying the Privacy Risks of Learning High-Dimensional Graphical Models

Cited by 1 publication (1 citation statement) | References 0 publications
“…Depending on the goal of the attacker, we can classify three more attacks under the category of model inversion: membership inference, reconstruction attacks, and property inference attacks. Membership inference attacks (Truex et al., 2018; Hitaj et al., 2017; Murakonda et al., 2019) aim to determine whether a particular data instance was used for training. This poses a severe privacy issue when the instance maps directly to an identifiable individual, for instance in a medical records dataset.…”
Section: Related Work
confidence: 99%
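The membership inference attack described in the citation statement above can be illustrated with a toy loss-threshold attacker: the adversary predicts "member" whenever the target model's loss on an instance is low. This is a minimal sketch under simplified assumptions (the simulated loss distributions, threshold, and function names are illustrative and not taken from the cited papers):

```python
# Minimal sketch of a loss-threshold membership inference attack.
# Assumption: the attacker can query per-example loss from the target
# model. We simulate losses so that members (training instances) score
# lower than non-members; real models overlap far more than this.
import random

random.seed(0)

def model_loss(x, is_member):
    # Stand-in for the target model's loss on example x: members were
    # fit by the model, so their simulated loss is systematically lower.
    base = 0.2 if is_member else 1.0
    return base + random.random() * 0.5

def infer_membership(losses, threshold):
    # Attack rule: flag an instance as a training member iff its loss
    # falls below the chosen threshold.
    return [loss < threshold for loss in losses]

member_losses = [model_loss(x, True) for x in range(100)]
nonmember_losses = [model_loss(x, False) for x in range(100)]

preds_members = infer_membership(member_losses, threshold=0.75)
preds_nonmembers = infer_membership(nonmember_losses, threshold=0.75)

# Attack accuracy: members correctly flagged plus non-members cleared.
accuracy = (sum(preds_members) + sum(not p for p in preds_nonmembers)) / 200
print(f"attack accuracy: {accuracy:.2f}")
```

Because the two simulated loss distributions are separable here, the attack succeeds perfectly; on real models the member and non-member losses overlap, and attack accuracy above 50% is what signals a privacy leak.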