Findings of the Association for Computational Linguistics: ACL 2023 (2023)
DOI: 10.18653/v1/2023.findings-acl.719
Membership Inference Attacks against Language Models via Neighbourhood Comparison

Cited by 11 publications (1 citation statement) | References 0 publications
“…This includes the potential for models to memorise and replicate exact snippets of sensitive training data [60]. Furthermore, inference attacks, such as model inversion or membership inference attacks, can be used to extract detailed information about the training data or about individuals' data used in the training set, leading to privacy violations [61].…”
Section: Limitations
confidence: 99%
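
The citing statement above refers to membership inference attacks, which aim to decide whether a particular text was part of a model's training data. The paper indexed on this page proposes deciding this by neighbourhood comparison: the loss of a candidate text under the target model is compared with the losses of slightly perturbed "neighbour" texts, and a candidate whose loss is conspicuously lower than its neighbours' is flagged as a training member. Below is a minimal sketch of that idea, assuming the Hugging Face transformers library; the "gpt2" checkpoint, the threshold value, and the hand-written neighbours are illustrative assumptions standing in for the paper's setup (which generates neighbours with a masked language model), not the authors' released code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Target model under attack; "gpt2" is an illustrative stand-in.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def lm_loss(text: str) -> float:
    """Average per-token negative log-likelihood of `text` under the model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

def is_member(candidate: str, neighbours: list[str], threshold: float = 0.0) -> bool:
    """Neighbourhood-comparison membership test: flag `candidate` as a
    training member if its loss undercuts the mean loss of its neighbours
    by more than `threshold`. The paper generates `neighbours` with a
    masked language model; here they are supplied by the caller."""
    mean_neighbour_loss = sum(lm_loss(n) for n in neighbours) / len(neighbours)
    return mean_neighbour_loss - lm_loss(candidate) > threshold

# Example usage with hand-written neighbours (illustrative only).
candidate = "The quick brown fox jumps over the lazy dog."
neighbours = [
    "The quick brown fox leaps over the lazy dog.",
    "A quick brown fox jumps over the lazy dog.",
]
print(is_member(candidate, neighbours, threshold=0.1))
```

Comparing against neighbours rather than using a raw loss threshold calibrates the test to each sample: a text can have low loss simply because it is generic, but only a memorised text should have markedly lower loss than near-identical variants.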