Findings of the Association for Computational Linguistics: EMNLP 2023
DOI: 10.18653/v1/2023.findings-emnlp.244

TextMixer: Mixing Multiple Inputs for Privacy-Preserving Inference

Xin Zhou, Yi Lu, Ruotian Ma, et al.

Abstract: Pre-trained language models (PLMs) are often deployed as cloud services, enabling users to upload textual data and perform inference remotely. However, users' personal text often contains sensitive information, and sharing such data directly with the service providers can lead to serious privacy leakage. To address this problem, we introduce a novel privacy-preserving inference framework called TextMixer, which prevents plaintext leakage during the inference phase. Inspired by k-anonymity, TextMixer aims to ob…
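Because the abstract is truncated, the exact mixing mechanism is not specified here. The following is a minimal sketch of what a k-anonymity-inspired "input mixing" step could look like: one real input is combined with k-1 non-sensitive decoy inputs on the client side, so no single plaintext representation is uploaded verbatim. The function name mix_inputs, the random convex-combination weighting, and the client-side embedding step are all illustrative assumptions, not TextMixer's actual algorithm.

```python
# Illustrative sketch only: the mixing scheme below is an assumption based on
# the truncated abstract ("mixing multiple inputs", "inspired by k-anonymity"),
# not the method described in the paper.
import torch

def mix_inputs(real_emb: torch.Tensor, decoy_embs: list[torch.Tensor]) -> torch.Tensor:
    """Mix one real input's token embeddings with k-1 decoy inputs' embeddings.

    real_emb:   (seq_len, hidden) embeddings of the user's sensitive text.
    decoy_embs: list of (seq_len, hidden) decoy embeddings, assumed padded to
                the same seq_len as the real input.
    Returns a (seq_len, hidden) tensor that is a random convex combination of
    all k inputs, so the uploaded representation is not attributable to any
    single plaintext.
    """
    stacked = torch.stack([real_emb] + decoy_embs)   # (k, seq_len, hidden)
    k = stacked.shape[0]
    weights = torch.rand(k)
    weights = weights / weights.sum()                # normalize: convex weights
    return (weights.view(k, 1, 1) * stacked).sum(dim=0)

# Usage: embed k texts with the client-side embedding layer, mix them, then
# send only the mixed representation to the cloud-hosted PLM for inference.
```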

Cited by 1 publication · References 26 publications