2020
DOI: 10.1109/tifs.2020.2997134

SecureNLP: A System for Multi-Party Privacy-Preserving Natural Language Processing

Cited by 65 publications (13 citation statements)
References 39 publications
“…To address these problems, some researchers use encryption methods such as homomorphic encryption [2][3][4][5] and secure multi-party computation [6][7][8][9] to encrypt the model parameters sent by participants and resist model inference attacks. Others have proposed differential privacy [10,11], which adds perturbation noise to the model parameters so that attackers cannot easily infer their original values. To further improve privacy, Qu et al. [12] designed a hybrid identity mechanism based on digital signatures and an encryption protocol to prevent attackers from stealing data stored in the blockchain.…”
Section: Introduction
confidence: 99%
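The differential-privacy approach mentioned in the statement above — masking model parameters with calibrated noise — can be sketched minimally as follows. This is an illustrative example, not the cited papers' method; the function name, sensitivity value, and epsilon are hypothetical choices.

```python
import numpy as np

def add_laplace_noise(params, sensitivity, epsilon, seed=0):
    """Perturb parameters with Laplace noise scaled to sensitivity/epsilon.

    Larger epsilon = less noise = weaker privacy; this is the basic
    Laplace mechanism often used for epsilon-differential privacy.
    """
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return params + rng.laplace(loc=0.0, scale=scale, size=params.shape)

# An attacker observing `noisy` cannot recover `weights` exactly.
weights = np.array([0.5, -1.2, 3.3])
noisy = add_laplace_noise(weights, sensitivity=1.0, epsilon=0.5)
```

The key design point is that the noise scale depends only on the query sensitivity and the privacy budget epsilon, not on the data itself.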
“…In the broader context of NLP, secure multi-party computation (Feng et al., 2020) and homomorphic encryption (Al Badawi et al., 2020) have been used to provide strong privacy guarantees. Since these cryptographic methods incur high performance penalties (see (Riazi et al., 2019) for an overview of their performance in deep learning), more recent proposals have focused on the careful use of simpler cryptographic primitives while training a model over text that is encrypted for confidentiality reasons.…”
Section: Introduction
confidence: 99%
“…Therefore, when the exchange of documents among the members of this computation scenario is not an advisable option, multi-party computation (MPC) can be used successfully. MPC is a cryptographic primitive that computes aggregated functions over multiple sources of data, none of which can be revealed [23]. Formally, MPC assumes a set of inputs {x₁, x₂, .…”
Section: Solutions
confidence: 99%
“…Several privacy-preserving mechanisms have been developed to safeguard personal data from threats such as attacks, disclosures, and unintended usage. Some of these mechanisms, such as encryption [19,30], differential privacy (DP) [24], multi-party computation (MPC) [23], and federated learning (FL) [25,31], are deployed alongside models to enable applications to safeguard data privacy. There are also attack methods that aim to retrieve the data samples used to train models, mostly neural networks, which can be seen either as a threat or as a safety checker, depending on the attacker's intention.…”
Section: Introduction
confidence: 99%