2021 Third IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA)
DOI: 10.1109/tpsisa52974.2021.00010
Can pre-trained Transformers be used in detecting complex sensitive sentences? - A Monsanto case study

Cited by 4 publications (2 citation statements). References 13 publications.
“…Vendor Identification; A Text Similarity task: Text-similarity techniques are not new to the researchers in the field of AA (Sapkota et al., 2013; Castro Castro et al., 2015; Rexha et al., 2018; Boenninghoff et al., 2019). However, with the recent success of transformers (Reimers and Gurevych, 2019a; Yang et al., 2019b; Jiang et al., 2022), researchers are now investigating the application of semantically meaningful representations for paraphrasing detection (Timmer et al., 2021; Olney, 2021; Ko and Choi, 2020), text summarization (Miller, 2019; Cai et al., 2022), semantic parsing (Ge et al., 2019; Ferraro and Suominen, 2020), question answering (Yang et al., 2019a; Vold and Conrad, 2021; Louis and Spanakis, 2021), and AA (Fabien et al., 2020; Li et al., 2020; Custódio and Paraboni, 2021; Uchendu et al., 2020b).…”

Section: Related Research
Mentioning confidence: 99%
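As a rough illustration of the text-similarity approach this excerpt refers to, the sketch below scores two sentences with a pre-trained sentence transformer, in the spirit of Reimers and Gurevych (2019). The model checkpoint, the example sentences, and the use of the sentence-transformers library are illustrative assumptions, not details taken from the cited works.

    # Hypothetical sketch: scoring semantic similarity between two sentences
    # with a pre-trained sentence transformer. Model name and examples are
    # assumptions for illustration only.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    a = "The vendor ships orders within two days."
    b = "Orders from this seller arrive in roughly 48 hours."

    # Encode both sentences into dense vectors and compare them.
    emb_a, emb_b = model.encode([a, b], convert_to_tensor=True)
    score = util.cos_sim(emb_a, emb_b).item()
    print(f"cosine similarity: {score:.3f}")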
“…Recent use of fine-tuning on pre-trained Transformer language models has proven effective on a range of NLP tasks, and we have subsequently tested this idea on sensitive information detection. In [45], we experiment with the finetuned Bidirectional Encoder Representations from Transformers (BERT) [46]. This method is graphically represented in Figure 7.…”
Section: A. Creating Documents
Mentioning confidence: 99%
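For context on the method this excerpt describes, here is a minimal sketch of fine-tuning a pre-trained BERT model for binary sensitive-sentence classification with the Hugging Face transformers library. The checkpoint, toy data, labels, and hyperparameters are placeholder assumptions; this is not the authors' actual setup from [45].

    # Minimal sketch: fine-tuning BERT for binary sentence classification
    # (sensitive vs. non-sensitive). All data and settings are placeholders.
    import torch
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)  # 0 = non-sensitive, 1 = sensitive

    texts = ["Quarterly sales figures were published today.",        # 0
             "Internal memo: do not disclose the study results."]    # 1
    labels = [0, 1]

    enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

    class SentenceDataset(torch.utils.data.Dataset):
        """Wraps tokenized sentences and labels for the Trainer API."""
        def __init__(self, encodings, labels):
            self.encodings, self.labels = encodings, labels
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, i):
            item = {k: v[i] for k, v in self.encodings.items()}
            item["labels"] = torch.tensor(self.labels[i])
            return item

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=SentenceDataset(enc, labels),
    )
    trainer.train()  # fine-tunes the classification head and encoder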