2019
DOI: 10.48550/arxiv.1901.06796
Preprint
Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey

Cited by 22 publications (20 citation statements)
References 96 publications
“…Since the TextCNN model has good performances and is quite fast, it is one of the most widely used methods for text classification task in industrial applications [34]. As we aim to attack models used in practice, we take the TextCNN model [13] as our targeted model.…”
Section: Targeted Model
confidence: 99%
“…Adversarial attacks (Yuan et al, 2019) in all application areas including computer vision (Akhtar and Mian, 2018;Khrulkov and Oseledets, 2018), natural language processing (Zhang et al, 2019b;Morris et al, 2020), and graphs (Sun et al, 2018) seek to reveal non-robustness of deep learning models. An adversarial attack on a text classification model perturbs the input sentence in such a way that the deep learning model is fooled, while the perturbations adhere to certain constraints, utilising morphology or grammar patterns or semantic similarity.…”
Section: Introduction
confidence: 99%
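The citation statement above describes the general recipe for a text adversarial attack: perturb the input so the model's prediction flips, while each perturbation respects a constraint such as semantic similarity. A minimal sketch of that recipe is below; the keyword-based classifier and the synonym table are hypothetical stand-ins (real attacks query a trained model and draw candidates from embeddings or WordNet), but the greedy word-substitution loop mirrors the constrained-perturbation idea.

```python
# Sketch of a word-substitution adversarial attack on a toy text classifier.
# The lexicon-based classifier and SYNONYMS table are illustrative only.

# Toy sentiment classifier: positive if the summed word scores exceed zero.
POSITIVE_WORDS = {"good": 2.0, "great": 3.0, "fast": 1.0}
NEGATIVE_WORDS = {"bad": -2.0, "slow": -1.0}

def classify(tokens):
    score = sum(POSITIVE_WORDS.get(t, 0.0) + NEGATIVE_WORDS.get(t, 0.0)
                for t in tokens)
    return "positive" if score > 0 else "negative"

# Hypothetical meaning-preserving candidates: this table plays the role of
# the semantic-similarity constraint (only near-synonym swaps are allowed).
SYNONYMS = {"great": ["fine"], "good": ["decent"], "fast": ["quick"]}

def attack(tokens):
    """Greedily try one synonym swap at a time until the label flips."""
    original = classify(tokens)
    for i, tok in enumerate(tokens):
        for cand in SYNONYMS.get(tok, []):
            trial = list(tokens)
            trial[i] = cand  # single constrained perturbation
            if classify(trial) != original:
                return trial, classify(trial)
    return list(tokens), original  # no single-word swap fooled the model
```

For example, `attack(["the", "service", "was", "great"])` replaces "great" with "fine", which the toy lexicon does not recognize, flipping the prediction from "positive" to "negative" while the sentence remains fluent to a human reader.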
“…In recent years, the research and application of deep neural networks have become a prevalent domain within the academic field and a wide range of deep neural networks applications in solving real-world tasks have achieved good results, such as image classification [12] and natural language processing [29] as well as other fields. However, due to frequent and covert hacking activities, security and privacy of neural networks has gained attentions [5].…”
Section: Introduction
confidence: 99%