Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning 2020
DOI: 10.18653/v1/2020.deelio-1.3
Generalization to Mitigate Synonym Substitution Attacks

Abstract: Studies have shown that deep neural networks (DNNs) are vulnerable to adversarial examples: perturbed inputs that cause DNN-based models to produce incorrect results. One robust adversarial attack in the NLP domain is synonym substitution, in which the adversary replaces words with their synonyms. Since synonym substitution perturbations aim to satisfy lexical, grammatical, and semantic constraints, they are difficult to detect with automatic syntax checks as well as by humans. In this work, we…
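The attack described in the abstract can be illustrated with a toy sketch. The synonym table and the greedy first-synonym policy below are illustrative assumptions only; real attacks typically select substitutions by querying the victim model and checking semantic constraints:

```python
# Toy synonym table -- an illustrative assumption, not from the paper.
SYNONYMS = {"good": ["great", "fine"], "movie": ["film"]}

def synonym_substitute(tokens, synonyms):
    """Replace each token that has an entry in the synonym table
    with its first listed synonym; leave other tokens unchanged."""
    return [synonyms[t][0] if t in synonyms else t for t in tokens]

perturbed = synonym_substitute("the movie was good".split(), SYNONYMS)
print(" ".join(perturbed))  # the film was great
```

The perturbed sentence preserves the original meaning and grammar, which is why such inputs evade both automatic syntax checks and human inspection.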

Cited by 11 publications (6 citation statements)
References 32 publications
“…That is, in the long term, the data points from the MC will look similar to the data points from [60], [59]. This attack technique has been used in multiple studies addressing machine learning robustness to adversarial attacks, including [59], [99]-[101]. The authors perform Metropolis-Hastings sampling, which is designed with the guidance of gradients.…”
Section: Metropolis-Hastings Sampling for Adversarial Attacks (mentioning)
confidence: 99%
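The excerpt refers to gradient-guided Metropolis-Hastings sampling. As background, the core accept/reject step of a plain random-walk Metropolis-Hastings sampler can be sketched as follows; the target density, step size, and step count here are illustrative choices, not details from the cited work:

```python
import math
import random

def metropolis_hastings(log_p, x0, n_steps, step_size, rng):
    """Random-walk Metropolis-Hastings over a 1-D target.
    Proposes x' = x + N(0, step_size); since the proposal is symmetric,
    the acceptance probability reduces to min(1, p(x') / p(x))."""
    x, lp = x0, log_p(x0)
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step_size)
        lp_prop = log_p(proposal)
        # accept with probability p(x')/p(x), computed in log space
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = proposal, lp_prop
        samples.append(x)
    return samples

# Target: standard normal (up to a constant), log p(x) = -x^2 / 2.
rng = random.Random(0)
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 5000, 1.0, rng)
post_burn = samples[1000:]
mean = sum(post_burn) / len(post_burn)
```

In the long run the chain's samples are distributed according to the target density, which is the property the excerpt appeals to. Gradient-guided variants replace the symmetric random walk with proposals biased along the gradient of the objective.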
“…This attack technique has been used in multiple studies addressing machine learning robustness to adversarial attacks, including [59], [99]-[101]. The way such a technique is used in those studies is almost identical.…”
Section: Breaching Security by Improving Attacks (mentioning)
confidence: 99%
“…The study of synonym substitution can be traced back to the 1970s (Waltz, 1978; Lehmann and Stachowitz, 1972). With the rise of machine learning, synonym substitution has been widely used in NLP for data augmentation and adversarial attacks (Rizos et al., 2019; Wei and Zou, 2019; Ebrahimi et al., 2018; Alshemali and Kalita, 2020; Ren et al., 2019). Many adversarial attacks based on synonym substitution have successfully compromised the performance of existing models (Alzantot et al., 2018; Zhang et al., 2019a; Ren et al., 2019).…”
Section: Synonym Substitution for Other NLP Problems (mentioning)
confidence: 99%