Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, 2017
DOI: 10.18653/v1/e17-2004
Robust Training under Linguistic Adversity

Abstract: Deep neural networks have achieved remarkable results across many language processing tasks; however, they have been shown to be susceptible to overfitting and highly sensitive to noise, including adversarial attacks. In this work, we propose a linguistically-motivated approach for training robust models based on exposing the model to corrupted text examples at training time. We consider several flavours of linguistically plausible corruption, including lexical, semantic and syntactic methods. Empirically, we eval…
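The corruption strategies named in the abstract (lexical/semantic substitution and syntactic perturbation applied at training time) can be illustrated with a short Python sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the corruption rate, the use of adjacent-token swaps as a stand-in for syntactic reordering, and the reliance on NLTK's WordNet interface are all assumptions made here for clarity.

# A minimal sketch of training-time text corruption in the spirit of the paper.
# Names and parameters are illustrative assumptions, not the authors' code.
import random
from nltk.corpus import wordnet  # requires: nltk.download("wordnet")

def substitute_synonyms(tokens, rate=0.1, rng=random):
    """Replace a fraction of tokens with a WordNet synonym (lexical/semantic noise)."""
    corrupted = []
    for tok in tokens:
        lemmas = {l.name().replace("_", " ")
                  for syn in wordnet.synsets(tok) for l in syn.lemmas()}
        lemmas.discard(tok)
        if lemmas and rng.random() < rate:
            corrupted.append(rng.choice(sorted(lemmas)))
        else:
            corrupted.append(tok)
    return corrupted

def swap_adjacent(tokens, rate=0.1, rng=random):
    """Swap adjacent tokens with small probability (a crude proxy for syntactic noise)."""
    toks = list(tokens)
    i = 0
    while i < len(toks) - 1:
        if rng.random() < rate:
            toks[i], toks[i + 1] = toks[i + 1], toks[i]
            i += 2  # skip past the swapped pair so it is not swapped back
        else:
            i += 1
    return toks

# Usage: corrupt a copy of each training sentence before feeding it to the model.
sentence = "the film was surprisingly good".split()
print(substitute_synonyms(sentence))
print(swap_adjacent(sentence))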

Cited by 52 publications (38 citation statements); references 27 publications.
“…The notion of linguistically plausible corruption is also explored by Li et al. (2017), who created adversarial examples with syntactic and semantic noise (reordering and word substitutions, respectively). When training with these noisy datasets, they obtained better performance on several text classification tasks.…”
Section: Related Work (mentioning)
confidence: 99%
“…These adversarial attacks highlight often unintuitive model failure modes and present a challenge to deploying NLP models. Common attempts to mitigate the issue are adversarial training (Ebrahimi et al., 2018) and data augmentation (Belinkov and Bisk, 2018; Li et al., 2017), which lead to improved accuracy on adversarial examples. However, this might cause a false sense of security, as there is generally no guarantee that stronger adversaries could not circumvent defenses to find other successful attacks (Carlini and Wagner, 2017; Athalye et al., 2018; …).…”
Section: Introduction (mentioning)
confidence: 99%
“…The first approach draws on our earlier work on learning robust text representations (Li et al., 2017), and is based on synonyms of the given token, drawn from Princeton WordNet (Miller et al., 1990) using the NLTK API (Bird, 2006). Here, we test possible synonyms, considering their part-of-speech tag, asking the system whether the loss is reduced after substitution.…”
Section: Token Substitution Methods (mentioning)
confidence: 99%
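The excerpt above describes part-of-speech-constrained synonym substitution that is accepted only when it lowers the loss. The following Python sketch shows one way that idea could look, assuming NLTK's WordNet interface, coarse POS tags (NOUN, VERB, ADJ, ADV), and a caller-supplied model_loss callable; model_loss, POS_MAP and the greedy acceptance loop are assumptions for illustration, not the cited system's API.

# A hedged sketch of POS-constrained WordNet substitution with loss-based acceptance.
from nltk.corpus import wordnet  # requires: nltk.download("wordnet")

# Map coarse POS tags to WordNet POS constants (assumed tag set).
POS_MAP = {"NOUN": wordnet.NOUN, "VERB": wordnet.VERB,
           "ADJ": wordnet.ADJ, "ADV": wordnet.ADV}

def pos_constrained_synonyms(token, pos):
    """Return WordNet synonyms of `token` restricted to the given coarse POS."""
    wn_pos = POS_MAP.get(pos)
    if wn_pos is None:
        return []
    lemmas = {l.name().replace("_", " ")
              for syn in wordnet.synsets(token, pos=wn_pos)
              for l in syn.lemmas()}
    lemmas.discard(token)
    return sorted(lemmas)

def substitute_if_loss_drops(tokens, pos_tags, model_loss):
    """Greedily keep each candidate substitution only if `model_loss` decreases."""
    best = list(tokens)
    best_loss = model_loss(best)
    for i, (tok, pos) in enumerate(zip(tokens, pos_tags)):
        for cand in pos_constrained_synonyms(tok, pos):
            trial = best[:i] + [cand] + best[i + 1:]
            trial_loss = model_loss(trial)  # model_loss is a stand-in, not a real API
            if trial_loss < best_loss:
                best, best_loss = trial, trial_loss
    return best

The greedy loop mirrors the quoted description only loosely: it tests one position at a time against the current best sequence, which keeps the sketch simple but is one of several plausible search orders.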