Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018
DOI: 10.18653/v1/n18-1170

Adversarial Example Generation with Syntactically Controlled Paraphrase Networks

Abstract: We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to tra…
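The data-creation recipe sketched in the abstract (backtranslate at scale, then parse the paraphrases to label the syntactic transformation) can be illustrated in a few lines. The sketch below is only a minimal illustration of that idea, not the authors' pipeline: `backtranslate` is a hypothetical stand-in for an NMT round trip, and the constituency parse is obtained through NLTK's CoreNLP wrapper, which assumes a Stanford CoreNLP server is running locally.

```python
# Illustrative sketch of the SCPN training-data idea: pair each sentence
# with a backtranslated paraphrase, and label the pair with the
# paraphrase's constituency parse so the parse can later serve as the
# target-syntax input. `backtranslate` is a hypothetical placeholder;
# CoreNLPParser assumes a running Stanford CoreNLP server.
from nltk.parse.corenlp import CoreNLPParser


def make_training_triple(sentence, backtranslate, parser):
    """Return (source sentence, target parse, paraphrase).

    backtranslate: any callable performing an NMT round trip,
    e.g. English -> Czech -> English (hypothetical placeholder).
    """
    paraphrase = backtranslate(sentence)
    tree = next(parser.raw_parse(paraphrase))        # constituency parse
    target_syntax = tree.pformat(margin=1_000_000)   # linearized parse string
    return sentence, target_syntax, paraphrase


# Example wiring (assumes a CoreNLP server on port 9000 and a user-supplied
# translation round trip):
# parser = CoreNLPParser(url="http://localhost:9000")
# triple = make_training_triple("The man is walking his dog.",
#                               my_backtranslate, parser)
```

At training time the model conditions on the source sentence and the labeled parse and learns to reproduce the paraphrase; at test time the same interface lets the user request a paraphrase in an arbitrary syntactic form.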

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

1
473
0

Year Published

2019
2019
2022
2022

Publication Types

Select...
4
3
1

Relationship

1
7

Authors

Journals

Cited by 518 publications (474 citation statements)
References 41 publications (33 reference statements)
“…The works (Samanta & Mehta, 2017; Iyyer et al., 2018) start to craft adversarial sentences that are grammatically correct and maintain the syntax structure of the original sentence. The work in (Samanta & Mehta, 2017) achieves this by using synonyms to replace original words, or adding some words which have different meanings in different contexts.…”
Section: Attacking Words and Letters
Citation type: mentioning (confidence: 99%)
“…The work in (Samanta & Mehta, 2017) achieves this by using synonyms to replace original words, or adding some words which have different meanings in different contexts. On the other hand, the work (Iyyer et al., 2018) manages to fool the text classifier by paraphrasing the structure of sentences.…”
Section: Attacking Words and Letters
Citation type: mentioning (confidence: 99%)
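To make the contrast drawn in these citations concrete, the word-level approach attributed to Samanta & Mehta (2017) can be roughly approximated with simple WordNet synonym substitution. The sketch below is an approximation for illustration only, not the authors' actual method: `model_predict` is a hypothetical black-box classifier interface (text in, label out), and the WordNet data must be downloaded beforehand via `nltk.download("wordnet")`.

```python
# Rough sketch of a synonym-substitution attack: replace one word at a
# time with a WordNet synonym and return the first candidate sentence
# that changes the classifier's prediction. `model_predict` is a
# hypothetical black-box interface and not part of any specific library.
from nltk.corpus import wordnet


def synonym_candidates(tokens):
    """Yield token lists with exactly one word swapped for a WordNet synonym."""
    for i, word in enumerate(tokens):
        for synset in wordnet.synsets(word):
            for lemma in synset.lemma_names():
                replacement = lemma.replace("_", " ")
                if replacement.lower() != word.lower():
                    yield tokens[:i] + [replacement] + tokens[i + 1:]


def word_level_attack(tokens, model_predict):
    """Return an adversarial sentence that flips the prediction, or None."""
    original_label = model_predict(" ".join(tokens))
    for candidate in synonym_candidates(tokens):
        sentence = " ".join(candidate)
        if model_predict(sentence) != original_label:
            return sentence
    return None
```

The SCPN-style attack discussed in the quoted passage differs in that it rewrites the entire sentence to match a new syntactic form rather than editing individual words, which is the distinction the citing authors draw.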
“…The first two baselines directly output the corresponding syntactic or semantic input for each instance. For the last baseline, we consider SCPN (Iyyer et al., 2018). As SCPN requires parse trees for both the syntactic and semantic inputs, we follow the process in their paper and use the Stanford shift-reduce constituency parser to parse both, then use the parsed sentences as inputs to SCPN.…”
Section: Baselines
Citation type: mentioning (confidence: 99%)
“…These systems can be used in various application areas, such as text summarization (Fan et al., 2018), adversarial example generation (Iyyer et al., 2018), dialogue (Niu and Bansal, 2018), and data-to-document generation (Wiseman et al., 2018). However, prior work on controlled generation has typically assumed a known, finite set of values that the controlled attribute can take on.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…Stylistic control is important as a way to address a well-known limitation of vanilla neural NLG models, namely that they reduce the stylistic variation seen in the input, and thus produce outputs that tend to be dull and repetitive (Li et al., 2016). The majority of other work on stylistic control has been done in a text-to-text setting where MRs and corpora with fixed meaning and varying style are not available (Fan et al., 2017; Iyyer et al., 2018; Wiseman et al., 2018; Ficler and Goldberg, 2017). Sometimes variation is evaluated in terms of model performance in some other task, such as machine translation or summarization.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)