2020
DOI: 10.48550/arxiv.2004.14257
Preprint

Politeness Transfer: A Tag and Generate Approach

Cited by 16 publications (31 citation statements)
References: 0 publications
“…For the first set of experiments, we rely on four models: Support Vector Machines (SVMs), Bidirectional Long Short-Term Memory networks (BiLSTMs) with Self-Attention (Graves & Schmidhuber, 2005), BERT (Devlin et al., 2019), and Longformer (Beltagy et al., 2020). For the second set of experiments, we rely on four state-of-the-art style transfer models, each representative of a different approach to automatically generating new examples with flipped labels (Hu et al., 2017; Li et al., 2018; Sudhakar et al., 2019; Madaan et al., 2020). To evaluate classifier performance on the resulting augmented data, we make use of SVM, Naive Bayes (NB), BiLSTM w/ SA, and BERT.…”
Section: Results (mentioning)
confidence: 99%
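
The augmentation-and-evaluation pipeline quoted above lends itself to a compact illustration. The Python sketch below is a minimal reconstruction under stated assumptions, not the citing paper's actual code: transfer_style is a hypothetical placeholder for any of the cited style transfer models (Hu et al., 2017; Li et al., 2018; Sudhakar et al., 2019; Madaan et al., 2020), binary style labels are assumed, and a TF-IDF plus linear SVM classifier stands in for one of the four evaluated models.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.svm import LinearSVC


def transfer_style(text, target_label):
    """Hypothetical stand-in for a pretrained style transfer model.

    A real implementation would rewrite `text` so that it exhibits the
    style named by `target_label`; here it is a pass-through placeholder.
    """
    return text


def augment_with_flipped_labels(texts, labels):
    """Append a style-transferred copy of each example with its binary
    label flipped, mirroring the augmentation strategy quoted above."""
    aug_texts, aug_labels = list(texts), list(labels)
    for text, label in zip(texts, labels):
        flipped = 1 - label  # binary labels assumed (0/1)
        aug_texts.append(transfer_style(text, flipped))
        aug_labels.append(flipped)
    return aug_texts, aug_labels


def evaluate_on_augmented(train_texts, train_labels, test_texts, test_labels):
    """Train an SVM (one of the four classifiers named in the quote) on the
    augmented data and report performance on the untouched test set."""
    aug_texts, aug_labels = augment_with_flipped_labels(train_texts, train_labels)
    vectorizer = TfidfVectorizer()
    clf = LinearSVC()
    clf.fit(vectorizer.fit_transform(aug_texts), aug_labels)
    preds = clf.predict(vectorizer.transform(test_texts))
    print(classification_report(test_labels, preds))

Swapping LinearSVC for a BiLSTM or BERT classifier, or the pass-through placeholder for a real style transfer model, would recover the other configurations the citing work compares.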
“…For Hu et al. (2017), Sudhakar et al. (2019), and Madaan et al. (2020), we found the default hyperparameters used by the authors to work best on our task. In the case of Li et al. (2018), we followed the training schedule presented in the paper.…”
mentioning
confidence: 85%
“…What is a style in natural languages? Existing style transfer benchmarks primarily focus on individual high-level stylistic changes across sentiment (Shen et al., 2017), formality (Rao and Tetreault, 2018), politeness (Madaan et al., 2020), and writing styles (Jhamtani et al., 2017). Figure 1 provides some motivating examples to show that the high-level style transfers as commonly studied in existing benchmarks (e.g.…”
Section: Fine-grained Compositional Style Transfer (mentioning)
confidence: 99%
“…Current benchmarks for style transfer focus on high-level style definitions such as transfer of sentiment (Shen et al., 2017; Lample et al., 2019), politeness (Madaan et al., 2020), formality (Rao and Tetreault, 2018; Krishna et al., 2020), writing styles (Jhamtani et al., 2017; Syed et al., 2020; Jin et al., 2020), and some other styles (Kang and Hovy, 2019). However, these focus only on high-level styles, unlike STYLEPTB.…”
Section: Related Work (mentioning)
confidence: 99%