Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018)
DOI: 10.18653/v1/d18-1355

Integrating Transformer and Paraphrase Rules for Sentence Simplification

Abstract: Sentence simplification aims to reduce the complexity of a sentence while retaining its original meaning. Current models for sentence simplification adopted ideas from machine translation studies and implicitly learned simplification mapping rules from normal-simple sentence pairs. In this paper, we explore a novel model based on a multi-layer and multi-head attention architecture and we propose two innovative approaches to integrate the Simple PPDB (A Paraphrase Database for Simplification), an external paraphrase knowledge base for simplification.
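
The abstract's backbone is the Transformer's multi-head attention (Vaswani et al., 2017). As context for readers of this report, the sketch below shows standard scaled dot-product multi-head self-attention in NumPy; it illustrates the mechanism only and is not the authors' implementation (the parameter names X, Wq, Wk, Wv, Wo, and n_heads are generic, not taken from the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Scaled dot-product multi-head self-attention (Vaswani et al., 2017).

    X: (seq_len, d_model); Wq, Wk, Wv, Wo: (d_model, d_model).
    Generic names for illustration; not tied to the paper's code.
    """
    seq_len, d_model = X.shape
    d_head = d_model // n_heads

    def project(W):  # project, then split into heads: (n_heads, seq_len, d_head)
        return (X @ W).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

    Q, K, V = project(Wq), project(Wk), project(Wv)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)  # (n_heads, L, L)
    weights = softmax(scores, axis=-1)                   # attention distributions
    heads = weights @ V                                  # (n_heads, L, d_head)
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo                                   # final output projection
```

The paper's two proposed approaches integrate Simple PPDB paraphrase rules on top of this multi-layer, multi-head attention backbone.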

Cited by 91 publications (85 citation statements). References 19 publications.
“…For SMT-based models, Zhu et al (2010) adopt a tree-based SMT model for sentence simplification; Woodsend and Lapata (2011) propose a quasi-synchronous grammar and use integer linear programming to score the simplification rules; Wubben et al (2012) employ a phrase-based MT model to obtain candidates and re-rank them based on their dissimilarity to the complex sentence; Narayan and Gardent (2014) develop a hybrid model that performs sentence splitting and deletion first and then re-ranks the outputs, similar to Wubben et al (2012); Xu et al (2016) propose SBMT-SARI, a syntax-based machine translation framework that uses an external knowledge base to encourage simplification. On the other side, many NMT-based models have also been proposed for sentence simplification: Nisioi et al (2017) employ vanilla recurrent neural networks (RNNs) on text simplification; Zhang and Lapata (2017) propose to use reinforcement learning methods on RNNs to optimize a specifically designed reward based on simplicity, fluency and relevancy; Vu et al (2018) incorporate memory-augmented neural networks for sentence simplification; Zhao et al (2018) integrate the transformer architecture and PPDB rules to guide the simplification learning; Sulem et al (2018b) combine neural MT models with sentence splitting modules for sentence simplification.…”
Section: Related Work
confidence: 99%
“…Many previous studies (Specia, 2010; Wubben et al, 2012; Xu et al, 2016; Nisioi et al, 2017; Zhang and Lapata, 2017; Vu et al, 2018; Guo et al, 2018; Zhao et al, 2018) in text simplification have trained machine translators on a monolingual parallel corpus consisting of complex-simple sentence pairs without considering the level of each sentence. Therefore, these text simplification models are agnostic to the level of each sentence.…”
confidence: 99%
“…This work has been shown to achieve state-of-the-art results in automatic evaluation when trained on the WikiLarge dataset introduced by Zhang and Lapata (2017). Zhao et al (2018), however, do not perform a human evaluation, and restricting evaluation to automatic metrics is generally insufficient for comparing simplification models. Our model, in comparison, is able to generate shorter and simpler sentences according to Flesch-Kincaid grade level (Kincaid et al, 1975) and human judgments, and we provide a comprehensive analysis using human evaluation and a qualitative error analysis.…”
Section: Related Work
confidence: 99%
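
For reference, the Flesch-Kincaid grade level cited above is a fixed linear formula over sentence, word, and syllable counts (Kincaid et al., 1975). A minimal sketch, assuming the three counts are computed upstream (syllable counting in particular is usually heuristic or dictionary-based):

```python
def flesch_kincaid_grade(n_words: int, n_sentences: int, n_syllables: int) -> float:
    """Flesch-Kincaid grade level (Kincaid et al., 1975).

    Lower scores indicate simpler text; simplification systems are often
    compared on this value. All counts must be non-zero.
    """
    return 0.39 * (n_words / n_sentences) + 11.8 * (n_syllables / n_words) - 15.59
```

For example, a single sentence of 10 words and 13 syllables scores about 3.65, roughly a fourth-grade reading level.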
“…Following previous work (Zhang and Lapata, 2017; Zhao et al, 2018), we use SARI as our main automatic metric for evaluation (Xu et al, 2016). Specifically, SARI calculates how often a generated sentence correctly keeps, inserts, and deletes n-grams from the complex sentence, using the reference simple sentences as the gold standard, where 1 ≤ n ≤ 4.…”
Section: Automatic Evaluation
confidence: 99%
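
To make the keep/insert/delete description above concrete, here is a simplified single-reference SARI sketch in Python. It follows the structure described by Xu et al. (2016): F1 for kept and added n-grams, precision for deleted n-grams, averaged over n = 1..4. The official implementation additionally handles multiple references and fractional n-gram counts, so numbers from this sketch will not match it exactly:

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def f1(p, r):
    return 2 * p * r / (p + r) if p + r else 0.0

def prec_rec(sys, ref):
    """Precision/recall of the system's operation multiset against the reference's."""
    hits = sum((sys & ref).values())
    p = hits / sum(sys.values()) if sys else 0.0
    r = hits / sum(ref.values()) if ref else 0.0
    return p, r

def sari(source, output, reference, max_n=4):
    """Simplified single-reference SARI (after Xu et al., 2016)."""
    per_n = []
    for n in range(1, max_n + 1):
        S, O, R = (ngrams(s.split(), n) for s in (source, output, reference))
        keep = f1(*prec_rec(S & O, S & R))   # n-grams kept from the source
        add = f1(*prec_rec(O - S, R - S))    # n-grams newly inserted
        delete = prec_rec(S - O, S - R)[0]   # deletion is scored by precision only
        per_n.append((keep + add + delete) / 3.0)
    return 100.0 * sum(per_n) / len(per_n)
```

For example, sari("the cat perched upon the mat", "the cat sat on the mat", "the cat sat on the mat") rewards replacing "perched upon" with "sat on" through both the add and delete components, while the keep component credits the retained words.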