Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018
DOI: 10.18653/v1/p18-1097
Fluency Boost Learning and Inference for Neural Grammatical Error Correction

Abstract: Most of the neural sequence-to-sequence (seq2seq) models for grammatical error correction (GEC) have two limitations: (1) a seq2seq model may not be well generalized with only limited error-corrected data; (2) a seq2seq model may fail to completely correct a sentence with multiple errors through normal seq2seq inference. We attempt to address these limitations by proposing a fluency boost learning and inference mechanism. Fluency boosting learning generates fluency-boost sentence pairs during training, enabling…
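The multi-round (fluency boosting) inference idea can be pictured with a minimal sketch: re-run the corrector on its own output until a fluency score stops improving. The `correct` and `fluency` callables below are assumed placeholders (e.g. a trained seq2seq model and a language-model-based scorer), not the authors' implementation.

```python
# Hedged sketch of fluency boosting (multi-round) inference.
# `correct` and `fluency` are assumed stand-ins: a trained seq2seq GEC model
# and a fluency scorer (higher = more fluent), not the authors' actual code.
from typing import Callable

def fluency_boost_inference(
    sentence: str,
    correct: Callable[[str], str],
    fluency: Callable[[str], float],
    max_rounds: int = 5,
) -> str:
    """Re-correct a sentence round by round until its fluency stops increasing."""
    current, current_score = sentence, fluency(sentence)
    for _ in range(max_rounds):
        candidate = correct(current)          # one ordinary seq2seq inference pass
        candidate_score = fluency(candidate)
        if candidate_score <= current_score:  # no further fluency gain: stop
            break
        current, current_score = candidate, candidate_score
    return current
```

With a loop of this shape, a sentence containing several errors can be fixed incrementally even if a single decoding pass only resolves some of them.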

Cited by 107 publications (94 citation statements)
References 40 publications
“…Many recent advances in neural GEC aim at overcoming the mentioned data sparsity problem. Ge et al (2018a) proposed fluency-boost learning that generates additional training examples during training from an independent backward model or the forward model being trained. Xie et al (2018) supplied their model with noisy examples synthesized from clean sentences.…”
Section: Related Work
confidence: 99%
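As a rough sketch of that data-augmentation idea (extra training pairs produced from a backward model), one possible shape is shown below; `backward_nbest` and `fluency` are hypothetical helpers for illustration, not the method as published.

```python
# Hedged sketch of fluency-boost pair generation: a backward (correct -> errorful)
# model proposes n-best candidates, and any candidate that is less fluent than the
# reference sentence is paired with it as additional (source, target) training data.
# `backward_nbest` and `fluency` are assumed placeholders, not the authors' code.
from typing import Callable, List, Tuple

def fluency_boost_pairs(
    correct_sentence: str,
    backward_nbest: Callable[[str, int], List[str]],
    fluency: Callable[[str], float],
    n: int = 10,
) -> List[Tuple[str, str]]:
    ref_score = fluency(correct_sentence)
    pairs = []
    for candidate in backward_nbest(correct_sentence, n):
        if fluency(candidate) < ref_score:  # keep only genuinely less fluent outputs
            pairs.append((candidate, correct_sentence))
    return pairs
```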
“…Other recent work focuses on improving model inference. Ge et al (2018a) proposed correcting a sentence more than once through multi-round model inference. Lichtarge et al (2018) introduced iterative decoding to incrementally correct a sentence with a high-precision system.…”
Section: Related Work
confidence: 99%
“…• Word2vec. Recent works on learnable evaluation metrics use simple word embeddings such as word2vec and GloVe as input to their models (Tao et al, 2018; Lowe et al, 2017; Kannan and Vinyals, 2017). Since these static embeddings have a fixed context-independent representation for each word, they cannot represent the rich semantics of words in contexts.…”
Section: Word Embeddings
confidence: 99%
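The "context-independent" limitation is easy to see in a toy sketch (the embedding table below is a made-up dictionary standing in for word2vec/GloVe): a static embedding returns the same vector for a word regardless of the words around it.

```python
# Toy illustration, not word2vec/GloVe itself: a static embedding table maps each
# word type to one fixed vector, so "bank" gets the same representation in
# "river bank" and "bank account".
static_embeddings = {"bank": [0.1, -0.4, 0.7]}  # hypothetical 3-d vectors

def embed(tokens):
    unk = [0.0, 0.0, 0.0]
    return [static_embeddings.get(t, unk) for t in tokens]

print(embed(["river", "bank"])[1] == embed(["bank", "account"])[0])  # True
```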
“…The Referenced metric and Unreferenced metric Blended Evaluation Routine (RUBER) (Tao et al, 2018) stands out from recent work in automatic dialogue evaluation, relying minimally on human-annotated datasets of response quality for training. RUBER evaluates responses with a blending of scores from two metrics:
• an Unreferenced metric, which computes the relevancy of a response to a given query, inspired by Grice (1975)'s theory that the quality of a response is determined by its relatedness and appropriateness, among other properties.…”
Section: Introduction
confidence: 99%
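The blending step can be sketched as follows; the two scorer callables and the simple averaging heuristic are illustrative assumptions rather than RUBER's exact formulation.

```python
# Hedged sketch of RUBER-style blending: combine a referenced score (similarity of
# the response to a ground-truth reply) with an unreferenced score (learned
# query-response relevancy). The scorers and the mean heuristic are assumptions.
from typing import Callable

def ruber_score(
    query: str,
    response: str,
    reference: str,
    referenced: Callable[[str, str], float],    # e.g. embedding similarity in [0, 1]
    unreferenced: Callable[[str, str], float],  # e.g. a trained relevancy model in [0, 1]
) -> float:
    r = referenced(response, reference)
    u = unreferenced(query, response)
    return 0.5 * (r + u)  # simple mean; min/max pooling are other common choices
```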
“…(Ng et al, 2013, 2014). In the past few years, both GEC-tuned statistical machine translation (SMT) and neural machine translation (NMT) using sequence-to-sequence (seq2seq) learning have been demonstrated to be more effective in grammatical error correction than other approaches (Chollampatt and Ng, 2017, 2018; Ge et al, 2018; Zhao et al, 2019).…”
Section: Introduction
confidence: 99%