2018
DOI: 10.48550/arxiv.1807.00488
Preprint

A Simple but Effective Classification Model for Grammatical Error Correction

Abstract: We treat grammatical error correction (GEC) as a classification problem in this study: for different types of errors, a target word is identified, and the classifier predicts the correct word form from a set of possible choices. We propose a novel neural-network-based feature representation and classification model, trained using large text corpora without human annotations. Specifically, we use RNNs with attention to represent both the left and right context of a target word. All feature embeddings are l…
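The classification setup the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's architecture: the paper encodes left and right context with attention-augmented RNNs, whereas this sketch pools raw context embeddings with simple dot-product attention and applies a linear classifier over the candidate word forms. All names, shapes, and the random embeddings are hypothetical.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(context, query):
    # dot-product attention: weight each context vector by its similarity
    # to the target-word embedding, then take the weighted average
    weights = softmax(context @ query)
    return weights @ context

rng = np.random.default_rng(0)
d, n_classes = 8, 3  # embedding dim; e.g. three candidate word forms

# hypothetical embeddings for words to the left/right of the target word
left_ctx = rng.normal(size=(4, d))
right_ctx = rng.normal(size=(5, d))
target = rng.normal(size=d)

# attention-pooled left and right context, concatenated as the feature vector
features = np.concatenate([attend(left_ctx, target), attend(right_ctx, target)])

# linear classifier over the set of possible corrections
W = rng.normal(size=(n_classes, 2 * d))
probs = softmax(W @ features)
print(probs.shape)  # (3,)
```

In the paper the left and right context representations come from RNN encoders rather than raw embeddings, but the overall shape of the model (pool each side, concatenate, classify) is the same.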

Cited by 3 publications (1 citation statement)
References 10 publications (7 reference statements)
“…All recent neural approaches are sequential ED models that predict either word sequences (Zhao et al., 2019; Lichtarge et al., 2019) or character sequences (Xie et al., 2016) using multi-layer RNNs (Ji et al., 2017), CNNs (Chollampatt and Ng, 2018a; Ge et al., 2018a), or Transformers (Lichtarge et al., 2019). Our sequence labeling formulation is similar to Yannakoudakis et al. (2017) and Kaili et al. (2018), but the former uses it only to detect errors, and the latter corrects only five error types using separate classifiers. Edits have been exploited in earlier GEC systems too, but very unlike our method of re-architecting the core model to label the input sequence with edits.…”
Section: Spell Correction
confidence: 99%