2017
DOI: 10.48550/arxiv.1707.02026
Preprint

A Nested Attention Neural Hybrid Model for Grammatical Error Correction

Cited by 15 publications (9 citation statements)
References 8 publications
“…In Table 4, "SMT (with LM)" refers to (Junczys-Dowmunt and Grundkiewicz, 2014); "SMT Rule-Based Hybrid" refers to (Felice et al, 2014); "SMT Classification Hybrid" refers to (Rozovskaya and Roth, 2016); "Neural Hybrid MT" refers to (Ji et al, 2017); "CNN + EO" refers to (Chollampatt and Ng, 2018), where "EO" means reranking with edit-operation features; "Transformer + MIMs" refers to , where "MIMs" means model-independent methods; "NMT SMT Hybrid" refers to ; "CNN + FB Learning" refers to (Ge et al, 2018).…”
Section: Results
confidence: 99%
“…(Yannakoudakis et al, 2017) developed a neural sequence-labeling model for error detection that estimates the probability of each token in a sentence being correct or incorrect, and then uses the error-detection model's output as a feature to re-rank the N-best hypotheses. (Ji et al, 2017) proposed a hybrid neural model incorporating both word- and character-level information. (Chollampatt and Ng, 2018) used a multilayer convolutional encoder-decoder neural network that outperformed all prior neural and statistics-based systems on this task.…”
Section: Related Work
confidence: 99%
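The re-ranking idea described above (using error-detection probabilities as a feature over N-best hypotheses) can be sketched as follows. This is a minimal illustrative sketch, not the cited authors' implementation; the function name, the linear score combination, and the weight value are all assumptions:

```python
import math

def rerank(hypotheses, weight=0.5):
    """Pick the best hypothesis from an N-best list.

    hypotheses: list of (text, decoder_score, per_token_correct_probs),
    where per_token_correct_probs come from an error-detection model.
    The combined score adds the decoder score and a weighted sum of
    log P(token is correct) over the hypothesis tokens.
    """
    def combined(hyp):
        _, score, probs = hyp
        detection_feature = sum(math.log(p) for p in probs)
        return score + weight * detection_feature
    return max(hypotheses, key=combined)[0]

# Toy N-best list with made-up scores and probabilities
nbest = [
    ("I have not enough privileges.", -1.2, [0.9, 0.8, 0.7, 0.95, 0.9]),
    ("I do not have enough privileges.", -1.5, [0.95, 0.9, 0.9, 0.95, 0.97, 0.96]),
]
best = rerank(nbest)
```

The key design choice is treating the detection model's output as one feature in a weighted combination, so the re-ranker can trade off decoder confidence against detected errors; in practice the weight would be tuned on held-out data.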
“…Our neural classification model outperforms the CUUI system and the deep context model, and performs comparably to (Ji et al, 2017), previously the best fully neural MT method. Our hybrid method, which combines our neural classification model with the public SMT system (that is, replacing the classifier method in (Rozovskaya and Roth, 2016)), achieves better performance, with an F0.5 score of 50.16.…”
Section: Overall Results
confidence: 76%
“…The final sentence-level representation c is then fed into a logistic regression layer to predict the category. Another type of hierarchical attention takes a top-down approach; an example is grammatical error correction (Ji et al, 2017). Consider a corrupted sentence: I have no enough previleges.…”
Section: (T) I
confidence: 99%
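The pipeline described in the last statement (attention pooling over token states into a sentence vector c, then a logistic regression layer) can be sketched as follows. All dimensions, weights, and names here are illustrative assumptions for a toy example, not values from the cited work:

```python
import math

# Toy hidden states, one 2-dim vector per token of a short sentence
H = [
    [0.2, 0.4],
    [0.1, -0.3],
    [0.5, 0.7],
    [-0.2, 0.6],
    [0.9, 0.1],
]
u = [0.3, 0.8]                   # attention query vector
W = [[0.5, -0.4], [-0.1, 0.9]]   # logistic-regression weights, 2 classes
b = [0.0, 0.1]                   # logistic-regression biases

def dot(a, v):
    return sum(x * y for x, y in zip(a, v))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Attention pooling: score each token against u, normalize the scores,
# then take the attention-weighted sum to get the sentence vector c
alpha = softmax([dot(h, u) for h in H])
c = [sum(a * h[j] for a, h in zip(alpha, H)) for j in range(len(u))]

# Logistic regression layer on the pooled representation predicts the class
probs = softmax([dot(w, c) + bi for w, bi in zip(W, b)])
pred = probs.index(max(probs))
```

The point of the pooling step is that the attention weights alpha let the classifier focus on the most informative tokens before the single linear decision layer is applied.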