Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications 2018
DOI: 10.18653/v1/w18-3729

CYUT-III Team Chinese Grammatical Error Diagnosis System Report in NLPTEA-2018 CGED Shared Task

Abstract: This paper reports how we built a Chinese Grammatical Error Diagnosis system for the NLPTEA-2018 CGED shared task. In 2018, we submitted three runs with three different approaches. The first is a pattern-based approach using frequent error pattern matching. The second is a sequential labelling approach using conditional random fields (CRF). The third is a rewriting approach using a sequence-to-sequence (seq2seq) model. The three approaches have different properties that aim to optimize different performance metrics …
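As a rough illustration of the second approach, the sketch below treats error diagnosis as character-level sequence labelling with a CRF. It assumes the sklearn-crfsuite package; the feature template, the toy sentence, and the BIO-style tags over the CGED error types (R: redundant, M: missing, S: selection, W: word order) are simplified placeholders, not the authors' actual feature set or data.

# pip install sklearn-crfsuite   (assumed dependency, not named in the paper)
import sklearn_crfsuite

def char_features(sent, i):
    # Minimal per-character features; a real CGED system would also add
    # POS tags, character bigrams, and dictionary features.
    return {
        "char": sent[i],
        "prev": sent[i - 1] if i > 0 else "<BOS>",
        "next": sent[i + 1] if i < len(sent) - 1 else "<EOS>",
    }

# One toy training sentence with a redundant character, tagged in BIO style.
sents = ["我非常喜欢欢这本书"]
labels = [["O", "O", "O", "O", "O", "B-R", "O", "O", "O"]]

X = [[char_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))   # e.g. [['O', 'O', 'O', 'O', 'O', 'B-R', 'O', 'O', 'O']]

In the shared task setting, the predicted tags would then be converted into (start, end, error-type) spans for scoring.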

Cited by 5 publications (2 citation statements) | References 10 publications
“…Earlier work in CSC focuses mainly on unsupervised methods such as a language model with a pre-constructed confusion set (Yu and Li, 2014). Subsequently, some work cast CSC as a sequential labeling problem, in which conditional random fields (CRF) (Lafferty et al., 2001) and gated recurrent networks (Hochreiter and Schmidhuber, 1997; Chung et al., 2014) have been employed to model the problem (Zheng et al., 2016; Xie et al., 2017; Wu et al., 2018). More recently, motivated by a series of remarkable successes achieved by neural network-based sequence-to-sequence learning (Seq2Seq) in various natural language processing (NLP) tasks (Sutskever et al., 2014), generative models have also been applied to the spelling check task by treating it as an encoder-decoder problem (Xie et al., 2016; Ge et al., 2018).…”
Section: Related Work
confidence: 99%
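The unsupervised confusion-set-plus-language-model idea mentioned above can be sketched roughly as follows; the confusion set, toy corpus, and bigram scoring are invented for illustration and are far simpler than the cited systems.

from collections import Counter

# Hypothetical confusion set: characters that are commonly mixed up.
CONFUSION = {"在": ["再"], "再": ["在"]}

# Tiny character-bigram "language model" from a toy corpus; real systems
# use large corpora or neural language models.
corpus = ["我再看一遍", "他在家里", "我们再见"]
bigrams = Counter(b for s in corpus for b in zip(s, s[1:]))

def score(sent):
    # Higher score = more of the sentence's bigrams were seen in the corpus.
    return sum(bigrams.get(b, 0) for b in zip(sent, sent[1:]))

def correct(sent):
    # Greedily replace each character with a confusable if it raises the score.
    chars = list(sent)
    for i, c in enumerate(chars):
        for cand in CONFUSION.get(c, []):
            trial = chars[:i] + [cand] + chars[i + 1:]
            if score("".join(trial)) > score("".join(chars)):
                chars[i] = cand
    return "".join(chars)

print(correct("我在看一遍"))   # -> "我再看一遍" under this toy model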
“…With the development of end-to-end networks, some work proposed to optimize error correction performance directly as a sequence-labeling task with conditional random fields (CRF) (Wu et al., 2018) and recurrent neural networks (RNN) (Zheng et al., 2016; Yang et al., 2017). Wang et al. (2019) used a sequence-to-sequence framework with a copy mechanism to copy correction results directly from a prepared confusion set for the erroneous words.…”
Section: Introduction
confidence: 99%
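To make the encoder-decoder view concrete, here is a minimal character-level seq2seq sketch in PyTorch; the GRU architecture, dimensions, and random tensors are placeholders, and the attention and copy mechanisms used in the cited work are omitted.

import torch
import torch.nn as nn

class Seq2SeqCorrector(nn.Module):
    # Encoder reads the possibly erroneous sentence; decoder rewrites it.
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        _, h = self.encoder(self.embed(src_ids))            # final hidden state
        dec_out, _ = self.decoder(self.embed(tgt_ids), h)   # teacher forcing
        return self.out(dec_out)                            # vocabulary logits

# Hypothetical usage with made-up sizes.
model = Seq2SeqCorrector(vocab_size=5000)
src = torch.randint(0, 5000, (2, 20))   # batch of 2 sequences, length 20
tgt = torch.randint(0, 5000, (2, 20))
logits = model(src, tgt)                # shape: (2, 20, 5000)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5000), tgt.reshape(-1))

At inference time the decoder would instead generate characters one at a time, e.g. greedily or with beam search, starting from a start-of-sentence symbol.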