Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), 2017
DOI: 10.18653/v1/k17-1017
Attention-based Recurrent Convolutional Neural Network for Automatic Essay Scoring

Abstract: Neural network models have recently been applied to the task of automatic essay scoring, giving promising results. Existing work used recurrent neural networks (RNNs) and convolutional neural networks (CNNs) to model input essays, assigning grades based on a single vector representation of the essay. However, the relative advantages of RNNs and CNNs have not been compared. In addition, different parts of an essay can contribute differently to scoring, which is not captured by existing models. We address these issue…

Cited by 179 publications (220 citation statements) | References 25 publications
“…There are also more recent approaches for learning AES models that do not assume a set of predefined features. These approaches are based on deep architectures, and include (Alikaniotis et al., 2016; Taghipour and Ng, 2016; Riordan et al., 2017; Dong et al., 2017). Finally, there are also models based on domain adaptation (Phandi et al., 2015) and unsupervised learning (Chen et al., 2010).…”
Section: Related Work (mentioning)
confidence: 99%
“…The combination of CNN and RNN has been adopted in several NLP tasks such as text summarization (Cheng and Lapata, 2016), essay scoring (Dong et al., 2017), sentiment analysis, or even reading comprehension (Dhingra et al., 2017). Unlike previous works that feed a sequence of sentences encoded by CNN to RNN, a sequence of utterances is encoded by CNN in our model, where each utterance is spoken by a distinct speaker and contains one or more sentences that are coherent in topics.…”
Section: Approach (mentioning)
confidence: 99%
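The CNN-then-RNN composition the excerpt describes — encode each sentence (or utterance) with a convolution, then run a recurrent network over the resulting sequence of vectors — can be sketched as below. All dimensions, the toy data, and the plain Elman recurrence are illustrative assumptions, not the cited papers' exact configurations.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_encode(sent_embs, W, b):
    """1-D convolution over one sentence's word embeddings, then max-pooling.

    sent_embs: (num_words, d_emb); W: (win * d_emb, d_conv); b: (d_conv,).
    Returns a single sentence vector of shape (d_conv,).
    """
    win = W.shape[0] // sent_embs.shape[1]
    feats = []
    for i in range(sent_embs.shape[0] - win + 1):
        window = sent_embs[i:i + win].reshape(-1)   # concatenated n-gram window
        feats.append(np.tanh(window @ W + b))
    return np.max(feats, axis=0)                    # max-over-time pooling

def rnn_encode(sent_vecs, U, V, b):
    """Plain Elman RNN over the sequence of sentence vectors."""
    h = np.zeros(U.shape[1])
    for x in sent_vecs:
        h = np.tanh(x @ U + h @ V + b)
    return h                                        # final document vector

# Toy sizes (assumed): 3 sentences, 5 words each, d_emb=8, d_conv=6, window=3
d_emb, d_conv, win = 8, 6, 3
essay = [rng.normal(size=(5, d_emb)) for _ in range(3)]
Wc, bc = rng.normal(size=(win * d_emb, d_conv)), np.zeros(d_conv)
U = rng.normal(size=(d_conv, d_conv))
V = rng.normal(size=(d_conv, d_conv))
br = np.zeros(d_conv)

sent_vecs = [conv_encode(s, Wc, bc) for s in essay]  # CNN per sentence
doc_vec = rnn_encode(sent_vecs, U, V, br)            # RNN over sentence vectors
print(doc_vec.shape)  # (6,)
```

The key design point in this family of models is that the CNN captures local n-gram structure inside each unit while the RNN models ordering across units.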
“…The outputs of this layer are two matrices, L_E ∈ R^(S_e × W_e × d_L) for the essay and L_A ∈ R^(S_a × W_a × d_L) for the article, where S_e, S_a, W_e, W_a, and d_L are the numbers of sentences in the essay and the article, the sentence lengths of the essay and the article, and the embedding size, respectively. As in Dong et al. (2017), dropout is applied after the word embedding layer.…”
Section: Word Embedding Layer (mentioning)
confidence: 99%
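The tensor shapes in the excerpt above can be checked with a small sketch; the vocabulary size, the concrete dimensions, and the inverted-dropout helper are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: S_e sentences, W_e words per sentence, d_L embedding dim
S_e, W_e, d_L = 4, 10, 16
vocab_size = 100

E = rng.normal(size=(vocab_size, d_L))              # embedding table
essay_ids = rng.integers(0, vocab_size, size=(S_e, W_e))

L_E = E[essay_ids]                                  # lookup -> (S_e, W_e, d_L)

def dropout(x, p, rng):
    """Inverted dropout, applied after the embedding lookup at train time."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

L_E_dropped = dropout(L_E, p=0.5, rng=rng)
print(L_E.shape)  # (4, 10, 16)
```

The article side would be built the same way with its own S_a and W_a; padding to fixed sentence lengths is what makes the batched tensor shape possible.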
“…After the convolutional layer, a pooling layer is required to obtain the sentence representations. In this layer, we follow the same design presented by Dong et al. (2017). The attention pooling is defined by the equations below:…”
Section: Word Level Attention Pooling Layer (mentioning)
confidence: 99%
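The excerpt elides the equations themselves. As a hedged sketch of attention pooling in the style of Dong et al. (2017) — score each convolutional feature vector through a small tanh projection, softmax the scores into weights, and return the weighted sum — with all dimensions and parameter names assumed here for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def attention_pool(C, W_a, b_a, w_u):
    """Attention pooling over convolutional feature vectors.

    C: (n, d) feature vectors (one per position in the sentence).
    m_i = tanh(W_a c_i + b_a); a = softmax(w_u . m_i); s = sum_i a_i c_i.
    Returns the pooled vector s and the attention weights a.
    """
    M = np.tanh(C @ W_a.T + b_a)       # (n, d_att) scored representations
    u = M @ w_u                        # (n,) unnormalized attention scores
    a = np.exp(u - u.max())
    a = a / a.sum()                    # softmax weights, sum to 1
    return a @ C, a                    # weighted sum of the feature vectors

n, d, d_att = 7, 6, 4                  # toy sizes (assumed)
C = rng.normal(size=(n, d))
W_a = rng.normal(size=(d_att, d))
b_a = np.zeros(d_att)
w_u = rng.normal(size=d_att)

s, a = attention_pool(C, W_a, b_a, w_u)
print(s.shape)  # (6,)
```

Compared with plain max- or mean-pooling, the learned weights let positions that matter more for scoring contribute more to the sentence representation, which is the motivation the paper's abstract gives for attention.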