Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2016
DOI: 10.18653/v1/p16-1068

Automatic Text Scoring Using Neural Networks

Abstract: Automated Text Scoring (ATS) provides a cost-effective and consistent alternative to human marking. However, in order to achieve good performance, the predictive features of the system need to be manually engineered by human experts. We introduce a model that forms word representations by learning the extent to which specific words contribute to the text's score. Using Long Short-Term Memory networks to represent the meaning of texts, we demonstrate that a fully automated framework is able to achieve excellent…


Cited by 202 publications (202 citation statements) | References 23 publications
“…For example, Taghipour and Ng (2016) explore simple LSTM and CNN-based architectures with regression and evaluate on the ASAP-AES data. Alikaniotis et al (2016) train score-specific word embeddings with several LSTM architectures. Dong and Zhang (2016) demonstrate that a hierarchical CNN architecture produces strong results on the ASAP-AES data.…”
Section: Introduction (mentioning)
confidence: 99%
“…The main difference is that both earlier works treat the essay script as a sequence of words rather than a sequence of sentences. Alikaniotis et al (2016) use score-specific word embeddings as word features and take the last hidden state of the LSTM as the text representation. Taghipour and Ng (2016) take the average value over all the hidden states of the LSTM as the text representation.…”
Section: Text Representation (mentioning)
confidence: 99%
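As an illustration of the two pooling choices contrasted in the statement above, here is a minimal sketch assuming a standard PyTorch nn.LSTM encoder; the dimensions and dummy data are placeholders, not the cited authors' settings.

```python
# Illustrative sketch (not the authors' code): last-hidden-state vs.
# mean-over-time text representations from an LSTM encoder.
import torch
import torch.nn as nn

vocab_size, embedding_dim, hidden_dim = 10_000, 100, 128
embed = nn.Embedding(vocab_size, embedding_dim)
lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)

tokens = torch.randint(0, vocab_size, (1, 50))    # one essay of 50 word ids
outputs, (h_n, c_n) = lstm(embed(tokens))         # outputs: (1, 50, hidden_dim)

# Alikaniotis et al. (2016): use the last hidden state as the text vector.
text_vec_last = h_n[-1]                           # (1, hidden_dim)

# Taghipour and Ng (2016): average all hidden states (mean-over-time pooling).
text_vec_mean = outputs.mean(dim=1)               # (1, hidden_dim)
```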
“…Recently, Alikaniotis et al (2016) employ a long short-term memory model to learn features for the essay scoring task automatically, without any predefined feature templates. It leverages score-specific word embeddings (SSWEs) for word representations, and takes the last hidden states of a two-layer bidirectional LSTM for essay representations.…”
Section: Prompt (mentioning)
confidence: 99%
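The architecture described in this statement can be sketched roughly as follows. This is a hedged illustration rather than the released model: the plain nn.Embedding layer merely stands in for the score-specific word embeddings (SSWEs), which are trained separately, and all sizes are assumptions.

```python
# Illustrative sketch: embeddings -> two-layer bidirectional LSTM ->
# regression head on the final forward/backward hidden states.
import torch
import torch.nn as nn

class EssayScorer(nn.Module):
    def __init__(self, vocab_size=10_000, embedding_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embedding_dim)  # placeholder for SSWEs
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hidden_dim, 1)              # scalar essay score

    def forward(self, token_ids):
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        # h_n: (num_layers * 2, batch, hidden_dim); take the top layer's
        # final forward and backward states as the essay representation.
        essay_vec = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        return self.score(essay_vec).squeeze(-1)

model = EssayScorer()
pred = model(torch.randint(0, 10_000, (4, 300)))  # four essays of 300 tokens each
```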
“…The features used in previous work range from shallow textual features to discourse structure and semantic coherence (Higgins et al., 2004; Yannakoudakis and Briscoe, 2012; Somasundaran et al., 2014), and from prompt-independent to prompt-dependent features (Cummins et al., 2016a). Some recent models have dispensed with feature engineering and utilised word embeddings and neural networks (Alikaniotis et al., 2016; Dong and Zhang, 2016; Taghipour and Ng, 2016).…”
Section: Related Work (mentioning)
confidence: 99%