Proceedings of the Thirteenth Workshop on Innovative Use of NLP For Building Educational Applications 2018
DOI: 10.18653/v1/w18-0549

Co-Attention Based Neural Network for Source-Dependent Essay Scoring

Abstract: This paper presents an investigation of using a co-attention based neural network for source-dependent essay scoring. We use a co-attention mechanism to help the model learn the importance of each part of the essay more accurately. This paper also shows that the co-attention based neural network model provides reliable score prediction of source-dependent responses. We evaluate our model on two source-dependent response corpora. Results show that our model outperforms the baseline on both corpora. We also show …
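To make the abstract's idea concrete, here is a minimal sketch of a co-attention scoring model in PyTorch. The bidirectional LSTM encoders, affinity-matrix attention, pooling, and dimensions below are illustrative assumptions, not the authors' published architecture (which, per the citing papers quoted below, combines LSTM and CNN layers).

```python
# Hypothetical sketch of co-attention between an essay and its source article
# for score prediction. Layer choices and dimensions are assumptions, not the
# exact model described in the paper.
import torch
import torch.nn as nn


class CoAttentionScorer(nn.Module):
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.essay_rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.source_rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(4 * hidden, 1)  # essay states + attended source states

    def forward(self, essay_emb, source_emb):
        # essay_emb: (batch, essay_len, emb_dim); source_emb: (batch, src_len, emb_dim)
        E, _ = self.essay_rnn(essay_emb)    # (batch, essay_len, 2*hidden)
        S, _ = self.source_rnn(source_emb)  # (batch, src_len, 2*hidden)

        # Affinity between every essay position and every source position.
        affinity = torch.bmm(E, S.transpose(1, 2))  # (batch, essay_len, src_len)

        # Essay-to-source attention: a source summary for each essay position.
        alpha = torch.softmax(affinity, dim=-1)
        source_ctx = torch.bmm(alpha, S)             # (batch, essay_len, 2*hidden)

        # Source-to-essay attention: weight essay positions by how strongly any
        # source content attends to them, then pool into one vector.
        beta = torch.softmax(affinity.max(dim=-1).values, dim=-1)   # (batch, essay_len)
        fused = torch.cat([E, source_ctx], dim=-1)                  # (batch, essay_len, 4*hidden)
        pooled = torch.bmm(beta.unsqueeze(1), fused).squeeze(1)     # (batch, 4*hidden)

        return torch.sigmoid(self.score(pooled)).squeeze(-1)  # score normalized to [0, 1]
```

A mean-squared-error loss against min-max normalized gold scores would train such a sketch end to end; the normalization choice is likewise an assumption.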

Cited by 33 publications (32 citation statements). References 19 publications.

Citation statements:
“…(Bryant & Briscoe, 2018) Essay NLP Evaluation This paper re-examines language models (LMs) in grammatical error correction (GEC) and shows that it is entirely possible to build a simple system. (Zhang & Litman, 2018) Essay Others Evaluation This paper presents an investigation of using a co-attention based neural network for source-dependent essay scoring. (Horbach, Stennmanns, & Zesch, 2018) Essay…”
Section: Others Support (mentioning)
confidence: 99%
“…Concerns about specific content extend to other cases where the scoring system needs to pay attention to details of genre and task: not all essays are five-paragraph persuasive essays; the specific task might require assessing whether the student has appropriately used specific source materials (Beigman Klebanov et al., 2014; Rahimi et al., 2017; Zhang and Litman, 2018) or assessing narrative (Somasundaran et al., 2018) or reflective (Beigman Klebanov et al., 2016a; Luo and Litman, 2016), rather than persuasive, writing.…”
Section: Content (mentioning)
confidence: 99%
“…Our work differs from prior efforts primarily in the particular architecture that we use. Most prior work uses LSTMs (Farag et al., 2018; Wang et al., 2018; Cummins and Rei, 2018) or a combination of LSTMs and CNNs (Taghipour and Ng, 2016; Zhang and Litman, 2018), cast as linear or logistic regression problems. In contrast, we use a hierarchically structured model with ordinal regression.…”
Section: Related Work (mentioning)
confidence: 99%
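The contrast drawn in this statement, regression-style scoring heads versus ordinal regression, can be illustrated with a small sketch. The head below is a hypothetical cumulative-link ordinal output layer; it is not taken from either cited paper, and the names and threshold parameterization are assumptions.

```python
# Hypothetical ordinal-regression output head (cumulative-link style), in
# contrast to a single linear-regression output. Details are assumptions.
import torch
import torch.nn as nn


class OrdinalHead(nn.Module):
    def __init__(self, feature_dim, num_grades):
        super().__init__()
        self.project = nn.Linear(feature_dim, 1)
        # One learned threshold per boundary between adjacent score levels.
        self.thresholds = nn.Parameter(torch.arange(num_grades - 1, dtype=torch.float))

    def forward(self, features):
        z = self.project(features)                 # (batch, 1) latent essay quality
        # P(score > k) for each boundary k; larger z shifts mass to higher grades.
        return torch.sigmoid(z - self.thresholds)  # (batch, num_grades - 1)
```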
“…These problems (and the success of deep learning in other areas of language processing) have led to the development of neural methods for automatic essay scoring, moving away from feature engineering. A variety of studies (mostly LSTM-based) have reported AES performance comparable to or better than feature-based models (Taghipour and Ng, 2016; Cummins and Rei, 2018; Wang et al., 2018; Jin et al., 2018; Farag et al., 2018; Zhang and Litman, 2018). However, the current state-of-the-art models still use a combination of neural models and hand-crafted features (Liu et al., 2019).…”
Section: Introduction (mentioning)
confidence: 99%