Improving Short Answer Grading Using Transformer-Based Pre-training

2019
DOI: 10.1007/978-3-030-23204-7_39

Cited by 87 publications (60 citation statements)
References 19 publications
“…We leave a multi-task formulation of our application setting for future work. Sung et al. (2019) demonstrated state-of-the-art performance for similarity-based content scoring on the SemEval benchmark dataset. In this work, we use pre-trained transformer models for instance-based content scoring (cf.…”
Section: Related Work (mentioning)
confidence: 97%
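
The excerpt above contrasts similarity-based and instance-based content scoring. As a rough illustration of the similarity-based approach the citing authors attribute to Sung et al. (2019), the sketch below scores a student answer by cosine similarity between pooled transformer embeddings; the model name, pooling strategy, and example texts are illustrative assumptions, not the cited papers' exact setup.

```python
# Minimal sketch of similarity-based content scoring with a pre-trained
# transformer. Model choice and mean pooling are assumptions for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last hidden states into a single sentence vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

# Hypothetical reference/student answer pair.
reference = "A force causes an object to change its velocity."
student = "Forces make objects speed up, slow down, or change direction."

score = torch.cosine_similarity(embed(reference), embed(student), dim=0)
print(f"similarity-based content score: {score.item():.3f}")
```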
“…More recently, advanced NLP techniques, such as neural network-based distributed language representation learning approaches (e.g., word2vec) and transfer learning approaches (e.g., BERT), have been applied to short answer grading [34,44,45]. In massive open online courses (MOOCs), NLP techniques along with classification algorithms (e.g., logistic regression, random forest) have been applied to data from discussion forums for a wide range of tasks such as predicting students' learning outcomes, sentiment analysis [27], confusion detection [14], and cognitive presence [3,12].…”
Section: Natural Language Processing in Learning Analytics (mentioning)
confidence: 99%
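
For the MOOC forum tasks this excerpt mentions, a common baseline pairs bag-of-words features with one of the listed classifiers. Below is a minimal, hypothetical confusion-detection pipeline using TF-IDF and logistic regression; the tiny inline dataset and labels are invented purely for illustration.

```python
# Hypothetical sketch: classifying MOOC forum posts (confusion detection)
# with TF-IDF features and logistic regression. Data is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I don't understand how backpropagation updates the weights.",
    "Great lecture, the examples made everything clear!",
    "Why does my gradient explode? I'm completely lost.",
    "Thanks, the summary slides were very helpful.",
]
labels = [1, 0, 1, 0]  # 1 = confused, 0 = not confused (invented labels)

# Fit a simple text-classification pipeline and score a new post.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(posts, labels)
print(clf.predict(["I am so confused by this week's assignment."]))
```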
“…Specifically, bidirectional encoder representations from transformers (BERT), a pre-trained multilayer bidirectional transformer network (Vaswani et al., 2017) released by the Google AI Language team, has achieved state-of-the-art results in various NLP tasks, such as question answering, named entity recognition, natural language inference, and text classification (Devlin et al., 2019). BERT was also applied to AES (Rodriguez et al., 2019) and automated short-answer grading (Sung et al., 2019) in 2019, and demonstrated good performance.…”
Section: Transformer-Based Model (mentioning)
confidence: 99%
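
The excerpt notes that BERT was applied to automated short-answer grading. One plausible framing, in the spirit of Sung et al. (2019), treats grading as sentence-pair classification over a (reference answer, student answer) pair; the label set, example data, and training details below are assumptions for illustration, not the paper's reported configuration.

```python
# Hypothetical sketch: short-answer grading as BERT sentence-pair
# classification. Labels and data are illustrative assumptions.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3  # e.g., correct / partial / incorrect
)

# Encode (reference answer, student answer) as a single sentence pair,
# so BERT attends across both texts jointly.
batch = tokenizer(
    ["A force changes an object's motion."],
    ["Forces make things move differently."],
    return_tensors="pt", truncation=True, padding=True,
)
labels = torch.tensor([0])  # 0 = correct (invented label scheme)

outputs = model(**batch, labels=labels)
outputs.loss.backward()  # an optimizer step would follow during fine-tuning
print(outputs.logits.softmax(dim=-1).detach())  # predicted grade distribution
```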