2017
DOI: 10.20319/pijss.2017.32.437450

Evaluating Semantic Analysis Methods for Short Answer Grading Using Linear Regression

Cited by 4 publications (2 citation statements)
References 5 publications

“…Broadly speaking, most existing ATS systems can be grouped into two categories (Bonthu, Rama Sree, and Krishna Prasad 2021; Uto, Xie, and Ueno 2020). One is built upon traditional machine learning techniques, e.g., Linear Regression (Nau, Haendchen Filho, and Passero 2017), Support Vector Machines (Gleize and Grau 2013), and Random Forests (Ishioka and Kameda 2017), whose performance depends heavily on the availability and quality of hand-crafted features such as the number of words in an answer (Platanios et al. 2019) and the number of distinct words in the answer (Li et al. 2016). The other is empowered by recent deep learning techniques such as Bi-LSTM (Kim, Vizitei, and Ganapathi 2018) and BERT (Sung, Dhamecha, and Mukhi 2019), which directly transform the raw text input into embedding-based representations and generate an assessment score without the need for manual feature engineering.…”
Section: Related Work (Automatic Text Scoring in Education)
Citation type: mentioning, confidence: 99%
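
To make the feature-based approach in the quoted passage concrete, here is a minimal Python sketch: the two hand-crafted features named above, total word count and distinct word count, feed a scikit-learn LinearRegression scorer. The answers, scores, and feature choice are illustrative assumptions, not the pipeline of any cited paper.

# Minimal sketch (not any cited paper's pipeline): hand-crafted features
# named in the quoted passage, fed to a linear regression model.
from sklearn.linear_model import LinearRegression

def extract_features(answer):
    """Return [total words, distinct words] for one short answer."""
    tokens = answer.lower().split()
    return [len(tokens), len(set(tokens))]

# Hypothetical student answers with human-assigned scores (0-5 scale assumed).
answers = [
    "photosynthesis converts light energy into chemical energy",
    "plants make food",
    "light energy becomes chemical energy stored in glucose",
]
scores = [5.0, 2.0, 4.5]

X = [extract_features(a) for a in answers]
model = LinearRegression().fit(X, scores)

# Predict a grade for an unseen answer.
print(model.predict([extract_features("plants convert light into glucose")]))

Real systems in this family use many more features; the point here is only the shape of the pipeline: feature extraction followed by a regression fit.
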
“…As surveyed in (Bonthu et al., 2021), the approaches used to tackle ASAS often fall into two categories. One is based on traditional machine learning techniques such as SVM (Gleize and Grau, 2013; Mohler et al., 2011; Higgins et al., 2014), K-means (Sorour et al., 2015), Linear Regression (Nau et al., 2017; Heilman and Madnani, 2015; Higgins et al., 2014), and Random Forests (Higgins et al., 2014; Ramachandran et al., 2015; Ishioka and Kameda, 2017), all of which rely heavily on manually crafted input features. For example, Sultan et al. (2016) devised a set of features based on lexical similarity (i.e., similarities between words identified by a paraphrase database (Ganitkevitch et al., 2013)) and monolingual alignment (Sultan et al., 2014), and fed the designed features to a ridge regression model to obtain the score of an answer.…”
Section: Automated Short Answer Scoring
Citation type: mentioning, confidence: 99%
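
As a companion sketch of the Sultan et al. (2016) pattern described above, the example below computes a single lexical-similarity feature between a student answer and a reference answer and feeds it to a ridge regression model. Plain Jaccard token overlap stands in for the paraphrase-database similarity and monolingual alignment used in that paper; all data and names are illustrative assumptions.

# Minimal sketch of the similarity-feature + ridge regression pattern.
# Jaccard token overlap is a stand-in for the paraphrase-based similarity
# and alignment features of Sultan et al. (2016); data are made up.
from sklearn.linear_model import Ridge

def overlap_similarity(answer, reference):
    """Jaccard overlap in [0, 1] between answer and reference tokens."""
    a = set(answer.lower().split())
    r = set(reference.lower().split())
    return len(a & r) / max(len(a | r), 1)

reference = "mitosis produces two genetically identical daughter cells"

# Hypothetical graded answers for training (0-5 scale assumed).
answers = [
    "mitosis makes two identical daughter cells",
    "cells divide",
    "two genetically identical cells result from mitosis",
]
scores = [5.0, 1.5, 4.5]

X = [[overlap_similarity(a, reference)] for a in answers]
model = Ridge(alpha=1.0).fit(X, scores)

# Score an unseen answer by its similarity to the reference.
print(model.predict([[overlap_similarity("mitosis yields identical cells", reference)]]))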