2021
DOI: 10.1007/s10956-020-09895-9

Using Machine Learning to Score Multi-Dimensional Assessments of Chemistry and Physics

Abstract: In response to the call for promoting three-dimensional science learning (NRC, 2012), researchers argue for developing assessment items that go beyond rote memorization tasks to ones that require deeper understanding and the use of reasoning that can improve science literacy. Such assessment items are usually performance-based constructed responses and need technology involvement to ease the burden of scoring placed on teachers. This study responds to this call by examining the use and accuracy of a machine learning…

Cited by 43 publications (38 citation statements) | References 13 publications

“…Many studies focusing on the use of ML techniques for students' assessment relied on the validity of the work while undermining the technical and pedagogic features in evaluating student work in science subjects, using approaches such as text recognition, classification, and scoring [17]. Disagreements related to vocabulary between human and ML scoring are possible when assessing students' work [18]. To address this, NLP techniques can be used in the assessment process.…”
Section: Online Assessment Trends (mentioning)
confidence: 99%
“…We know from recent review papers that science education research that uses ML has largely relied on supervised ML approaches (Zhai et al., 2020a, b, 2021), largely in efforts to replicate and scale human codes. Several scholars have advanced the use of supervised ML approaches for science education assessment (e.g., Jescovitch et al., 2021; Maestrales et al., 2021; Nehm & Haertig, 2011; Nehm et al., 2011; Shiroda et al., 2021; Zhai et al., 2020b). Common to all of these approaches is the use of training and testing data as part of supervised ML.…”
Section: Supervised ML To Replicate and Scale Human Coding (mentioning)
confidence: 99%
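To make the workflow described in the statement above concrete, here is a minimal sketch, assuming a scikit-learn stack and made-up responses and codes (not the cited studies' actual data, rubrics, or algorithms): a text classifier is trained on human-coded constructed responses, and its predictions are compared with human scores on a held-out test split.

```python
# Minimal sketch of supervised ML that replicates human codes:
# train on human-scored responses, evaluate agreement on held-out data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical human-coded responses (0 = incorrect, 1 = correct,
# 2 = correct with multi-dimensional reasoning); real studies use
# hundreds or thousands of scored responses.
responses = [
    "Heating the gas makes particles move faster, so pressure rises.",
    "Because the container is big.",
    "Faster particles hit the walls more often, which is evidence that pressure increases.",
    "The pressure goes up when temperature goes up.",
    "It stays the same.",
    "More frequent collisions transfer more momentum to the walls, explaining the observed change.",
]
human_codes = [1, 0, 2, 1, 0, 2]

X_train, X_test, y_train, y_test = train_test_split(
    responses, human_codes, test_size=2, random_state=0
)

# One common baseline: TF-IDF features with a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

machine_codes = model.predict(X_test)
print("machine-human agreement (accuracy):", accuracy_score(y_test, machine_codes))
```

The cited studies may use different feature representations and algorithms; the point of the sketch is only the train/test structure common to supervised approaches.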
“…As an example of such an approach, Maestrales et al. (2021) examined the written responses from almost 7,000 high school students from the states of California and Michigan in the United States to four items drawn from the National Assessment of Educational Progress (NAEP). First, the authors established the degree of human-human agreement for the use of coding frames for each of the four items; the coding frames had three levels: a) correct, b) incorrect, and c) correct in a way that also draws on what the authors termed "multi-dimensional" reasoning, that is, reasoning that draws not only on an understanding of disciplinary core ideas but also on science and engineering practices and crosscutting concepts.…”
Section: Supervised ML To Replicate and Scale Human Coding (mentioning)
confidence: 99%
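The first step in that statement, establishing human-human agreement on a three-level coding frame before training an automated scorer, can be sketched as follows; the rater codes are made up and any agreement values are illustrative, not Maestrales et al.'s (2021) results.

```python
# Minimal sketch of checking human-human agreement on a three-level coding frame
# (0 = incorrect, 1 = correct, 2 = correct with multi-dimensional reasoning).
from sklearn.metrics import cohen_kappa_score

# Codes assigned independently by two hypothetical human raters to the same ten responses.
rater_a = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
rater_b = [0, 1, 2, 1, 1, 2, 1, 2, 0, 2]

exact_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"exact agreement: {exact_agreement:.2f}, Cohen's kappa: {kappa:.2f}")

# A quadratically weighted kappa is sometimes preferred when the levels are ordered.
weighted_kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"quadratic weighted kappa: {weighted_kappa:.2f}")
```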
“…To meet the vision, science educators have to engage students in practices to improve students' competence to construct explanations, figure out solutions, and solve problems. The articles in this special issue made substantial contributions by tapping into science learning that is embedded with complex scientific practices such as modeling (Zhai et al., 2020c), scientific argumentation (Lee et al., 2021; Wang et al., 2020), investigation (Maestrales et al., 2021), multimodal representational thinking (Sung et al., 2020), explanation (Jescovitch et al., 2020), and epistemic knowledge of model-based explanation (Rosenberg & Krist, 2020). For example, in their study, Maestrales et al. (2021) employed ML to automatically score students' performance by the dimension of science learning and achieved high scoring accuracy.…”
Section: Allows Assessment Practices To Target Complex Diverse and Structural Constructs And Thus Better Approaching The Science Learning (mentioning)
confidence: 99%
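Scoring "by the dimension of science learning", as described in the statement above, can be sketched as multi-label text classification: each response receives a separate binary code for each of the three dimensions (disciplinary core ideas, science and engineering practices, crosscutting concepts). The rubric, labels, and model choice below are hypothetical, not the cited study's.

```python
# Minimal sketch of per-dimension scoring as multi-label classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

DIMENSIONS = ["DCI", "SEP", "CCC"]

responses = [
    "Particles move faster when heated, and the data I collected support that claim.",
    "The temperature went up.",
    "Energy transfer explains the pattern of change we observed across all three trials.",
    "I don't know.",
]
# One binary label per dimension for each response (1 = dimension evident in the response).
labels = [
    [1, 1, 0],
    [0, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]

model = make_pipeline(
    TfidfVectorizer(),
    MultiOutputClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(responses, labels)

new_response = ["Heating increases particle speed, which is evidence for the pressure pattern."]
for dim, score in zip(DIMENSIONS, model.predict(new_response)[0]):
    print(f"{dim}: {'evident' if score else 'not evident'}")
```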
“…The articles in this special issue made substantial contributions by tapping into science learning that is embedded with complex scientific practices such as modeling (Zhai et al., 2020c), scientific argumentation (Lee et al., 2021; Wang et al., 2020), investigation (Maestrales et al., 2021), multimodal representational thinking (Sung et al., 2020), explanation (Jescovitch et al., 2020), and epistemic knowledge of model-based explanation (Rosenberg & Krist, 2020). For example, in their study, Maestrales et al. (2021) employed ML to automatically score students' performance by the dimension of science learning and achieved high scoring accuracy. Research by both Jescovitch et al. (2020) and Wang et al. (2020) aligned their machine scores with learning progressions, the developmental cognitive features of students' learning.…”
Section: Allows Assessment Practices To Target Complex Diverse and Structural Constructs And Thus Better Approaching The Science Learning (mentioning)
confidence: 99%