2022
DOI: 10.1007/s40593-022-00290-6
Utilizing a Pretrained Language Model (BERT) to Classify Preservice Physics Teachers’ Written Reflections

Abstract: Computer-based analysis of preservice teachers’ written reflections could enable educational scholars to design personalized and scalable intervention measures to support reflective writing. Algorithms and technologies in the domain of research related to artificial intelligence have been found useful for many reflective writing analytics tasks, such as the classification of text segments. So far, however, mostly shallow learning algorithms have been employed. This study explores to what extent deep …

Cited by 20 publications
(7 citation statements)
References 59 publications
“…Both training datasets were split into 70% training, 15% validation, 15% test (hold-out) for article classification ( Table 1 ) and NER tasks ( Table 2 ). Following common practice, the training sets were used to fine-tune the models on their respective tasks, the validation sets were used to compare the fine-tuned models for selection, and the test sets were used to evaluate how the models perform on unseen data [ 23 ].…”
Section: Methods
confidence: 99%
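The 70%/15%/15% train/validation/test split described in the citation statement above can be sketched in two steps with scikit-learn. This is a minimal illustration on a synthetic toy dataset; the variable names and data are placeholders, not the cited study's actual pipeline.

```python
from sklearn.model_selection import train_test_split

# Hypothetical toy corpus standing in for the labeled text segments.
texts = [f"segment {i}" for i in range(100)]
labels = [i % 2 for i in range(100)]  # binary labels for illustration

# Step 1: carve off the 30% that will become validation + hold-out test.
train_x, rest_x, train_y, rest_y = train_test_split(
    texts, labels, test_size=0.30, stratify=labels, random_state=42)

# Step 2: split that 30% evenly into 15% validation and 15% test.
val_x, test_x, val_y, test_y = train_test_split(
    rest_x, rest_y, test_size=0.50, stratify=rest_y, random_state=42)

print(len(train_x), len(val_x), len(test_x))  # → 70 15 15
```

Stratifying both splits keeps the label distribution comparable across the three sets, which matters when classes are imbalanced, as is common in reflective-writing corpora.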
“…These methods require less human effort, especially when a reliable scoring rubric has already been applied to many student responses (Haudek et al, 2011). However, the mentioned natural language processing techniques are built on the simplified assumption that the word order is irrelevant to the meaning of a sentence (Wulff et al, 2022b), which complicates the detection of implicit semantic embeddings. So, traditional ML models are only sensitive to key conceptual components, which is why we define construct assessment based on shallow learning experiences as the second level of the ML-adapted ECD.…”
Section: Evidence Space
confidence: 99%
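The bag-of-words assumption criticized above, that word order is irrelevant to a sentence's meaning, can be made concrete with a tiny example: two sentences with opposite meanings receive identical bag-of-words representations. The sentences below are invented for illustration.

```python
from collections import Counter

# Two sentences that differ only in word order, and hence in meaning.
a = "the teacher praised the student not the method"
b = "the teacher praised the method not the student"

# A unigram bag-of-words model sees only the multiset of tokens.
bow_a = Counter(a.split())
bow_b = Counter(b.split())

print(bow_a == bow_b)  # → True: the order information is discarded
```

This is precisely why order-sensitive models such as BERT, whose transformer layers attend over token positions, can pick up semantics that n-gram features miss.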
“…So, future research needs to check whether such cutting-edge techniques can also be used to accurately evaluate short, content-rich scientific explanations. Maybe such technologies will produce better outcome metrics than the n-gram approach; Dood et al. (2022) and Winograd et al. (2021b) in chemistry education research as well as Wulff et al. (2022a, 2022b) in physics education research have laid the foundation for future research in this area.…”
Section: Validation Approaches
confidence: 99%
“…The quality of the manual is supported by the fact that automated assignment of the elements is now possible; this assignment is based on the codings produced with the Elements manual and shows good agreement between human and machine (Wulff et al., 2021a). The agreement scores were further improved by using large language models (Wulff et al., 2022a). The computer-based analysis has also been applied successfully to evaluate texts by preservice teachers of other subjects who reflected on a physics video vignette (Wulff et al., 2023).…”
Section: Entwicklung Manual Elemente (Development of the Elements Manual)
confidence: unclassified
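Human-machine agreement of the kind reported in the statement above is commonly quantified with a chance-corrected statistic such as Cohen's kappa. The sketch below implements the standard formula from scratch; the rater labels are hypothetical stand-ins for reflection elements, not the manual's actual categories or the reported scores.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected agreement if both raters labeled independently at random
    # according to their own marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codings of six text segments by a human and a model.
human   = ["obs", "eval", "obs", "plan", "eval", "obs"]
machine = ["obs", "eval", "obs", "eval", "eval", "obs"]
print(round(cohens_kappa(human, machine), 3))  # → 0.714
```

In practice one would use `sklearn.metrics.cohen_kappa_score`, which computes the same quantity; writing it out makes the chance correction explicit.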