2020
DOI: 10.48550/arxiv.2011.13384
Preprint
Automatic coding of students' writing via Contrastive Representation Learning in the Wasserstein space

Abstract: Qualitative analysis of verbal data is of central importance in the learning sciences. It is labor-intensive and time-consuming, however, which limits the amount of data researchers can include in studies. This work is a step towards building a statistical machine learning (ML) method for providing automated support for qualitative analyses of students' writing, here specifically scoring laboratory reports in introductory biology for sophistication of argumentation and reasoning. We start with a set of lab…

Cited by 3 publications (3 citation statements)
References 24 publications
“…The entire pixel embedding based learning consists of two major stages: embedding encoding and feature clustering. The RSHN network, combining convolutional GRUs [2,12] (ConvGRUs) and the stacked hourglass network [21], is employed as the backbone network for our voxel embedding. In [25], each pixel in a 2D video sequence was encoded to a high-dimensional embedding vector with the intuition that all pixels from the same cell, across space and time, should have the same feature representation (embedding).…”
Section: Cosine Embedding Based Instance Segmentation and Tracking
confidence: 99%
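The two-stage pipeline quoted above (embedding encoding, then feature clustering) can be sketched minimally: if pixels from the same instance share a feature representation, clustering reduces to grouping embedding vectors by cosine similarity. The `cluster_by_cosine` helper and the 0.9 threshold below are illustrative assumptions, not the cited papers' implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cluster_by_cosine(embeddings, threshold=0.9):
    """Greedily assign each embedding to the first prototype it is
    similar enough to; otherwise start a new cluster."""
    labels = [-1] * len(embeddings)
    prototypes = []
    for i, e in enumerate(embeddings):
        for k, proto in enumerate(prototypes):
            if cosine(e, proto) >= threshold:
                labels[i] = k
                break
        else:
            prototypes.append(e)
            labels[i] = len(prototypes) - 1
    return labels

# Four toy "pixel embeddings": the first two point one way, the last two another
embeddings = [(1.0, 0.0), (0.99, 0.1), (0.0, 1.0), (0.05, 0.99)]
labels = cluster_by_cosine(embeddings, threshold=0.9)  # → [0, 0, 1, 1]
```

In practice the encoder is trained so that same-instance pixels end up near-parallel in embedding space, which is what makes this simple angular grouping viable.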
“…Among the unsupervised clustering algorithms, mean-shift [13] is arguably one of the most widely used; it has been applied in image segmentation [8], voice processing [14], [15], object tracking [9], and vector-embedding machine learning [16]. The advantage of mean-shift is that it is a density-based (centroid-based) clustering approach that can determine the number of clusters adaptively.…”
Section: Introduction
confidence: 99%
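The adaptive behavior described above can be seen in a minimal sketch: each point is repeatedly shifted to the mean of its neighbors within a fixed bandwidth until it settles on a density mode, and points sharing a mode form a cluster, so the cluster count falls out of the data rather than being specified. This 1D flat-kernel version is illustrative only, not the cited implementation.

```python
def mean_shift_1d(points, bandwidth=1.0, tol=1e-6, max_iter=100):
    """Toy 1D mean-shift with a flat kernel.
    Returns (labels, centers); the number of centers is found adaptively."""
    modes = []
    for p in points:
        x = p
        for _ in range(max_iter):
            # mean of all points within the bandwidth window
            neighbors = [q for q in points if abs(q - x) <= bandwidth]
            new_x = sum(neighbors) / len(neighbors)
            if abs(new_x - x) < tol:
                break
            x = new_x
        modes.append(x)
    # merge modes that converged to (nearly) the same location
    labels, centers = [], []
    for m in modes:
        for i, c in enumerate(centers):
            if abs(m - c) < bandwidth / 2:
                labels.append(i)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return labels, centers

pts = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
labels, centers = mean_shift_1d(pts, bandwidth=1.0)
# two modes emerge (near 1.0 and near 5.1) without specifying k
```

Production code would use a library implementation such as scikit-learn's `MeanShift`, which generalizes this to multiple dimensions with more efficient neighbor queries.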
“…Many studies have also applied ensemble techniques such as bagging and boosting to various text-classification machine learning models to study student responses (Bertolini et al., 2021; Zhai et al., 2020a). Several studies have also used neural network models (Jiang et al., 2020; Luan et al., 2021; Rosenberg, 2021). However, to our knowledge, only a few studies for educational applications in general have leveraged Transformer-based machine learning models (Vaswani et al., 2017; Devlin et al., 2018; Raffel et al., 2019; Brown et al., 2020).…”
Section: Machine Learning Of Constructed Responses
confidence: 99%
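The ensembling idea mentioned above (bagging several base classifiers) ultimately reduces to combining per-model predictions, most simply by majority vote. The sketch below is a generic illustration with hypothetical score labels, not any cited study's pipeline.

```python
from collections import Counter

def majority_vote(per_model_preds):
    """Combine predictions from several base classifiers by majority vote.
    per_model_preds: one list of labels per base model, aligned by sample."""
    n_samples = len(per_model_preds[0])
    combined = []
    for i in range(n_samples):
        votes = Counter(preds[i] for preds in per_model_preds)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Hypothetical labels from three base models over four student responses
m1 = ["high", "low", "high", "low"]
m2 = ["high", "high", "high", "low"]
m3 = ["low", "high", "high", "low"]
ensemble = majority_vote([m1, m2, m3])  # → ["high", "high", "high", "low"]
```

In a full bagging setup each base model would be trained on a bootstrap resample of the labeled responses; the vote step itself is unchanged.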