2020
DOI: 10.1111/emip.12410

A Rubric for the Detection of Students in Crisis

Abstract: For some students, standardized tests serve as a conduit to disclose sensitive issues of harm or distress that may otherwise go unreported. By detecting this writing, known as crisis papers, testing programs have a unique opportunity to assist in mitigating the risk of harm to these students. The use of machine learning to automatically detect such writing is necessary in the context of online tests and automated scoring. To achieve a detection system that is accurate, humans must first consistently label the …

Cited by 6 publications (10 citation statements). References 4 publications.
“…Some prior work has directly explored the problem of detecting disturbing content in student responses. Earlier research at ACT produced a disturbing content pipeline based on an ensemble of multiple non-neural machine learning methods trained on selected Reddit posts (Burkhardt et al., 2017). A study by the American Institutes for Research built a classifier for a large internal dataset of constructed responses and compared the performance of several varieties of recurrent neural network architectures.…”
Section: Disturbing Content Detection (mentioning)
confidence: 99%
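As a rough illustration of the kind of non-neural ensemble pipeline the statement above attributes to the earlier ACT work, the sketch below combines a few classical scikit-learn classifiers over TF-IDF features with soft voting. The feature set, model choices, and training data are assumptions for illustration, not the published pipeline.

```python
# Minimal sketch: a non-neural ensemble for flagging disturbing content.
# The specific models and features here are illustrative assumptions only.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

ensemble = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
    ("vote", VotingClassifier(
        estimators=[
            ("logreg", LogisticRegression(max_iter=1000)),
            ("nb", MultinomialNB()),
            ("rf", RandomForestClassifier(n_estimators=200)),
        ],
        voting="soft",  # average predicted probabilities across models
    )),
])

# texts: list of documents (per the cited work, selected Reddit posts)
# labels: 1 = disturbing content, 0 = not disturbing
# ensemble.fit(texts, labels)
# flags = ensemble.predict(new_responses)
```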
“…The older version (Burkhardt et al., 2017) only uses Reddit sources, while MTL-Health is able to make predictions on varying input domains and problem formulations. Whereas the old system used an ensemble of non-neural machine learning models all trained on one dataset, MTL-Health builds an ensemble that incorporates information from several different datasets into its predictions.…”
Section: Comparison To Prior Work (mentioning)
confidence: 99%
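A minimal sketch of the multi-dataset ensembling idea attributed to MTL-Health above: separate models are fit on different source corpora and their probability estimates are averaged at prediction time. The datasets, model family, and equal weighting are assumptions for illustration, not the published system.

```python
# Illustrative multi-dataset ensemble: one model per source corpus,
# with probabilities averaged at inference time (not the MTL-Health design itself).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def fit_per_dataset(datasets):
    """datasets: dict of name -> (texts, labels); returns one fitted model per dataset."""
    models = {}
    for name, (texts, labels) in datasets.items():
        model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        models[name] = model.fit(texts, labels)
    return models

def ensemble_predict_proba(models, texts):
    """Average the positive-class probability across all per-dataset models."""
    probs = [m.predict_proba(texts)[:, 1] for m in models.values()]
    return np.mean(probs, axis=0)
```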
“…These alerts are identified by humans or automated engines and routed to schools for intervention to support the student. A recently published study described a process used to develop a three-tier rubric for labeling student writing as 'normal', 'concerning', or 'alert' and illustrated that both humans and automated engines can classify crisis alerts reliably (Burkhardt et al., 2021). While the classification of responses into crisis alert categories is critically important, understanding the rationale behind the classification is similarly important.…”
Section: Introduction (mentioning)
confidence: 99%
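The three-tier labeling scheme described above maps naturally onto a small routing step once a response has been labeled. The sketch below is a hypothetical illustration of such routing; only the label names come from the cited rubric, and the escalation actions are assumptions.

```python
# Hypothetical routing of responses labeled with the three-tier rubric
# ('normal', 'concerning', 'alert'); the escalation actions are illustrative only.
from enum import Enum

class CrisisLabel(Enum):
    NORMAL = "normal"
    CONCERNING = "concerning"
    ALERT = "alert"

def route_response(response_id: str, label: CrisisLabel) -> str:
    """Return the action a testing program might take for a labeled response."""
    if label is CrisisLabel.ALERT:
        return f"Escalate {response_id} to the school for intervention."
    if label is CrisisLabel.CONCERNING:
        return f"Queue {response_id} for human review."
    return f"No action for {response_id}; continue normal scoring."

print(route_response("resp-001", CrisisLabel.ALERT))
```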
“…Even before the pandemic, the ML revolution had already impacted most, if not all, disciplines. For example, in educational measurement, researchers and practitioners have successfully applied ML in various areas, including content validity (Anderson et al., 2020), item/test development (Rafatbakhsh et al., 2021), test security (Ferrara, 2017), marking/scoring (Ercikan & McCaffrey, 2022), and crisis prediction (Burkhardt et al., 2020), to name a few. Meanwhile, various new ML-based educational disciplines have emerged (e.g., educational data mining, learning analytics, and computational psychometrics; von Davier et al., 2021).…”
(mentioning)
confidence: 99%