2019
DOI: 10.1177/0265532219859881
Examining the assessment literacy required for interpreting score reports: A focus on educators of K–12 English learners

Abstract: This study investigated the assessment literacy required for K–12 educators to interpret score reports from a K–12 English language proficiency assessment. The assessment in question is ACCESS for ELLs, an annual summative assessment delivered to nearly 2 million English learners (ELs) across 39 US states and territories. This study was conducted in two phases. In Phase 1, an online teacher survey, consisting of 15 items, was completed by 1437 participants; data were analyzed using descriptive …

Cited by 25 publications (22 citation statements); references 20 publications.
“…Accordingly, empirical investigations of LAL have rarely incorporated learners. Studies on LAL have primarily focused on pre- and in-service language teachers (Fulcher, 2012; Kim et al, 2019; Koh et al, 2018; Lam, 2019; Lee, 2019; Levi & Inbar-Lourie, 2020; Vogt & Tsagari, 2014), higher education administrators (Baker, 2016; Deygers & Malone, 2019), policy makers (Pill & Harding, 2013), and language assessment course instructors (Jeong, 2013). Rare exceptions include Kremmel and Harding (2020) and Watanabe (2011).…”
Section: Introduction
confidence: 99%
“…Moreover, another disadvantage is that the information presented in the graphs may increase the rater’s cognitive load (Sung et al, 2016), which could weaken the rater’s motivation to consistently use it (Chen & Tsao, 2021). Thus, the findings suggest that future design should improve algorithmic performance to reduce redundant information and optimize the representations of knowledge or the hierarchy of feedback to make them more accessible and digestible (Kim et al, 2020).…”
Section: Discussion
confidence: 99%
“…Interpretability of automated assessment has three major advantages. First, interpretability can improve the CSWA system’s validity since it requires explanations of the assessment criteria and the process about how the scores are generated (Kim et al, 2020; Ploegh et al, 2009). Second, interpretability helps instructors comprehend how the assessment works and thus reduces negative factors generated by assessors’ subjectivity (e.g., writing expertise, assessment experience) (Ade-Ibijola et al, 2012; Weideman, 2019).…”
Section: Introduction
confidence: 99%
“…Prior studies have found that most language teachers had insufficient LAL. Some reported that teachers incorrectly understood language assessment ( Kiomrs et al, 2011 ; Berry et al, 2017 ), did not acquire theoretical language assessment knowledge ( Mede and Atay, 2017 ; Xu and Brown, 2017 ; Kim et al, 2020 ), designed language assessment intuitively ( Sultana, 2019 ) or inappropriately interpreted students’ test results ( Kim et al, 2020 ).…”
Section: Empirical Studies
confidence: 99%