2018
DOI: 10.1186/s40468-018-0069-0
Evaluating CEFR rater performance through the analysis of spoken learner corpora

Abstract: Background: Although teachers of English are required to assess students' speaking proficiency against the Common European Framework of Reference for Languages (CEFR), their ability to rate is seldom evaluated. Nor has the application of the CEFR descriptors to the assessment of English speaking been often investigated in English-as-a-foreign-language contexts. Methods: The present study first introduced a form of rater standardization training. Two trained raters then assessed the speaking proficiency…

Cited by 34 publications (11 citation statements)
References 21 publications
“…It signifies that the severity practiced by the two groups was not identical. This finding is consistent with those of Attali (2016), Davis (2016), and Huang et al. (2018), who reported that raters with varying rating experience provided heterogeneous ratings, even though the studies were implemented in different contexts. This consistency may be due to how rating experience among raters was operationally defined.…”
supporting
confidence: 91%
“…Empirically, conflicting findings emerged from the literature in terms of how raters' experience has impacted rating quality. Raters of different experience levels were reported to show distinct rating quality in some studies (Davis, 2016; Huang et al., 2018; Kim, 2015), but such differences were not observed in others (Ahmadi Shirazi, 2019; Isaacs & Thomson, 2013; Şahan & Razı, 2020).…”
Section: Introduction
confidence: 94%
“…It is not a common practice to evaluate and monitor a teacher's or rater's performance in IEPs (Huang et al., 2018). The author of this paper believes there are several reasons behind this.…”
Section: Problem
confidence: 96%
“…It was learned that raters' severity remained consistent before and after the training, yet the level of agreement among raters improved by the end of the scoring sessions. Bijani (2018) and Huang, Kubelec, Keng, & Hsu (2018) drew their samples from both experienced and inexperienced raters. Both studies concluded that the two groups of raters were able to attain the same standard of inter-rater reliability after rater training was given.…”
Section: Factors Influencing Rater Accuracy in Language Testing
confidence: 99%