2024
DOI: 10.1093/applin/amae048
Crowdsourced Comparative Judgement for Evaluating Learner Texts: How Reliable are Judges Recruited from an Online Crowdsourcing Platform?

Peter Thwaites, Nathan Vandeweerd, Magali Paquot

Abstract: Recent studies of proficiency measurement and reporting practices in applied linguistics have revealed widespread use of unsatisfactory practices, such as the use of proxy measures of proficiency in place of explicit tests. Learner corpus research is one specific area affected by this problem: few learner corpora contain reliable, valid evaluations of text proficiency. This has led to calls for the development of new L2 writing proficiency measures for use in research contexts. Answering this call, a recent study…

Cited by 1 publication · References 36 publications