It is general practice to use rater judgments in speaking proficiency testing. However, it has been shown that raters' knowledge and experience may influence their ratings, both in terms of leniency and in terms of varied focus on different aspects of speech. The purpose of this study is to identify raters' relative responsiveness to fluency and linguistic accuracy in an occupational context, and to investigate whether professional and non-professional raters with broad exposure to L2 speech demonstrate similar responsiveness to these two aspects. To this end, an experimental approach was applied: fluency and accuracy were separated and systematically manipulated. As foreign accentedness of speech is known to influence raters' judgments, this factor was accounted for. Seventeen responses to a Dutch L2 exam in a vocational context were converted into four different versions manipulated for morphosyntactic accuracy and/or fluency and read by a Dutch L2 actor, resulting in 68 stimuli. Fifty-five professional raters and 41 non-trained, potential stakeholders holistically rated all stimuli. All raters had extensive prior exposure to spoken L2 Dutch. Linear mixed modeling showed that improvement of either fluency or accuracy led to significantly higher ratings by both linguistically trained and non-trained raters. This finding confirms that both groups perceive these aspects to be important features of speaking proficiency.
Raters seemed to be more responsive to improvement of accuracy than of fluency. The linguistically non-trained raters seemed to appreciate the fluency improvement more than the linguistically trained raters did, whereas the linguistically trained raters rewarded morphosyntactic improvement relatively more than the non-trained raters did. This latter effect was explained by the finding that the linguistically trained raters appeared more preoccupied with accuracy, according to their responses to a questionnaire. This result suggests that raters with linguistic expertise were more attentive to accuracy, whereas non-trained raters were relatively more attentive to fluency.
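For exposition only, a linear mixed model of the kind reported here could take roughly the following form; the predictor coding, interaction terms, and random-effects structure below are assumptions made for illustration, not the authors' exact specification:

$$
\text{rating}_{ijk} = \beta_0 + \beta_1\,\text{Fluency}_{j} + \beta_2\,\text{Accuracy}_{j} + \beta_3\,\text{Group}_{i} + \beta_4\,(\text{Fluency}_{j} \times \text{Group}_{i}) + \beta_5\,(\text{Accuracy}_{j} \times \text{Group}_{i}) + u_{0i} + v_{0k} + \varepsilon_{ijk}
$$

Here, rating_{ijk} would be rater i's holistic score for version j of speaker k's response, Fluency and Accuracy would be dummy-coded manipulation factors (improved vs. original), Group would distinguish linguistically trained from non-trained raters, u_{0i} and v_{0k} would be random intercepts for raters and speakers, and epsilon_{ijk} the residual error. In such a specification, positive estimates for beta_1 and beta_2 would correspond to the reported finding that improving either fluency or accuracy raises ratings, and the interaction terms would capture the group differences in responsiveness.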