Background: Although teachers of English are required to assess students' speaking proficiency against the Common European Framework of Reference for Languages (CEFR), their rating ability is seldom evaluated, and the application of CEFR descriptors to the assessment of English speaking in English-as-a-foreign-language contexts has rarely been investigated. Methods: The present study first introduced a form of rater standardization training. Two trained raters then assessed the speaking proficiency of 100 learners using authentic corpus data, and their ratings were compared to evaluate interrater reliability. Next, ten samples on which Raters 1 and 2 showed exact or adjacent agreement were rated by six teachers of English in tertiary education; two of them had attended rater standardization training with Raters 1 and 2, while the other four had received no relevant training. Results: The two trained raters agreed exactly in 44% of cases, and their ratings were strongly correlated (ρ = .893). Cross-tabulation showed that Rater 2 scored higher than Rater 1 in one third of the samples and that the two raters agreed more often at the higher levels. The better rating performance of Teachers 1 and 2 suggests that rater standardization training may have enhanced their performance, whereas the unsatisfactory proportion of correctly assigned levels across the teachers overall was probably due to the large amount of subjective judgment invited by vague CEFR descriptors. Conclusions: The findings indicate that attending rater standardization training helps in assessing learners' speaking proficiency against the CEFR. The study provides a model for assessing spoken learner corpus data, adding an important dimension to future learner corpus research, and it also raises doubts about teachers' ability to evaluate students' speaking proficiency against the CEFR. As the CEFR has been widely adopted in English language teaching and assessment, it is suggested that the rater training framework established in this study, which uses learner corpus data, be offered to (prospective) teachers of English in tertiary education.
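The interrater statistics reported in this abstract (exact/adjacent agreement, Spearman's ρ, cross-tabulation) can be illustrated with a minimal sketch. This is not the authors' actual analysis: the ratings below are invented placeholders, assuming CEFR levels have been mapped to ordinal scores (e.g., A1 = 1 … C2 = 6).

```python
# Minimal sketch of interrater agreement statistics for two raters' ordinal CEFR scores.
# The ratings are hypothetical, for illustration only.
import pandas as pd
from scipy.stats import spearmanr

rater1 = [3, 4, 2, 5, 3, 4, 4, 2, 3, 5]
rater2 = [3, 5, 2, 5, 4, 4, 5, 2, 3, 5]

pairs = pd.DataFrame({"rater1": rater1, "rater2": rater2})
diff = (pairs["rater2"] - pairs["rater1"]).abs()

exact_agreement = (diff == 0).mean()      # proportion of identical levels
adjacent_agreement = (diff <= 1).mean()   # identical or one level apart
rho, p = spearmanr(pairs["rater1"], pairs["rater2"])

# Cross-tabulation shows where the raters diverge (e.g., Rater 2 scoring higher).
crosstab = pd.crosstab(pairs["rater1"], pairs["rater2"])

print(f"exact: {exact_agreement:.2%}, adjacent: {adjacent_agreement:.2%}, rho = {rho:.3f}")
print(crosstab)
```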
This corpus-based study examines the widely used discourse marker well in Chinese-speaking learners’ speech and compares its frequency with that in native-speaker data and in Swedish learner data. While Swedish learners overuse well, Chinese-speaking learners (predominantly at the upper-intermediate level) significantly underuse it. The positions and functions of well are further examined using a functional framework. One-fourth of the Chinese-speaking learners who use well manipulate its position in utterances in a way similar to native speakers. In terms of function, well is employed for speech management much more frequently than for attitudinal purposes. The greater use of the former does not generally create negative effects, but the under-representation of the latter may suggest that Chinese-speaking learners sound too direct in certain contexts. The paper concludes by considering pedagogical implications for learners of different first languages and proficiency levels and their possible application to the instruction of well.
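Claims of overuse or underuse of a marker such as well are typically based on frequencies normalized across corpora of different sizes, often with a log-likelihood (G2) test. The sketch below shows that standard technique under invented counts; it is not the paper's reported procedure or data.

```python
# Minimal sketch: normalized frequency per 1,000 words and Dunning's log-likelihood (G2)
# for comparing a learner corpus against a native reference corpus. Counts are placeholders.
import math

def log_likelihood(freq1, size1, freq2, size2):
    """Dunning's G2 for a two-corpus frequency comparison."""
    expected1 = size1 * (freq1 + freq2) / (size1 + size2)
    expected2 = size2 * (freq1 + freq2) / (size1 + size2)
    g2 = 0.0
    if freq1 > 0:
        g2 += freq1 * math.log(freq1 / expected1)
    if freq2 > 0:
        g2 += freq2 * math.log(freq2 / expected2)
    return 2 * g2

learner_hits, learner_tokens = 120, 200_000   # hypothetical learner corpus
native_hits, native_tokens = 410, 150_000     # hypothetical native corpus

per_1000_learner = learner_hits / learner_tokens * 1000
per_1000_native = native_hits / native_tokens * 1000
g2 = log_likelihood(learner_hits, learner_tokens, native_hits, native_tokens)

# G2 above ~3.84 corresponds to p < .05 (chi-square, 1 df); a significantly
# lower learner rate points to underuse, a higher one to overuse.
print(f"learner: {per_1000_learner:.2f}/1,000  native: {per_1000_native:.2f}/1,000  G2 = {g2:.2f}")
```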
In spoken English, “I think” is a frequently used chunk. Its frequent occurrence in Chinese non-native speakers' (NNSs') speech has been interpreted as overuse in previous studies, such as Xu and Xu (2007) and Yang and Wei (2005). The same phenomenon is found in the present study, which is based on a detailed analysis of three corpora: the Spoken English Corpus of Chinese Learners (SECCL), MICASE and ICE-GB. “I think” is over-represented in the Chinese NNSs' speech in SECCL. It is questionable, however, whether the Chinese NNSs use “I think” too much, or inappropriately. An investigation of the frequency information and contexts offers an explanation of the generic constraints and national backgrounds underlying the over-representation of “I think” in the speech of Chinese NNSs, as well as revealing differences between Chinese NNSs and NSs.
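The contextual side of such an investigation usually starts from keyword-in-context (KWIC) concordance lines. The following is a minimal sketch of that step, assuming plain-text transcripts; the sample sentence and window size are illustrative, not taken from SECCL, MICASE or ICE-GB.

```python
# Minimal sketch of a KWIC concordance for the chunk "I think" in plain-text transcripts.
import re

def kwic(text, phrase="I think", window=40):
    """Yield (left context, match, right context) for each occurrence of the phrase."""
    for m in re.finditer(re.escape(phrase), text, flags=re.IGNORECASE):
        left = text[max(0, m.start() - window):m.start()]
        right = text[m.end():m.end() + window]
        yield left, m.group(), right

sample = "Well I think that the answer is clear, but I think we should check the data first."
for left, match, right in kwic(sample):
    print(f"{left:>40} | {match} | {right:<40}")
```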
Fluent L2 English speakers frequently use discourse markers (DMs) as a speech management strategy, but research has largely ignored how this use develops across proficiency levels and how it relates to immersive experience in target-language environments. This study examines the developmental patterns of three DMs – well, you know and like – in the speech of learners at CEFR levels A2–C1 with and without immersive experience. The fluency-rated LINDSEI corpus (173 learners) and a parallel native corpus (50 speakers) provided approximately 350,000 tokens and 3,395 instances of the analyzed DMs. Overall DM frequency (especially of well and you know) increases with rising fluency, approaching native-like levels among C1 speakers. Immersive experience correlates positively with overall and individual DM frequency (except for like). As the skillful use of DMs results in more fluent speech production, didactic implications for L2 instructors should be developed.
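A developmental analysis of this kind rests on per-speaker DM rates summarized by proficiency level and immersion status. The sketch below shows that aggregation step with an invented per-speaker table; the column names and figures are assumptions, not the LINDSEI metadata.

```python
# Minimal sketch: mean DM rate per 1,000 tokens, grouped by CEFR level and immersive experience.
# The speaker table is hypothetical, for illustration only.
import pandas as pd

speakers = pd.DataFrame({
    "cefr":      ["A2", "B1", "B2", "C1", "C1", "B2"],
    "immersion": [False, False, True, True, False, True],
    "dm_hits":   [2, 5, 14, 30, 18, 11],        # instances of well / you know / like
    "tokens":    [800, 1200, 1500, 2100, 1700, 1400],
})

speakers["dm_per_1000"] = speakers["dm_hits"] / speakers["tokens"] * 1000

# Mean normalized DM rate by level and immersion status.
summary = speakers.groupby(["cefr", "immersion"])["dm_per_1000"].mean().unstack()
print(summary.round(1))
```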
Learner corpus studies typically investigate the language of second-language learners with different first languages (L1s) or with proficiency levels inferred from external criteria (e.g., the Louvain International Database of Spoken English Interlanguage, LINDSEI; Gilquin et al., 2010). This paper reports the process of expanding the original Czech (Gráf, 2017) and Taiwanese (Huang, 2014) sub-corpora (predominantly at B2 and C1; Huang et al., 2018) with samples from learners of other L1s across CEFR levels. In addition to sixty interviews contributed by the German, Finnish and Norwegian LINDSEI teams, another eighty-three interviews were conducted with university students in Taiwan and Finland. The data collection and transcription procedures were adapted from the LINDSEI guidelines to ensure comparability. Each fourteen-minute interview was anonymised using Audacity, then orthographically transcribed and aligned in EXMARaLDA. The speaking proficiency levels of the supplemented data were assessed by two expert raters. The expanded learner corpus, containing 243 interviews, will be of considerable value for studying the development of learner English.