Various approaches help determine students' need for language support courses. Some programs use standardized language proficiency test scores, which are available from the admissions process; others use local placement tests, which are typically designed to be aligned with local language needs and ESL course curricula. Using local placement tests may be more defensible, but sufficient resources to develop, administer, and score valid assessments are often not available. Another approach is to use standardized test scores to screen out students with a high likelihood of passing the local placement test, and then administer the local placement test only to students who need further evaluation. This approach has the potential to yield accurate placements with limited resources, but little research has examined its effectiveness. This study aimed to determine the extent to which an appropriate cut score on a standardized test (TOEFL iBT Speaking) could be used to identify students who would have a high likelihood of passing the local English Placement Test of Oral Communication (EPT OC), which was designed to determine the need for taking an oral communication course. Teacher judgments of 136 students placed in the oral communication class indicated that only five did not need it. A TOEFL iBT Speaking score of 22 was found to be a reasonable cut score for exemption from the oral communication class.
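The two-stage screening procedure described above amounts to a simple decision rule. The sketch below is illustrative only: the cut score of 22 is the value reported in the abstract, but the function name and the example scores are hypothetical, not taken from the study.

```python
# Two-stage placement screening (illustrative sketch).
# Stage 1: students at or above the standardized cut score are exempted.
# Stage 2: the remaining students sit the local placement test (EPT OC).

TOEFL_SPEAKING_CUT = 22  # cut score reported in the study


def screen(toefl_speaking_score: int) -> str:
    """Return the stage-1 placement decision for one student."""
    if toefl_speaking_score >= TOEFL_SPEAKING_CUT:
        return "exempt"  # high likelihood of passing the local test
    return "take local EPT OC"  # needs further evaluation in stage 2


# Hypothetical cohort of standardized Speaking scores:
scores = [18, 22, 25, 20]
decisions = [screen(s) for s in scores]
# decisions == ["take local EPT OC", "exempt", "exempt", "take local EPT OC"]
```

The design rationale is resource-driven: only students below the cut score consume local testing resources, while the standardized score, already collected at admission, handles the clear cases.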
Studies of interaction in speaking assessment have highlighted problems regarding the unequal distribution of interaction patterns across task types. However, little attempt has been made to include both the verbal and the nonverbal interaction features elicited by these tasks. This study therefore examined the elicitation of verbal and nonverbal interaction in different task types by investigating which interaction features raters noticed when rating interaction across interview and paired discussion tasks. The study analyzed 32 verbal reports from four raters who commented on the interaction features that affected their judgments. The findings suggest that both verbal and nonverbal communication contributed to interactional effectiveness. The study also revealed that test-takers seem to have more opportunities to demonstrate their interactional ability in paired task formats than in interview formats. Pedagogical implications are also provided.

… of L2 speaking assessment raises questions about what L2 speaking is, or what constitutes L2 speaking ability. L2 speaking ability is defined as "the use of oral language to interact directly and immediately with others … with the purpose of engaging in, acquiring, transmitting, and demonstrating knowledge" (Jamieson, Eignor, Grabe, & Kunnan, 2008, p. 74). Based on this definition, Ockey and Li (2015) defined the construct of speaking in terms of four components: interactional competence (IC), appropriate use of phonology, appropriate and accurate use of vocabulary and grammar, and appropriate fluency. Many task types have been created to assess the construct of oral communication. One of the most widely used is the oral proficiency interview, in which test-takers interact with a language tester who conducts the interview based on a predetermined protocol.
In this interaction, the language tester asks questions and the test-taker gives answers. However, this interview format is unlikely to elicit authentic discourse of the kind found in conversation (Van Lier, 1989). It is also unlikely to measure some aspects of IC, such as taking turns, opening and closing gambits, and developing topics with appropriate pragmatic use (Ockey & Li, 2015). To that end, the development of more authentic task types, such as paired or group assessments, in which two or more test-takers engage in a task together without an examiner's involvement, is necessary. This is important because "the oversimplified view on human interactions taken by the proficiency movement can impair and even prevent the attainment of true interactional competence within a cross-cultural framework and jeopardize our chances of contributing to interactional understanding" (Kramsch, 1986, p. 367). Thus, it has been argued that IC should be explicitly incorporated into the concept of communicative competence (Kramsch, 1986; He & Young, 1998). Given that speaking tests, such as the Cambridge English Qualifications, have evolved to capture interaction, a definition …
The paired and group oral assessment formats involve candidates interacting together to perform a task while one or more examiners observe their performances and rate their language proficiency. Communicative language teaching popularized pair and group work in the language classroom, and pair and group work have likewise become more widespread in communicative approaches to assessment. The five examinations of the Cambridge Main Suite are particularly well known for incorporating paired tasks. Examples of tasks utilized in these exams include candidates discussing color photographs, constructing a story together when each speaker knows half of it, or making a joint decision on an issue presented in the task material. The College English Test‐Spoken English Test (CET‐SET) in China is perhaps the largest‐scale use of group tasks; here, a group of three or four candidates discusses a topic among themselves. Paired and group oral assessments are also used in schools and universities for placement testing, progress monitoring, and exit testing or matriculation, and are the subject of much validation research.
This study describes the development process and examines the construct validity of an English placement test of oral communication (EPT OC) developed at a Midwestern university in the United States. The test includes a one‐on‐one oral interview and a paired discussion task, and test performance is judged on an analytic rating scale. A confirmatory factor analysis conducted on the ratings of 338 students who took the initial fully operational EPT OC revealed that the test structure was represented by a correlated four‐factor model with interactional competence, fluency, pronunciation/comprehensibility, and grammar/vocabulary as sub‐constructs, in line with its targeted theoretical framework. Both tasks were effective in measuring the targeted sub‐constructs, but the sub‐constructs were not sufficiently distinct from each other to completely justify a four‐factor model. The findings provide some support for the proposed interpretations of EPT OC test scores but indicate the need for some modifications to the assessment, such as more thorough rater training and/or revised rating scales to better distinguish the targeted sub‐constructs.
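A correlated four-factor CFA of this kind can be written in standard measurement-model notation. The notation below is generic (LISREL-style), not taken from the article itself; it is offered only to make the model structure concrete.

```latex
% Each observed analytic rating x_i loads on exactly one latent
% sub-construct \xi_j; the four sub-constructs are allowed to covary.
x_i = \lambda_{ij}\,\xi_j + \delta_i, \qquad
\operatorname{Cov}(\xi_j, \xi_k) = \phi_{jk} \quad (j \neq k),
% with \xi_1,\dots,\xi_4 = interactional competence, fluency,
% pronunciation/comprehensibility, and grammar/vocabulary.
```

Freely estimating the factor covariances \(\phi_{jk}\) is what makes the model "correlated"; the finding that the sub-constructs were not sufficiently distinct corresponds to \(\phi_{jk}\) estimates approaching 1, at which point a simpler model with fewer factors fits nearly as well.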