Changes to oral proficiency instruction and assessment in post‐secondary foreign language programs grew out of the proficiency movement of the 1970s and 1980s. The Oral Proficiency Interview (OPI) became the major approach to oral proficiency assessment in the United States. Initially developed for government use, the OPI was originally rated according to the Interagency Language Roundtable Guidelines. Over time, the ACTFL Proficiency Guidelines‐Speaking were developed for use with the OPI in academic settings, particularly at the post‐secondary level. In this paper, we discuss the strengths and limitations of the OPI and identify current controversies related to its use at the post‐secondary level. In addition, we explore new approaches to oral proficiency assessment, including computer‐mediated oral proficiency testing. We also examine the expected proficiency outcomes for foreign language students at different levels, an area that has received little research attention. Finally, we recommend ways to increase the formal use of oral proficiency assessment and to establish and publicize realistic expectations of outcomes for programs, instructors, and students.
The TOEFL iBT® test presents test takers with tasks meant to simulate the tasks required of students in English‐medium universities. Research establishing the validity argument for the test provides evidence for score interpretation and the use of the test for university admissions and placement. Now that the test has been operational for several years, additional evidence is needed to support the validity argument, as well as to identify directions for future research or changes to the test. To address this need, this study examines the extent to which students, instructors, and university administrators understand and agree with the construct of academic language underlying TOEFL iBT tasks.
A central purpose of any test is to convey information to stakeholders about examinees' performances. Scoring criteria allow test scorers, also called raters, to score a test reliably and in a manner consistent with its purpose and uses. Score reports provide a bridge from the test results to the real‐world decisions made on the basis of those results. Scoring criteria and score reports must be communicated to both technical and nontechnical audiences in meaningful ways, which often means “translating” jargon and specialized testing terms into comprehensible language. This chapter discusses types of scoring scales, approaches to scale development, and considerations for score reporting. In particular, it focuses on how test developers can write scoring criteria and score reports that convey information to test users in ways that are both accurate and understandable.