ELTeach is an online professional development program consisting of two courses, English‐for‐Teaching and Professional Knowledge for English Language Teaching (ELT). Each course includes a coordinated assessment leading to a score report and certificate for individual teachers. Developed with reference to international and national teaching standards, and drawing on the resources of various national English‐language curricula and teaching materials, the program is designed to ensure that teachers have both the functional classroom English and the professional knowledge needed to implement the English‐language curricula they are expected to teach. In this document, we outline the design and development process followed in creating the two ELTeach assessments—the Test of English‐for‐Teaching (TEFT™) and the Test of Professional Knowledge (TPK™). The paper is organized into the following sections: (a) the purposes and intended uses for test scores/certificates, (b) the target populations, (c) the construct definitions, (d) the program frameworks and domain descriptions, (e) the test designs, (f) the scoring systems, and (g) directions for research in support of the program.
This paper presents the theoretical and empirical foundations of the TOEFL Junior® assessment and its development process. The TOEFL Junior test was developed to address the increasing need for objective measures of English language proficiency for young adolescent learners, who are being introduced to English as a second or foreign language at a much younger age than ever before. This paper presents the test purposes and intended uses, target population, target language use domains, and test constructs of the TOEFL Junior test. Also included is a description of the overall test structure and scoring system, which demonstrates how the constructs are operationalized. Finally, we outline research topics that support the interpretive argument for the intended uses of the test. This document is expected to serve as a reference point as validity evidence is gathered to support the intended test uses over time.
This paper is the first in a series from Educational Testing Service (ETS) concerning English language proficiency (ELP) assessments for K–12 English learners (ELs). The goal of this paper, and the series, is to present research‐based ideas, principles, and recommendations for consideration by those who are conceptualizing, developing, and implementing ELP assessments for K–12 ELs and by all stakeholders in their education and assessment. We also hope to contribute to the active discussion in the field on improving the instruction and assessment of EL students and to help bring these ideas into practice. This paper articulates a high‐level vision for a next‐generation assessment system serving K–12 English learners, with a brief overview of the current state of the art for K–12 ELP assessments and the context of current reforms. In addition, the paper discusses the role of digital technology in next‐generation K–12 ELP assessment systems, noting the distinct conceptual and practical advantages that digital technology offers in assessing ELP, as well as principles for weighing the potential challenges of implementing a digitally based assessment system against these benefits. A proposed theory of action for K–12 ELP assessment systems is also discussed. The second paper in the series addressed accessibility issues in the context of ELP assessments for ELs and ELs with disabilities (Guzman‐Orth, Laitusis, Thurlow, & Christensen, 2016); the third paper addressed issues related to summative ELP assessments that emerged from the presentations and discussions at the English Language Proficiency Assessment Research working meeting (Wolf, Guzman‐Orth, & Hauck, 2016); and the fourth paper focused on a key concern within such systems—the initial identification and classification of ELs (Lopez, Pooler, & Linquanti, 2016).
Improving Content Assessment for English Language Learners: Studies of the Linguistic Modification of Test Items (John W. Young, Teresa C. King, Maurice Cogan Hauck, Mitchell Ginsburgh, Lauren Kotloff, Julio Cabrera, & Carlos Cavalie, Educational Testing Service, Princeton, NJ). This article describes two research studies on the linguistic modification of test items from K–12 content assessments. In the first study, 120 linguistically modified mathematics and science test items taken by fourth and sixth graders showed a wide range of outcomes for English language learners (ELLs) and non‐ELLs with regard to performance on the original and modified versions of the same items. The original items were disclosed items from two states and had been used in their standards‐based assessments. The modified versions of these items were developed by a team of Educational Testing Service (ETS) assessment developers and researchers, who identified several kinds of modifications that appeared promising and applied them systematically. Ideally, modifying items improves the performance of ELLs while having little to no impact on the performance of non‐ELLs; in practice, however, both groups of students performed better on some items, about the same on others, and worse on still others. Cognitive interviews were conducted as a follow‐up study to investigate which features of the linguistically modified items produced the observed outcomes for ELLs and non‐ELLs. The results of this study added little clarity as to which of the linguistic modifications were effective and how they improved the understanding and performance of ELLs on these content items.