This study examined the relationship between scores on the TOEFL Internet-Based Test (TOEFL iBT®) and academic performance in higher education, defined here in terms of grade point average (GPA). The academic records of 2,594 undergraduate and graduate students were collected from 10 universities in the United States. The data consisted of students' GPAs, detailed course information, and admissions-related test scores, including TOEFL iBT, GRE, GMAT, and SAT scores. Correlation-based analyses were conducted for subgroups defined by academic status and discipline. Expectancy graphs complemented the correlation-based analyses by presenting predictive validity as the proportion of individuals in each TOEFL iBT score subgroup who fell into each GPA subgroup. The predictive validity expressed in terms of correlation did not appear to be strong. Nevertheless, the general pattern shown in the expectancy graphs indicated that students with higher TOEFL iBT scores tended to earn higher GPAs and that the TOEFL iBT provided information about the future academic performance of non-native English-speaking students beyond that provided by other admissions tests. These observations led us to conclude that even a small correlation might indicate a meaningful relationship between TOEFL iBT scores and GPA. Limitations and implications are discussed.
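To make the expectancy-graph idea concrete, the sketch below cross-tabulates TOEFL iBT score bands against GPA bands and reports, for each score band, the share of students falling into each GPA band. This is an illustration only: the data are simulated and the cut points are hypothetical, not those used in the study.

```python
import numpy as np
import pandas as pd

# Simulated data: TOEFL iBT total scores (0-120) and GPAs (0.0-4.0).
rng = np.random.default_rng(0)
n = 500
toefl = rng.integers(60, 121, size=n)
gpa = np.clip(2.0 + 0.01 * (toefl - 60) + rng.normal(0, 0.5, size=n), 0.0, 4.0)

df = pd.DataFrame({"toefl": toefl, "gpa": gpa})

# Bin both variables into subgroups (cut points are illustrative).
df["toefl_band"] = pd.cut(df["toefl"], bins=[59, 79, 99, 120],
                          labels=["60-79", "80-99", "100-120"])
df["gpa_band"] = pd.cut(df["gpa"], bins=[0.0, 2.5, 3.0, 3.5, 4.0],
                        labels=["<2.5", "2.5-3.0", "3.0-3.5", "3.5-4.0"],
                        include_lowest=True)

# Expectancy table: within each TOEFL band, the share of students
# landing in each GPA band (rows sum to 1).
expectancy = pd.crosstab(df["toefl_band"], df["gpa_band"], normalize="index")
print(expectancy.round(2))
```

Read row by row, the table conveys the same information an expectancy graph does: if the higher score bands concentrate more of their mass in the higher GPA bands, scores carry predictive information even when the overall correlation is modest.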
Data from 787 international undergraduate students at an urban university in the United States were used to demonstrate the importance of separating a sample into meaningful subgroups when evaluating the ability of an English language assessment to predict first-year grade point average (GPA). For example, when all students were pooled in a single analysis, the correlation of scores from the Test of English as a Foreign Language (TOEFL) with GPA was .18; in a subsample of engineering students from China, the correlation with GPA was .58, or .77 when corrected for range restriction. Similarly, the corrected correlation of the TOEFL Reading score with GPA for Chinese business students changed dramatically (from .01 to .36) when students with an extreme discrepancy between their receptive (reading/listening) and productive (speaking/writing) scores were trimmed from the sample.
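Corrections for range restriction of this kind are conventionally computed with Thorndike's Case 2 formula, r_c = ru / sqrt(1 + r²(u² − 1)), where u is the ratio of the predictor's standard deviation in the unrestricted population to that in the restricted sample. The sketch below applies the formula; the SD ratio of 1.6 is an assumed value for illustration, not a figure taken from the study.

```python
import math

def correct_range_restriction(r_obs: float, sd_ratio: float) -> float:
    """Thorndike Case 2 correction for direct range restriction.

    r_obs    -- correlation observed in the restricted sample
    sd_ratio -- SD of the predictor in the unrestricted population
                divided by its SD in the restricted sample (> 1)
    """
    u = sd_ratio
    return (r_obs * u) / math.sqrt(1 + r_obs**2 * (u**2 - 1))

# Hypothetical example: an observed r of .58 and an assumed SD ratio of 1.6.
print(round(correct_range_restriction(0.58, 1.6), 2))
```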
Accurate placement within the levels of an ESL program is crucial for optimal teaching and learning. Commercially available tests are commonly used for placement, but their effectiveness has been found to vary. This study uses data from the Ohio Program of Intensive English (OPIE) at Ohio University to examine the value of two commercially available tests (the TOEFL ITP and the Michigan EPT) and a locally developed writing test for accurate placement decisions. Placement accuracy was measured in terms of the relationship between test scores and (1) appropriate placement levels for individual students according to their teachers, and (2) student performance in the classes. Findings support the continued use of multiple measures for more accurate placement decisions in the study context. However, when analyzed through multiple regression and cross-tabulation, the relationship between test scores and student performance, measured by students' grades in the assigned course levels and by their success in advancing to a higher level, was weak, suggesting that once students have been accurately placed, factors other than initial proficiency are the primary determinants of their success.
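As a rough illustration of the multiple-regression side of such an analysis, the sketch below regresses a simulated course grade on three simulated placement measures and reports the model's R². All variable names, scales, and effect sizes are hypothetical; a small R², as reported in the study, would indicate that placement scores explain little of the variation in subsequent performance.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: two commercial test scores and a local writing test,
# with final course grade (0-4 scale) as the outcome.
rng = np.random.default_rng(1)
n = 300
toefl_itp = rng.normal(480, 40, n)
michigan_ept = rng.normal(70, 10, n)
writing = rng.normal(3.0, 0.8, n)
grade = np.clip(1.5 + 0.002 * toefl_itp + 0.005 * michigan_ept
                + 0.1 * writing + rng.normal(0, 0.6, n), 0, 4)

# Multiple regression of grade on the three placement measures.
X = sm.add_constant(np.column_stack([toefl_itp, michigan_ept, writing]))
model = sm.OLS(grade, X).fit()
print(model.rsquared)  # a small R^2 would echo the weak relationship reported
```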
A common use of language tests is to support decisions about examinees, such as placement into appropriate classes. Research on placement testing has focused on English for Academic Purposes (EAP) in higher education contexts. However, there is little research exploring the use of language tests to place students in English as a Second Language (ESL) support classes in secondary education. The present study examined the relationship between secondary school students' scores on a standardized English-language test and the placement of these students into ESL classes by their language teachers. Ninety-two ESL students in two English-medium schools took the TOEFL® Junior™ Standard test. For the same students, data collection also included teachers' judgments regarding the ESL classes the students should attend. Strong correlations between test scores and teacher-assigned ESL levels were found. Moreover, the results of the logistic regression analysis indicated a high degree of overlap between the teacher-assigned ESL levels and the levels predicted from TOEFL Junior Standard scores. The findings of this study provide some preliminary evidence to support the use of TOEFL Junior Standard as an initial screening tool for ESL placement. The limitations and implications of these findings for ESL placement decisions in secondary education are also discussed.
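A minimal sketch of this kind of analysis, assuming a multinomial logistic regression of teacher-assigned levels on test scores, is given below. The score scale, level cut points, and data are all simulated for illustration; the agreement rate printed at the end corresponds to the "overlap" between teacher-assigned and model-predicted levels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated data: test total scores and teacher-assigned ESL levels
# (0 = beginner, 1 = intermediate, 2 = advanced); cut points are hypothetical.
rng = np.random.default_rng(2)
n = 92
scores = rng.integers(600, 901, size=n)
levels = np.digitize(scores + rng.normal(0, 30, n), [700, 800])

# Standardize the single predictor, then fit a multinomial logistic regression.
X = ((scores - scores.mean()) / scores.std()).reshape(-1, 1)
clf = LogisticRegression(max_iter=1000).fit(X, levels)
predicted = clf.predict(X)

# Agreement between teacher-assigned and model-predicted levels.
print((predicted == levels).mean())
```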
This study examined the influence of prompt characteristics on the average scores given to test-taker responses on the TOEFL iBT™ integrated Read-Listen-Write (RLW) writing tasks across multiple administrations from 2005 to 2009. In the context of TOEFL iBT RLW tasks, the prompt consists of a reading passage and a lecture. To characterize individual prompts, 107 previously administered RLW prompts were rated by participants on nine measures of perceived task difficulty via a questionnaire. Because some of the RLW prompts were administered more than once, multilevel modeling analyses were conducted to examine the relationship between ratings of the prompt characteristics and the average RLW scores, while accounting for dependency among the observed average RLW scores and controlling for differences in the English ability of test takers across administrations. Results showed that some of the variation in the average RLW scores was attributable to differences in the English ability of the test takers across administrations. Two variables related to perceived task difficulty, the distinctness of ideas within the prompt and the difficulty of ideas in the passage, were also identified as potential sources of variation in the average RLW scores.
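A minimal sketch of such a multilevel model, assuming a random intercept for each prompt to handle repeated administrations and a covariate for cohort ability, is shown below with simulated data. The variable names and effect sizes are hypothetical, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: each row is one administration of one RLW prompt.
rng = np.random.default_rng(3)
prompts = np.repeat(np.arange(40), 3)       # each prompt administered 3 times
difficulty = rng.normal(0, 1, 40)[prompts]  # prompt-level difficulty rating
ability = rng.normal(0, 1, len(prompts))    # mean test-taker ability per admin
avg_score = (3.0 + 0.3 * ability - 0.2 * difficulty
             + rng.normal(0, 0.2, len(prompts)))

df = pd.DataFrame({"prompt": prompts, "difficulty": difficulty,
                   "ability": ability, "avg_score": avg_score})

# Random intercept for prompt accounts for dependency among repeated
# administrations; ability controls for cohort differences across admins.
model = smf.mixedlm("avg_score ~ difficulty + ability",
                    df, groups=df["prompt"]).fit()
print(model.summary())
```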