In this study we investigated the potential for a shared-first-language (shared-L1) effect on second language (L2) listening test scores using differential item functioning (DIF) analyses. We did this in order to understand how accented speech may influence performance at the item level, while controlling for key variables including listening passages, item type, and the degrees of intelligibility, comprehensibility, and speaker accentedness. A total of 386 undergraduate and graduate students from China, Korea, and India, who were enrolled in a university in the United States, each took two listening tests. In the first session, they took a standardized listening comprehension test comprising texts recorded by native English speakers. In the second session, they took a listening comprehension test consisting of counterbalanced sets of American English-, Indian-, and Chinese-accented lectures. The results show that the shared-L1 effect is minimal. Effects are consistent for only a few narrow, detail-oriented items, on which Chinese and Korean listeners performed relatively poorly when listening to Indian speakers.
Collaborative text reconstruction tasks such as dictogloss have been suggested as effective second language (L2) learning tasks that promote meaningful interaction between learners and raise their awareness of target L2 grammatical structures. However, the effect of pair interaction on the final product may differ depending on co-participant characteristics, particularly proficiency disparities between partners. To date, most studies of how learners' differing L2 proficiency affects paired performance have focused on the ways in which language learners interact and on the quantity and quality of language-related episodes (LREs) produced (Kim & McDonough, 2008; Leeser, 2004), often sidelining learners' actual task performance. This study therefore investigates the extent to which partner L2 proficiency levels affect tangible language performance, particularly content accuracy, in a dictogloss task. Results show large gains in idea units reproduced between the first and second stages of the dictogloss across texts. However, while low-level students paired with high-level partners benefited most, this group also showed the largest variation, and, overall, proficiency pairing did not systematically affect improvement in idea units. Idea unit analyses indicated that students tended to perform better on idea units from earlier parts of the text and that some types of idea units were more discriminating than others.
In language programs, it is crucial to place incoming students into appropriate levels to ensure that course curriculum and materials are well targeted to their learning needs. Deciding how and where to set cutscores on placement tests is thus of central importance to programs, but previous studies in educational measurement disagree as to which standard-setting method (or methods) should be employed in different contexts. Furthermore, the results of different standard-setting methods rarely converge on a single set of cutscores, and standard-setting procedures within language program placement testing contexts specifically have been relatively understudied. This study aims to compare and evaluate three standard-setting procedures, namely the Bookmark method (a test-centered approach), the Borderline group method (an examinee-centered approach), and cluster analysis (a statistical approach), and to discuss the ways in which they do and do not provide valid and reliable information regarding placement cutoffs for an intensive English program at a large Midwestern university in the USA. As predicted, the cutscores derived from the different methods did not converge on a single solution, necessitating a means of judging between divergent results. We discuss methods of evaluating cutscores, explicate the advantages and limitations associated with each standard-setting method, recommend against using statistical approaches for most English for academic purposes (EAP) placement contexts, and demonstrate how specific psychometric qualities of the exam can affect the results obtained using those methods. Recommendations for standard setting, exam development, and cutscore use are discussed.
The release of a new edition of the widely used Japanese textbook Genki has been widely anticipated. While most of the changes are subtle and aesthetic, there are noticeable improvements, especially in the efforts taken to better represent the diversity of the Japanese learner population. However, the textbook's unsystematic treatment of vocabulary and grammar reflects a teaching philosophy that, already dated when the first edition was published twenty years ago, has only drifted further from the field of second language pedagogy. The disjointed treatment of literacy and oracy skills makes it difficult to understand what is "integrated" about the course it provides. Great teaching can still be achieved using Genki, but it will require considerable creativity on the part of teachers, especially those concerned with demonstrating how progress through the textbook aligns with measurable outcomes or gains in proficiency.
Our study proposes the use of a free classification task for investigating the dimensions used by listeners in their perception of nonnative sounds and for predicting the perceptual discriminability of nonnative contrasts. In a free classification task, participants freely group auditory stimuli based on their perceived similarity. The results can be used to predict discriminability and can be compared to various acoustic or phonological dimensions to determine the relevant cues for listeners. The viability of this method was examined for both a segmental contrast (German vowels) and a nonsegmental contrast (Finnish phonemic length). Perceptual similarity data from the free classification task accurately predicted discriminability in an oddity task and separately provided rich information on how the perceptual space is shaped. These results suggest that a free classification task and related analyses are informative and replicable methods for examining nonnative speech perception.