It has been claimed that the first language (L1) optimal listening rate (LR) is comparable to the reading rate (RR) of college students if the material is relatively easy (e.g., Hausfeld, 1981). However, it is questionable whether these two rates are comparable for second language (L2) learners, who have not had the same amount of exposure to spoken English as L1 learners. This study seeks to answer this question by establishing and examining the relationship between the LRs and RRs of 56 Japanese college students of English at different proficiency levels. Experimental results showed that optimal LRs and RRs are also similar among English as a foreign language (EFL) learners. However, a majority of the less proficient learners in the study encountered considerable difficulty in listening comprehension; consequently, it was difficult to estimate their optimal LRs. Important pedagogical implications for English teaching and learning are discussed.

In recent years, the role of temporal variables in facilitating listening comprehension has been discussed by many researchers in the field of second language acquisition (SLA). One of the important temporal variables that affect comprehension is listening rate (or speech rate; see Griffiths, 1991). Anderson-Hsieh and Koehler (1989) claimed that even for native speakers of English, the rate of speech is a critical factor, especially for the comprehension of heavily accented English speech. Schmidt-Rinehart (1994) reported that 40% of the university students studying Spanish in her study felt that the most difficult part of listening to academic lectures was the "linking" concept in pronunciation, and 14% of them felt that the rate of speech had the greatest effect on comprehensibility. A survey conducted by Powers (1985) also revealed that, among various specific listening activities, teachers perceived nonnative speakers as having particular difficulty in following lectures delivered at faster rates. Given this apparent sensitivity of nonnative speakers to speech rate, Griffiths (1991) stressed that rate variation can be incorporated into listening comprehension teaching methodology through the use of speech compression and expansion.

Also, the cognitive processes involved in listening and reading comprehension seem to be quite similar. For example, O'Malley and Chamot (1990) claimed that both listening and reading comprehension are viewed theoretically as active processes in which individuals focus on selected aspects of aural or visual input, construct meaning from passages, and relate what they hear or read to existing knowledge. These similarities imply that there may well be a strong relationship between the LRs and RRs of EFL learners.

In this respect, research on EFL learners' LRs (or speech rates) should not always be treated separately from research on their RRs. Because little is as yet known about the direct relationship between the two, this study investigates the relationship between EFL learners' LRs and RRs.
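To make the point about speech compression and expansion concrete, the sketch below shows how listening materials might be time-stretched without altering pitch. It is a minimal illustration, not taken from the study: the use of the librosa and soundfile libraries, the file names, and the stretch factors are all assumptions made for the example.

```python
# A minimal sketch (not from the study) of producing listening-rate variation
# via speech compression/expansion with the librosa audio library.
# File names and stretch factors are illustrative assumptions.
import librosa
import soundfile as sf

y, sr = librosa.load("lecture_clip.wav", sr=None)  # load at the native sample rate

for rate in (0.8, 1.0, 1.25):  # slower, original, and faster delivery
    stretched = librosa.effects.time_stretch(y, rate=rate)  # tempo changes, pitch preserved
    sf.write(f"lecture_clip_x{rate}.wav", stretched, sr)
```

Rates below 1.0 expand the recording (slower delivery) and rates above 1.0 compress it (faster delivery), which is the kind of manipulation rate-variation studies typically rely on.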
In recognition of the rating scale as a crucial tool of performance assessment, this study aims to establish a rating scale suitable for the Story Retelling Speaking Test (SRST), a semi-direct test of speaking ability in English as a foreign language (EFL) for classroom use. To identify an appropriate scale, three rating scales, all designed to have diagnostic functions, were developed for the SRST and compared in terms of their reliability, validity, and practicality. The three scales were: (a) an empirically derived, binary-choice, boundary-definition (EBB1) scale with four criteria (Communicative Efficiency, Content, Grammar & Vocabulary, and Pronunciation); (b) an EBB2 scale, modified from the EBB1, with three criteria (Communicative Efficiency, Grammar & Vocabulary, and Pronunciation); and (c) a multiple-trait (MT) scale, modified from the EBB2 but cast in a conventional analytic scale format. The comparison revealed that the EBB2 was the most reliable and valid measure for assessing speech performance in the context of story retelling. However, the MT was shown to be the most practical, while the EBB2 permitted more careful scoring, which suggests that the format of a rating scale influences test qualities.

INTRODUCTION

There is a growing awareness of teachers' responsibility to assess their students' learning and of the impact that assessment has on learning (e.g., Hill & McNamara, 2012). This study therefore focuses on the development of a rating scale for classroom assessment. Among the many factors affecting the assessment of speaking performance, such as raters, rating scales, interlocutors, elicitation tasks, and test-taker proficiency (Fulcher, 2003; Luoma, 2004), rating scales have been especially scrutinized because they "provide an operational definition of a linguistic construct" (Fulcher, 2003, p. 89) and should properly reflect the construct, or what we intend to assess (McNamara, 1996). In this regard, developing valid and reliable rating scales is of great importance in successfully assessing speaking performance.

In addition, one of the greatest challenges in performance assessment is practicality. Rating procedures often take a great deal of time because they require teachers to listen to each student's performance individually. Moreover, the use of commercially available speaking tests imposes a financial burden on students. For these reasons, classroom teachers are reluctant to use such tests to assess classes of about 40 students (e.g., Honda, 2007). Time- and cost-effectiveness are therefore particularly important for tools used in practical classroom assessment.

The speaking test for which the scale is being created is the Story Retelling Speaking Test (SRST), a user-friendly, semi-direct speaking test that uses an integrated reading-to-retell task developed for classroom use by the authors (see the "Procedure of the SRST" section and Appendix A; Hirai & Koizumi, 2009). On the basis of the results of the questionnaire us...
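As a hedged illustration of the reliability comparison described above, the sketch below shows one common way inter-rater agreement on band scores can be quantified, using quadratic-weighted Cohen's kappa. The two raters' scores are invented, and this is not the study's actual analysis; it only shows the general kind of check such a comparison might involve.

```python
# Illustrative only: inter-rater agreement on hypothetical band scores,
# measured with quadratic-weighted Cohen's kappa (scikit-learn).
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 4, 2, 5, 3, 4, 1, 3, 4, 2]   # hypothetical band scores, rater A
rater_b = [3, 4, 3, 5, 3, 3, 2, 3, 4, 2]   # hypothetical band scores, rater B

kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Quadratic-weighted kappa: {kappa:.2f}")
```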
The present study aims to clarify the effects of study abroad (SA) duration and predeparture proficiency on the second language (L2) progress of Japanese students of English. As a first step toward this goal, studies on SA of one month or less (short-term), of more than one month to less than six months (middle-term), and of six months or more (long-term) were reviewed extensively. Next, 31 studies, all of which reported SA students' pre- and post-test scores, were selected, and effect sizes of the students' L2 gains were calculated so that, by means of a meta-analysis, comparisons could be made among the three lengths of SA and among three proficiency levels defined by the students' pre-test scores. The results showed that the magnitude of the effect of long-term SA was more than twice as great as that of middle-term SA and more than four times as great as that of short-term SA. The second factor analyzed in this study, students' predeparture proficiency, did not appear to be an influential predictor of L2 gains. However, further analysis revealed an interaction between the two factors: low-proficiency students tended to attend shorter-term SA programs.
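To make the effect-size step concrete, the sketch below shows one way a standardized pre-to-post gain and a weighted mean effect across studies could be computed. The figures are illustrative, not the reviewed studies' data; Cohen's d with a pooled standard deviation is assumed as the effect-size index, and simple sample-size weighting stands in for the inverse-variance weighting a full meta-analysis would normally use.

```python
# A minimal sketch (illustrative numbers) of computing pre/post effect sizes
# and a sample-size-weighted mean effect across studies.
import math

def cohens_d(mean_pre, sd_pre, mean_post, sd_post):
    """Standardized pre-to-post gain using the pooled standard deviation."""
    pooled_sd = math.sqrt((sd_pre ** 2 + sd_post ** 2) / 2)
    return (mean_post - mean_pre) / pooled_sd

# (n, pre mean, pre SD, post mean, post SD) for three hypothetical SA studies
studies = [(20, 450, 60, 470, 65), (35, 430, 55, 475, 60), (15, 460, 70, 500, 68)]

ds = [cohens_d(pre, sd1, post, sd2) for _, pre, sd1, post, sd2 in studies]
weights = [n for n, *_ in studies]
weighted_mean_d = sum(w * d for w, d in zip(weights, ds)) / sum(weights)

print("Per-study d:", [round(d, 2) for d in ds])
print("Weighted mean d:", round(weighted_mean_d, 2))
```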
Among different types of rating scales for scoring speaking performance, the EBB (empirically derived, binary-choice, boundary-definition) scale is claimed to be easy to use and highly reliable (Turner & Upshur, 1996, 2002). However, it has been questioned whether the EBB scale can be applied to other tasks. Thus, in this study, an EBB scale was compared with an analytic scale in terms of validity, reliability, and practicality. Fifty-two EFL learners were asked to read and retell four stories in a semi-direct Story Retelling Speaking Test (SRST). Their performances were scored using these two rating scales, and ...