This study addresses a long-standing question in writing instruction: what is the most effective way to give feedback on students' errors in writing? It compares the effects of error correction and error detection on the improvement of students' writing ability. To this end, 60 pre-intermediate English learners were randomly divided into two groups: a Direct Feedback Group (DFG), which received feedback on its writing through error correction, and an Indirect Feedback Group (IFG), which received feedback through error detection accompanied by correction codes. The learners were attending English classes at a private language center and, when they received indirect feedback, were expected to self-correct and resubmit their writing. The results suggest that coded error detection led to greater improvement in the learners' writing than error correction.
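A two-group comparison like this is typically tested with an independent-samples t-test on the groups' writing scores. The sketch below uses Welch's t statistic with entirely hypothetical scores (the abstract reports no raw data); it illustrates the standard technique, not the authors' actual analysis.

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical post-test writing scores (0-20 scale), for illustration only.
dfg = [12, 13, 11, 14, 12, 13, 12, 11]   # Direct Feedback Group (error correction)
ifg = [15, 16, 14, 15, 17, 14, 16, 15]   # Indirect Feedback Group (coded error detection)

t = welch_t(ifg, dfg)
print(f"Welch's t = {t:.2f}")  # a positive t here favours the IFG mean
```

With real data one would also report degrees of freedom and a p-value; the statistic alone shows the direction and size of the group difference relative to its standard error.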
The current research examines the immediate and delayed effects of three types of corrective feedback, namely recasts, metalinguistic feedback, and clarification requests, on the acquisition of English wh-question forms by Iranian EFL learners. To this end, 134 Iranian EFL learners in four intact classes participated in the study. Learners in the three classes designated as feedback groups received feedback during a meaning-focused task, while learners in the control group received none. The analysis revealed that metalinguistic feedback and recasts were effective in both the immediate and the delayed post-test. Closer inspection showed that while metalinguistic feedback was more effective than recasts in the immediate post-test, recasts had a more stable and enduring effect on learners' performance in the delayed post-test.
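Comparing post-test scores across three feedback groups and a control is usually done with a one-way ANOVA. The following is a minimal pure-Python sketch of the F statistic with invented scores (the abstract reports no raw data), shown only to make the omnibus comparison concrete.

```python
from statistics import mean

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of independent samples."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    k, n = len(groups), len(all_scores)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical immediate post-test scores (0-20), for illustration only.
metalinguistic = [18, 17, 19, 18]
recasts        = [16, 15, 17, 16]
clarification  = [13, 14, 12, 13]
control        = [12, 11, 13, 12]

groups = [metalinguistic, recasts, clarification, control]
F = one_way_anova_F(groups)
print(f"F({len(groups) - 1}, {sum(map(len, groups)) - len(groups)}) = {F:.2f}")
```

A significant F only says the group means differ somewhere; pairwise contrasts (e.g. metalinguistic vs. recasts) would follow as post-hoc tests.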
The aim of this study was to investigate the foreign language learning needs of Iranian MA students, in particular those majoring in biology, psychology, physical training, accounting, and Western philosophy. A total of 80 students from these five MA majors at the University of Isfahan participated in the study, along with twenty-five subject-specific instructors and seven English instructors. The study combined qualitative and quantitative survey methods, drawing on interviews, questionnaires, and texts, and chi-square tests were used to analyze participants' responses. The results revealed that the majority of participants were dissatisfied with the current ESP courses for MA students. Most participants called for an urgent revision and reconsideration of English instruction in the Iranian educational system and its universities, stating that Iranian students do not receive enough exposure to English to fulfill their subjective and objective needs at the MA level. Giving more weight to English in the MA entrance exam was suggested as one possible solution, on the grounds that it would increase students' motivation to improve their language proficiency; joint teaching of ESP courses was suggested as another way to help students meet their English needs at the MA level.
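The chi-square test mentioned above checks whether observed response counts depart from what chance would predict. A minimal goodness-of-fit sketch, with hypothetical satisfied/dissatisfied counts (the abstract does not report the actual frequencies):

```python
def chi_square(observed):
    """Chi-square goodness-of-fit statistic against equal expected counts."""
    expected = sum(observed) / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

# Hypothetical counts among the 80 MA students, for illustration only.
counts = [22, 58]            # satisfied, dissatisfied
stat = chi_square(counts)
df = len(counts) - 1
print(f"chi2 = {stat:.2f}, df = {df}")   # compare against 3.841 (alpha = .05, df = 1)
```

A statistic well above the critical value would support the claim that dissatisfaction is not a 50/50 split but a genuine majority pattern.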
By encoding particular ideologies about language, meaning, proficiency levels, and the target writer and reader, rating criteria define and control the what and how of the assessment process. A point that has been neglected, or deliberately set aside out of concerns for practicality and deference to native-speaker authority in setting assessment guidelines in EFL writing contexts, is the appropriateness of the scale itself. To draw attention to this vague rating situation and help remedy it, the present study pursues two lines of argument. First, drawing on Weir's (2005) socio-cognitive framework for validating writing assessment, it argues that scoring validity, a key characteristic of that framework, requires an appropriate choice of rating rubric. Second, through a critical argument, it exposes deficiencies in the current practice of adopting rating scales and shows how assessment circles in English-speaking countries, by setting rating standards, control and dominate the whole process of writing assessment. To add flesh to the argument, the ESL Composition Profile of Jacobs et al. (1981) is examined as a case in point.

Keywords: writing assessment, academic writing, rating scale, validity, construct validity, ESL Composition Profile (Jacobs et al., 1981)

Introduction

Within the past few decades, writing assessment has been a constant concern, to the extent that virtually every new publication on written composition makes some reference to issues of evaluating writing. Given the growing importance of writing across a society that values written communication as an index of educational growth, passing judgment on a piece of written text has assumed a significant place (Gere, 1980).
However, assessing writing faces challenges on two major fronts: on the one hand, program-level decisions about placement or admission demand a rigorous assessment plan; on the other, the Pandora's box of performance assessment opens in writing (McNamara, 1996), since the articulation of a sound and explicit basis for scoring writing still rests on vague ground (Gere, 1980). Making sound decisions about individual writers' ability is the de facto function expected of the scoring procedures involved. Any malfunction in writing assessment therefore raises a basic but critical question: do scoring procedures actually accomplish their purpose of providing a sound appraisal of writers' writing ability? Inspired by this line of inquiry, the present study takes a second look at the procedures of writing assessment, investigating the venerable tradition of using rating scales. After situating the concept of the rating scale in its theoretical background and analyzing the value-laden nature of the scales involved, the writer proceeds to underscore the appropriateness of r...
The present study assessed the roles of vocabulary knowledge in the reading comprehension of Iranian EFL learners. Using multivariate analysis, it examined the roles of depth and breadth of vocabulary knowledge in the reading comprehension of a group of Iranian EFL university students with a minimum vocabulary size of 3,000 word families, as measured by Schmitt's (2001) Vocabulary Levels Test. The study found that (1) scores on vocabulary breadth, vocabulary depth, and reading comprehension were positively correlated, and (2) vocabulary breadth was a stronger predictor of reading comprehension than depth of vocabulary knowledge for the participants of the present study.

Index Terms: breadth of vocabulary knowledge, depth of vocabulary knowledge, lexical threshold for reading comprehension
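The two findings rest on correlating each vocabulary measure with reading scores and comparing the strengths of those associations. A pure-Python sketch of that comparison, using invented scores (the abstract reports no raw data), is:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient, computed from deviation sums."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores, for illustration only: breadth (e.g. a levels test),
# depth (e.g. a word-associates test), and reading comprehension.
breadth = [55, 60, 48, 70, 65, 52, 58, 62]
depth   = [40, 50, 38, 55, 47, 45, 42, 49]
reading = [30, 34, 26, 40, 37, 28, 31, 35]

r_breadth = pearson_r(breadth, reading)
r_depth   = pearson_r(depth, reading)
print(f"r(breadth, reading) = {r_breadth:.2f}, r(depth, reading) = {r_depth:.2f}")
```

In a pattern matching the abstract's conclusion, both correlations would be positive, with the breadth-reading correlation the larger of the two; a multiple regression would then confirm which predictor carries more unique variance.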