By encoding particular ideologies about language, meaning, level of proficiency, and the target writer and reader, rating criteria define and control the what and how of the assessment process. Yet one point that has been neglected, or intentionally set aside out of concerns for practicality and deference to native-speaker authority in setting assessment guidelines in EFL writing assessment contexts, is the appropriateness of the scale itself. To draw attention to the current vague rating situation and, consequently, to remedy it, the present study follows two lines of argument. First, drawing on Weir's (2005) socio-cognitive framework for validating writing assessment, it is argued that the key characteristic of scoring validity necessitates an appropriate choice of rating rubric. Second, through a critical argument, deficiencies in the present practice of adopting rating scales are revealed, and it is shown how assessment circles in native-speaker countries, by setting rating standards, control and dominate the whole process of writing assessment. To add more flesh to the argument, the ESL Composition Profile of Jacobs et al. (1981) …

Keywords: writing assessment, academic writing, rating scale, validity, construct validity, ESL Composition Profile (Jacobs et al., 1981).

Introduction

Within the past few decades, writing assessment has been a constant concern, to the extent that any new publication on written composition includes some reference to issues related to evaluating writing. Owing to the ascending importance of writing across all sections of modern society, which values written communication as an index of educational growth, pronouncing judgment on a piece of written text has taken on a significant role (Gere, 1980).
However, assessing writing faces challenges on two major frontiers: on the one hand, program-level decisions regarding placement into course levels or admission necessitate a rigorous assessment plan; on the other hand, the Pandora's box of performance assessment reveals itself in writing (McNamara, 1996), as the articulation of a sound and explicit basis for scoring writing still rests on vague ground (Gere, 1980).

The ability to make sound decisions about the writing ability of individual writers is the de facto function expected of the scoring procedures involved. Therefore, any malfunction in writing assessment raises this basic but critical question: do scoring procedures work as intended to provide a sound appraisal of writers' writing ability? Inspired by this line of inquiry, the present study proceeds to reconsider the procedures of writing assessment. In this vein, the venerable tradition of using rating scales in writing assessment is investigated. Upon contextualizing the concept of the rating scale in its theoretical background and analyzing the value-laden nature of the scales involved, the writer proceeds to underscore the appropriateness of r…
English for specific purposes (ESP), a popular catchphrase of present-day English language teaching programs, has been investigated from different perspectives. However, there have been only occasional forays into the role of the ESP practitioner, one of the most distinctive features in the literature. In addition to fulfilling the usual role of a language teacher, the ESP practitioner may be required to deal with administrative, personnel, cross-cultural, interdisciplinary, curricular, and pedagogical issues that may be unfamiliar to general English teachers (Hutchinson & Waters, 1990; Koh, 1988; Robinson, 1991; Waters, 1994). Consequently, practitioners who have already traveled the thorny road of professionalization can be great assets for teachers who are new to ESP programs. Moreover, the lack of an independent disciplinary status for ESP worsens the issue. Drawing on interview data and observational evidence, the present study investigates the route to professionalization of two Iranian ESP teachers in the particular context of the Petroleum University of Technology (PUT) in Ahwaz. What they say is critically analyzed in context, and the emerging guidelines are categorized accordingly. It is hoped that findings grounded, however minimally, in this particular ESP context will act as broad guidelines that bestow a more academic image on the transition from general English teacher to ESP teacher in the present 'no man's land' of Iranian ESP teaching programs.
In line with a more humanitarian movement in language testing, accountability to contextual variables in the design and development of any assessment enterprise is emphasized. When it comes to writing assessment, however, the multiplicity of rating scales developed to fit diverse contexts is headed mainly by well-known native-speaker testing agencies. In fact, EFL/ESL assessment contexts appear to be passively influenced by the symbolic authority of native-speaker assessment circles. Hence, investigating the actualities of rating practice in EFL/ESL contexts would provide a realistic view of the way assessment is conceptualized and practiced. To investigate the issue, the present study launched a wide-scale survey of the Iranian EFL writing assessment context. Results of a questionnaire and subsequent interviews with Iranian EFL composition raters revealed that the rating scale, in its common sense, does not exist: raters relied instead on their own internalized criteria, developed through long years of practice. Therefore, native-speaker legitimacy in the design and development of scales for the EFL context is challenged, and local agency in the design and development of rating scales is emphasized.
This study investigates how differences in rhetorical awareness relate to differences in the English as a Foreign Language (EFL) writing strategies employed by learners in the context of an Iranian English language institute. It is hypothesized that learners with extensive exposure to English rhetorical and cultural preferences for essay organization and argument structure may have a better command of the nuances of rhetorical structure than those without such exposure. To determine whether such a difference has any implication for the use of composition strategies, 22 advanced learners at a language institute were asked to take a discourse cloze test designed to measure their awareness of English rhetoric. Upon completion of the test, two groups (N = 10 each) of English-rhetoric-aware and rhetoric-unaware participants were formed. Two learners were then randomly selected from each group to verbalize their thoughts while writing an argumentative essay. Analysis of the think-aloud protocols, together with subsequent stimulated recall interviews, revealed noticeable qualitative and quantitative differences in strategy use between the rhetoric-aware and rhetoric-unaware writers. The findings suggest that, in line with a new turn in contrastive rhetoric studies on the one hand and the post-process movement in writing on the other, wider perspectives in contrastive rhetoric that incorporate process and cognitive views should be taken into account when delineating issues in writing pedagogy.