By addressing particular ideologies regarding language, meaning, level of proficiency, and the target writer and reader, rating criteria define and control the what and how of the assessment process. A point that has been neglected, or intentionally concealed out of concern for practicality and for the legitimacy of native-speaker authority in setting assessment guidelines in EFL writing assessment contexts, is the appropriateness of the scale itself. To draw attention to the current vague rating situation and, consequently, to remedy it, the present study follows two lines of argument. First, drawing on Weir's (2005) socio-cognitive framework for validating writing assessment, it is argued that scoring validity, a central characteristic of the framework, necessitates an appropriate choice of rating rubrics. Second, through a critical argument, deficiencies in the present practice of adopting rating scales are revealed, and it is shown how assessment circles in native-speaker countries, by setting rating standards, control and dominate the whole process of writing assessment. To add more flesh to the argument, the ESL Composition Profile of Jacobs et al. (1981) is examined as a case in point.

Keywords: writing assessment, academic writing, rating scale, validity, construct validity, ESL Composition Profile (Jacobs et al., 1981)
Introduction

Within the past few decades, writing assessment has been a constant concern, to the extent that nearly any new publication on written composition makes some reference to issues related to evaluating writing. Owing to the rising importance of writing in a modern society that values written communication as an index of educational growth, pronouncing judgment on a piece of written text has acquired a significant place (Gere, 1980). However, assessing writing faces challenges on two major fronts: on the one hand, program-level decisions regarding placement in different levels of a course or admission purposes necessitate a rigorous assessment plan; on the other hand, the Pandora's box of performance assessment reveals itself in writing (McNamara, 1996), as there are still vague grounds in the articulation of a sound and explicit basis for scoring writing (Gere, 1980).

The ability to make sound decisions about the writing ability of individual writers is the de facto function expected of the scoring procedures involved. Therefore, any malfunction in writing assessment raises a basic but critical question: do scoring procedures work correctly to accomplish their expected purpose of providing a sound appraisal of writers' writing ability? Inspired by this line of inquiry, the present study proceeds to give a second thought to the procedures of writing assessment. In this vein, the venerable tradition of using rating scales in writing assessment is investigated. After contextualizing the concept of the rating scale in its theoretical background and analyzing the value-laden nature of the scales involved, the writer proceeds to underscore the appropriateness of rating scales.