Toward a New Theory of Writing Assessment

Brian Huot teaches graduate and undergraduate courses on writing at the University of Louisville, where he directs the composition program. He is coeditor of Assessing Writing, the only journal devoted to writing assessment. This essay culminates several years of thinking, talking, and writing about the need to articulate the theories behind writing assessment practices.

Many composition teachers and scholars feel frustrated by, cut off from, or otherwise uninterested in the subject of writing assessment, especially assessment that takes place outside of the classroom for purposes of placement, exit, or program evaluation. This distrust and estrangement are understandable, given the highly technical aspects of much discourse about writing assessment. For the most part, writing assessment has been developed, constructed, and privatized by the measurement community as a technological apparatus whose inner workings are known only to those with specialized knowledge. Consequently, English professionals have been made to feel inadequate and naive by considerations of technical concepts like validity and reliability. At the same time, teachers have remained skeptical (and rightly so) of assessment practices that do not reflect the values important to an understanding of how people learn to read and write. It does not take a measurement specialist to realize that many writing assessment procedures have missed the mark in examining students' writing ability.

At the core of this inability to communicate are basic theoretical differences between the measurement and composition communities (White, "Language"). Writing assessment procedures, as they have been traditionally constructed, are designed to produce reliable (that is, consistent) numerical scores of individual student papers from independent judges. Traditional writing assessment practices are based upon classical test theory, with roots in a positivist epistemology that assumes "that there exists a reality out there, driven by immutable natural laws" (Guba 19). The assumption is that student ability in writing, as in anything else, is a fixed, consistent, and acontextual human trait.
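Because the argument turns on the narrow technical sense of reliability (consistency of scores across independent judges), a minimal sketch may help readers unfamiliar with measurement vocabulary. One common index is the correlation between two raters' scores on the same set of papers; the Python below is purely illustrative, and the six-point holistic scale and the scores themselves are invented for demonstration:

```python
# Illustrative sketch: inter-rater "reliability" as the correlation
# between two independent raters' scores on the same essays.
# The rubric scale and all scores here are hypothetical.

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Two raters scoring the same ten essays on a 1-6 holistic scale (made up).
rater_a = [4, 3, 5, 2, 6, 4, 3, 5, 2, 4]
rater_b = [4, 2, 5, 3, 5, 4, 4, 5, 2, 3]

print(f"inter-rater correlation: {pearson(rater_a, rater_b):.2f}")
```

A correlation near 1.0 would be read, in classical test theory terms, as evidence of consistent scoring; the composition community's objection, as the essay goes on to argue, is that such consistency by itself says nothing about whether the scores reflect how people actually learn to read and write.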
This article attempts to describe the condition of direct writing assessment literature. Instead of focusing on a particular assessment concept, issue, or methodology, this review reflects the concerns evident within the bulk of work done on writing assessment since its adoption during the past fifteen years. The purpose of this work is to provide an overall sense of how assessment research defines the important issues and creates the trends that seek to inform efficient and accurate writing assessment procedures. Focusing on topic selection and task development, the relationship between textual features and quality ratings, and the influences upon raters' judgments of writing quality, this essay presents direct writing assessment's own preoccupations and concerns. The focus is not only on how direct evaluation has progressed but also on where it is heading. This broad picture of direct writing assessment, available through an examination of its literature, is important to an understanding of direct writing assessment as the primary instrument in making decisions about the quality of student writing.