This paper undertakes a review of the literature on writing cognition, writing instruction, and writing assessment with the goal of developing a framework and competency model for a new approach to writing assessment. The model developed is part of the Cognitively Based Assessments of, for, and as Learning (CBAL) initiative, an ongoing research project at ETS intended to develop a new form of kindergarten through Grade 12 (K-12) assessment that is based on modern cognitive understandings; built around integrated, foundational, constructed-response tasks that are equally useful for assessment and for instruction; and structured to allow multiple measurements over the course of the school year. The model that emerges from a review of the literature on writing places a strong emphasis on writing as an integrated, socially situated skill that cannot be assessed properly without taking into account the fact that most writing tasks involve management of a complex array of skills over the course of a writing project, including language and literacy skills, document-creation and document-management skills, and critical-thinking skills. As such, the model makes strong connections with emerging conceptions of reading and literacy, suggesting an assessment approach in which writing is viewed as calling upon a broader construct than is usually tested in assessments that focus on relatively simple, on-demand writing tasks.
The authors conducted a large‐scale survey to confirm that the writing skills being assessed in the GRE® General Test can be linked to writing tasks that were judged to be important by graduate faculty from a variety of subject areas and across a wide range of institutions at both the graduate and undergraduate levels. The results obtained in this study provide an additional source of validity evidence for using the GRE Analytical Writing Assessment when making admission decisions for graduate school and are also useful in evaluating its relevance for use as an outcomes measure for upper‐division undergraduates.
A comprehensive review was conducted of writing research literature and writing test program activities in a number of testing programs. The review was limited to writing assessments used for admission in higher education. Programs reviewed included ACT, Inc.'s ACT™ program, the California State Universities and Colleges (CSUC) testing program, the College Board's SAT® program, the Graduate Management Admission Test® (GMAT®) program, the Graduate Record Examinations® (GRE®) test program, the Law School Admission Test® (LSAT®) program, the Medical College Admission Test® (MCAT®) program, and the Test of English as a Foreign Language™ (TOEFL®) testing program. Particular attention was given in the review to writing constructs, fairness and group differences, test reliability, and predictive validity. A number of recommendations are made for research on writing assessment.
A study was undertaken to determine the effects on essay scores of intermingling handwritten and word‐processed versions of student essays. A sample of examinees, each of whom had produced both a handwritten and a word‐processed essay, was drawn from a larger sample of students who had participated in a pilot study of item types being considered for the new academic skills assessments of The Praxis Series: Professional Assessments for Beginning Teachers™.
Students' original handwritten essays were converted to word‐processed versions, and their original word‐processed essays were converted to handwritten versions. In a preliminary study, handwritten and word‐processed essays were then intermingled and rescored.
Analyses revealed higher average scores for essays scored in the handwritten mode than for essays scored as word‐processed, regardless of the mode in which essays were originally produced. Several hypotheses were advanced to explain the discrepancies between scores on handwritten and word‐processed essays. The training of essay readers was subsequently modified on the basis of these hypotheses, and the experiment was repeated using the modified training with a new set of readers.
The results of this second study showed an average reduction of about 25% in the discrepancy between scores for essays read in the handwritten and word‐processed modes, compared with the results of the initial study. The effects computed in the second experiment were small by most standards and were predicted to have very little, if any, impact on certification decisions. Nonetheless, a recommendation is made not only to adopt the modified training but also to monitor this effect throughout the operational scoring sessions.