The assessment of writing ability has recently received much attention from educators, legislators, and measurement experts, especially because the writing of students in all disciplines and at all educational levels seems, on the whole, less proficient than the writing produced by students five or ten years ago. The GRE Research Committee has expressed interest in the psychometric and practical issues that pertain to the assessment of writing ability. This paper presents not a new study but a review of major research in light of GRE Board concerns. Specifically, recent scholarship and information from established programs are used to investigate the nature and limitations of essay and multiple-choice tests of writing ability; the statistical relationship between performances on the two types of tests; the performance of population subgroups on each kind of task; the possible need of different disciplines for different tests of composition skill; and the cost and usefulness of various strategies for evaluating writing ability.

The literature indicates that essay tests are often considered more valid than multiple-choice tests as measures of writing ability; certainly they are favored by English teachers. But although essay tests may sample a wider range of composition skills, the variance in essay test scores can reflect such irrelevant factors as speed and fluency under time pressure, or even penmanship. Essay test scores are also typically far less reliable than multiple-choice test scores. When essay test scores are made more reliable through multiple assessments, or when statistical corrections for unreliability are applied, performance on multiple-choice and essay measures can correlate very highly. Multiple-choice measures, however, tend to overpredict the performance of minority candidates on essay tests.
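The two techniques just mentioned have standard psychometric forms: the Spearman-Brown formula estimates how reliability grows when a test is lengthened (for example, when several essays are averaged), and Spearman's correction for attenuation estimates what two measures would correlate if both were perfectly reliable. The sketch below illustrates both with purely hypothetical numbers chosen for illustration; they are not figures from this review.

```python
import math

def spearman_brown(reliability, k):
    """Reliability of a measure lengthened by a factor of k
    (e.g., averaging k essays instead of scoring one)."""
    return k * reliability / (1 + (k - 1) * reliability)

def correct_for_attenuation(r_xy, rel_x, rel_y):
    """Spearman's correction: estimated correlation between the
    two measures' true scores, given the observed correlation
    and each measure's reliability."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical values for illustration only: a single essay with
# reliability 0.50, a multiple-choice test with reliability 0.90,
# and an observed correlation of 0.60 between the two scores.
essay_rel = 0.50
mc_rel = 0.90
observed_r = 0.60

# Averaging three essays raises the essay reliability.
print(round(spearman_brown(essay_rel, 3), 2))            # 0.75

# Correcting the observed correlation for unreliability in both
# measures yields a much higher estimated true-score correlation.
print(round(correct_for_attenuation(observed_r, mc_rel, essay_rel), 2))  # 0.89
```

Under these assumed numbers, a modest observed correlation of .60 becomes roughly .89 after correction, which is the sense in which the two kinds of measures "can correlate very highly" once unreliability is taken into account.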
It is not certain whether multiple-choice tests have essentially the same predictive validity for candidates in different academic disciplines, where writing requirements may vary. Still, at all levels of education and ability, there appears to be a close relationship between performance on multiple-choice and essay tests of writing ability, yet each type of measure contributes unique information to the overall assessment. The best measures of writing ability include both essay and multiple-choice sections, but this design can be prohibitively expensive. Cost-cutting alternatives, such as an unscored or locally scored writing sample, may compromise the quality of the essay assessment. For programs considering an essay writing exercise, a discussion of the costs and uses of different scoring methods is included. The holistic method, although it has little instructional value, offers the cheapest and best means of rating essays for the rank ordering and selection of candidates.